Running Apache Zeppelin in a K8s cluster and integrating it with a YARN cluster

A useful way to implement a CI/CD pipeline is to package code as a Docker image and run it in a K8s cluster. One very practical application for data analytics is the notebook-based tool Apache Zeppelin. Every business department requires its own Zeppelin configuration, so the idea is to build a Docker container for each department and run them all in the K8s cluster.

Apache Zeppelin uses Spark as its computational engine for big data, so you need to submit Spark jobs remotely to the YARN cluster. A detailed description of how to do this is here.
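
For illustration, here is a minimal sketch of a remote YARN submission from Python, assuming pyspark is installed in the Zeppelin container and the cluster's Hadoop/YARN configuration files have been copied into it; the /opt/hadoop/conf path and the app name are placeholders.

```python
import os
from pyspark.sql import SparkSession

# The YARN client picks up the cluster configuration from these directories;
# they must contain the Hadoop/YARN config files copied from the cluster
# into the Zeppelin image (/opt/hadoop/conf is just a placeholder path).
os.environ["HADOOP_CONF_DIR"] = "/opt/hadoop/conf"
os.environ["YARN_CONF_DIR"] = "/opt/hadoop/conf"

# "yarn" as master plus the default client deploy mode sends the job
# to the remote YARN cluster while the driver stays in this process.
spark = (
    SparkSession.builder
    .master("yarn")
    .appName("remote-yarn-smoke-test")
    .getOrCreate()
)

print(spark.range(10).count())  # trivial job to verify the submission works
```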

Unfortunately, this approach cannot be applied directly. Starting Spark from a Docker container in K8s (from Zeppelin) and submitting remotely to YARN has a few issues that need to be solved.

Let’s start with an overview of the physical deployment. Apache Zeppelin runs in K8s inside a pod with a virtual IP address (ip_k8s_pod_cni) assigned by the container network interface. This pod runs on a physical machine with its own IP (ip_k8s_host_lan). YARN is part of a cluster with its own IP range that belongs to the LAN. Let’s assume that one node in this cluster has the address ip_yarn_node_lan.

Assume that we start Spark on YARN with deploy mode cluster. In this case the Spark driver is started directly in the YARN cluster and gets an IP there (ip_yarn_node_lan). The problem with this setup is that the Spark driver has to talk back to Apache Zeppelin: a callback connection between ip_yarn_node_lan and ip_k8s_pod_cni would have to be established, but there is no way to specify this in Spark, since the pod's CNI address is not routable from the LAN.

Now let’s submit Spark on YARN with deploy mode client. In this case the Spark driver is started directly in the K8s pod, and it has to connect to the Spark workers (executors) running in the YARN cluster. This looks similar to the previous case: we have to establish a connection between ip_k8s_pod_cni (the IP of the Spark driver) and ip_yarn_node_lan (the IP of a Spark worker). But here Spark provides two additional parameters: spark.driver.host and spark.driver.bindAddress. The workers connect to spark.driver.host, where we put ip_k8s_host_lan; from there the connection is routed to ip_k8s_pod_cni, which we specify in spark.driver.bindAddress.
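
A sketch of how these two parameters can be set from a PySpark session; the addresses below are placeholders for the ones described above, and in Zeppelin the same properties would typically go into the Spark interpreter settings.

```python
from pyspark.sql import SparkSession

# Placeholder addresses standing in for the ones described above.
ip_k8s_host_lan = "10.1.2.3"     # LAN IP of the K8s node running the Zeppelin pod
ip_k8s_pod_cni = "172.16.0.15"   # pod IP assigned by the CNI plugin

spark = (
    SparkSession.builder
    .master("yarn")              # client deploy mode by default
    .appName("zeppelin-in-k8s")
    # Address advertised to the executors: they connect back to the driver here.
    .config("spark.driver.host", ip_k8s_host_lan)
    # Address the driver actually binds to inside the pod.
    .config("spark.driver.bindAddress", ip_k8s_pod_cni)
    .getOrCreate()
)
```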

One open question is ports. The ports of the Spark driver have to be exposed in Docker, made available through the K8s service, and Spark has to know about them. Spark has a parameter, spark.driver.port, where we can put a port number, for example 18080. If this port is busy, Spark checks the next one, 18081, and so on. So we can expose 5 ports in Docker and in the K8s service: 18080, 18081, 18082, 18083 and 18084. This solves the port problem for the Spark driver.
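
A sketch of the corresponding settings on the Spark side, assuming the five ports above are published both by the Docker image and by the K8s service; spark.port.maxRetries caps how many consecutive ports Spark tries on a conflict, so 4 retries keep it inside the exposed range.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("yarn")
    .appName("zeppelin-in-k8s")
    # First port the driver tries to bind.
    .config("spark.driver.port", "18080")
    # On a conflict Spark increments the port by 1 and retries;
    # 4 retries keep it within the exposed range 18080-18084.
    .config("spark.port.maxRetries", "4")
    .getOrCreate()
)
```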

There is one drawback to this approach: we use ip_k8s_pod_cni, which changes every time the pod is restarted. We are currently investigating how to use the IP of the corresponding K8s service instead.

One more approach to submitting Spark jobs remotely is to use Apache Livy. Apache Livy runs inside the YARN cluster and provides a REST API for submitting Spark jobs. The drawback of Livy is dependency handling: all jars already have to be available in the YARN cluster.
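
For illustration, a sketch of a batch submission through Livy's REST API, assuming Livy listens on its default port 8998; the host name, jar path and class name are placeholders, and the jar has to be reachable from the YARN cluster (e.g. on HDFS), which is exactly the dependency drawback mentioned above.

```python
import json
import requests

LIVY_URL = "http://livy.example.com:8998"   # placeholder Livy endpoint

payload = {
    "file": "hdfs:///jobs/spark-job.jar",   # must already be visible to the YARN cluster
    "className": "com.example.SparkJob",    # placeholder main class
    "args": ["2019-01-01"],
}

# POST /batches starts a batch Spark job on the cluster.
resp = requests.post(
    f"{LIVY_URL}/batches",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
print(resp.json())   # returns the batch id and its state, e.g. "starting"
```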