
Jupyter + Apache Spark™ + Mesos: An "Opinionated" Docker Image

This post originally appeared on KDnuggets.


Project Jupyter has been steadily growing in popularity since it was spun off from IPython in 2014. It currently boasts over 200 committers and over 9,000 commits to its notebook repository on GitHub, not to mention 597 Jupyter-tagged questions on Stack Overflow. (See the post co-written by Peter Parente and Gino Bustelo for more about what makes Jupyter exciting.)

Despite this popularity and traction, there was a general feeling in the open-source community that Jupyter didn’t play well with Apache Spark™; in particular, users had the impression that you couldn’t access Spark through a Jupyter notebook. That perception struck IBM’s Emerging Technology (ET) team as strange. After all, Spark is like any other library that coders would want to use in a notebook.
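To make that point concrete, here is a minimal sketch of what using Spark from a notebook cell looks like. It assumes only that pyspark is importable in the notebook’s Python environment; the app name and numbers are illustrative.

    # Minimal sketch: Spark used from a notebook cell like any other library.
    # Assumes pyspark is on the notebook's Python path.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "notebook-demo")      # local mode, all available cores
    squares = sc.parallelize(range(1000)).map(lambda x: x * x)
    print(squares.sum())                                # 332833500
    sc.stop()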

The ET team reached out to the community to propose and discuss the idea of offering a set of Docker-based stacks for Jupyter, including one to combine Jupyter and Spark right out of the gate. The community took up the challenge and within a week the effort was underway. The different images combine various pieces for convenience. These came to be called “opinionated” Docker images since rather than keeping Jupyter perfectly agnostic, the images bolted together technology that the ET team and the community knew would fit well — and that they hoped would make life easier.

For the Jupyter+Spark “all-spark-notebook” image, Apache Mesos was added to handle cluster management for Spark. Incorporating Mesos meant users could pull the image and reasonably expect to set up an enterprise-level architecture with it. Of course, they don’t have to use Mesos if they don’t want to; the Docker container still supports connecting to a Spark standalone cluster (as described here), but if users do decide to deploy Spark on Mesos, the container can simply connect to the Mesos cluster without extra configuration.
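As a rough illustration (not the image’s official documentation), connecting a notebook to a Mesos-managed Spark cluster mostly comes down to pointing the Spark master URL at the Mesos master. The hostname below is a placeholder, and a real deployment may need additional settings such as spark.executor.uri, depending on how Spark is distributed to the cluster.

    # A rough sketch of connecting a notebook to Spark on Mesos.
    # The Mesos master address is a placeholder; extra options (e.g.
    # spark.executor.uri) may be required in a real deployment.
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("mesos://mesos-master.example.com:5050")  # or spark://host:7077 for standalone
            .setAppName("jupyter-on-mesos")
            .set("spark.executor.memory", "1g"))

    sc = SparkContext(conf=conf)
    print(sc.parallelize(range(100)).count())   # quick smoke test against the cluster
    sc.stop()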

The Jupyter community knew that the images would be handy — and they hoped the images would be discovered and used by other coders — but they were in for a surprise. When the team checked back last month, they saw that the Jupyter+Spark+Mesos container had been pulled over 150K times. (It’s currently up over 168K.) At first, they thought it might be a bot pulling the image again and again, but then noticed that the other images that had been posted weren’t getting pulled nearly as much. In fact, the all-spark-notebook container was getting pulled ten times as much as any of the other posted images. A nice feeling. :-)

Now, with the help of the community, the containers continue to get updated and maintained. The Jupyter community invites you to pull the Spark image (or the others) to try them out. Feel free to post any issues:

Repo: https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook

Issues: https://github.com/jupyter/docker-stacks/issues

