
Announcing Apache Toree

I’m pleased to announce that in late 2015, Apache Spark Kernel was accepted by Apache as an incubator project. As part of this transition, the Apache Spark Kernel project was renamed Toree. As an Apache incubator project, Toree will continue to focus on enabling interactive workloads between applications and a Spark cluster.

But, what are interactive workloads and what are these applications?

I find it easier to explain interactive workloads by first explaining what they are not. Let’s take Hadoop Map/Reduce as an example. Applications and end users typically run a Hadoop Map/Reduce job by

  1. taking some code,
  2. packaging it up,
  3. submitting the package to a JobServer,
  4. monitoring the state of the job,
  5. waiting for the job to complete, and then
  6. reading the results from wherever the job stored them.
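Sketched in Python, the loop above looks something like this. This is a minimal sketch: the helper name and the submit command are hypothetical stand-ins for whatever your job server actually expects (e.g. a `spark-submit` invocation).

```python
import subprocess
import time
from pathlib import Path

def run_batch_job(submit_cmd, result_path, poll_interval=1.0):
    """Submit a packaged batch job, wait for it to finish, then read its results.

    submit_cmd is whatever the job server expects, e.g. a hypothetical
    ["spark-submit", "--class", "com.example.MyJob", "myjob.jar"].
    """
    # steps 2-3: submit the packaged job
    proc = subprocess.Popen(submit_cmd)
    # steps 4-5: monitor the job's state and wait for it to complete
    while proc.poll() is None:
        time.sleep(poll_interval)
    if proc.returncode != 0:
        raise RuntimeError(f"job failed with exit code {proc.returncode}")
    # step 6: read the results from wherever the job stored them
    return Path(result_path).read_text()
```

Every run pays the full packaging, submission, and storage round trip, which is exactly the friction that makes this style of workload a poor fit for exploratory work.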

This style of workload is referred to as a batch job. In many cases, a sequence of batch jobs is connected by reading from and writing to a persistent store, such as HDFS. This is a very common and valid workload for many use cases, and it is one that is also often applied to Spark.

When it comes to applications that enable data exploration and ad hoc analytics, batch workloads introduce a lot of friction. For example, packaging and submitting jobs, as well as reading from and writing to storage, introduce unwanted latency. Spark alleviates some of this friction by holding data in memory and by using a SparkContext to tie jobs together. What is missing from Spark is a way for applications to interact with a long-lived SparkContext. This is where Toree comes into play.

Toree provides a communication channel between an application and a SparkContext that allows access to the entire Spark API in a variety of languages, including Scala, Python, R and soon JavaScript. Through this channel, the application interacts with Spark by exchanging code and data.
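Concretely, Toree is a Jupyter kernel, so “exchanging code” over this channel means sending messages in the Jupyter kernel messaging protocol. A minimal sketch of an `execute_request` message, the message an application sends to have the kernel run a snippet of code (field values here are illustrative):

```python
import json
import uuid
from datetime import datetime, timezone

def execute_request(code):
    """Build a Jupyter-protocol execute_request message for a snippet of code."""
    return {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "username": "example",              # illustrative value
            "session": uuid.uuid4().hex,
            "date": datetime.now(timezone.utc).isoformat(),
            "msg_type": "execute_request",
            "version": "5.3",                    # protocol version
        },
        "parent_header": {},
        "metadata": {},
        "content": {
            "code": code,                        # the code the kernel will run
            "silent": False,
            "store_history": True,
            "user_expressions": {},
            "allow_stdin": False,
        },
    }

# A Scala snippet an application might send to a Toree kernel:
msg = execute_request("sc.parallelize(1 to 10).sum()")
print(json.dumps(msg["content"], indent=2))
```

The kernel replies with result and output messages over the same channel, which is how an application gets data back from the long-lived SparkContext without any intermediate storage.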

The Jupyter Notebook is a great example of an application that, with the help of Toree, can provide the full power of Spark to its end users. Toree can be installed into Jupyter out of the box to enable support for Spark.

I look forward to growing the Toree community, both users and contributors to the project. I’m very interested in seeing what new types of applications you build with Toree. Visit the site and join the community.

