Apache Spark™ 2.0: New and Noteworthy in the Latest Release

Since its original launch in 2009, Apache Spark™ has made phenomenal strides, driven in large part by a passionate open-source community. That community has now come together to offer Apache Spark™ 2.0, and with this latest release, the codebase takes a huge leap forward.



To give you a sense of the scope and scale of this release:


By our latest count, Apache Spark™ 2.0 includes 2,590 JIRAs (new features and bug fixes) from 309 contributors worldwide.
Here at the Spark Technology Center, we track detailed metrics for each release. For example, did you know that the average lifecycle of a JIRA (creation to resolution) in the 2.0 release was 63 days? And do you know how that compares to the 1.6 release?


Christian Kadner of the Spark Technology Center team has been digging deep into the Git logs and JIRA metrics, and will share an update soon.
In fact, over the next few weeks, the Spark Technology Center team will share analyses of many of the significant features of Apache Spark™ 2.0, along with their outlook on each.



The Spark Technology Center focuses its efforts on expanding Spark's core technology to make it enterprise- and cloud-ready, with the aim of accelerating the business value of Spark and driving intelligence into business applications. With our growing pool of contributors (50 team members worldwide, including two committers), we've crunched out over 422 commits to Spark 2.0 in the areas of Spark Core, SparkR, SQL, MLlib, Streaming, PySpark, and more. You can always see the latest at http://jiras.spark.tc.

All this amounts to over 18,600 lines of new code in the 2.0 release. Our largest contribution is in Spark SQL, with over 10,200 lines of new code, followed by machine learning (Spark ML and PySpark), with over 6,900 lines of new code.



Here are some of the Spark SQL features to which the Spark Technology Center made major contributions. (Detailed blog posts on these will follow.)



  • Comprehensive native SQL parser
  • Native support for DDL commands (illustrated in the sketch below)
  • Native view support with SQL generation
  • Enhanced Catalyst Analyzer/Optimizer
  • Native support for bucketed tables
  • Enhanced error handling and test coverage
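
To make the first few items concrete, here is a minimal sketch (ours, not from the release notes) of native DDL and view handling in 2.0. It assumes a local Spark 2.0 build and Scala; the table and view names are invented for illustration.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("NativeSqlSketch")
      .master("local[*]")  // local mode, purely for illustration
      .getOrCreate()

    // CREATE TABLE is parsed and executed by Spark's own SQL parser and
    // catalog in 2.0, with no Hive dependency required.
    spark.sql(
      "CREATE TABLE IF NOT EXISTS events (id INT, name STRING) USING parquet")

    // Temporary views are handled natively as well; for persistent views,
    // 2.0 stores a canonical SQL form generated from the analyzed plan.
    spark.sql(
      "CREATE OR REPLACE TEMPORARY VIEW event_names AS SELECT name FROM events")

    spark.sql("SELECT * FROM event_names").show()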

We are also actively involved in the following features:

  • New SparkSession replacing the SQLContext (see the sketch after this list)
  • Whole-stage code generation
  • DataFrame/Dataset API
  • Additional SQL:2003 compliance support
  • Subquery enhancements
  • Vectorized Parquet decoder (reader)
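
And a second minimal sketch (again ours, with invented data) of the first three items, assuming the same local Spark 2.0 setup:

    import org.apache.spark.sql.SparkSession

    // SparkSession is the single entry point in 2.0, subsuming SQLContext
    // and HiveContext.
    val spark = SparkSession.builder()
      .appName("Spark2Sketch")
      .master("local[*]")  // local mode, purely for illustration
      .getOrCreate()

    import spark.implicits._

    // The DataFrame and Dataset APIs are unified in 2.0: a DataFrame is
    // simply a Dataset[Row].
    val people = Seq(("Ann", 32), ("Bob", 15)).toDF("name", "age")
    val adults = people.filter($"age" >= 18).groupBy($"age").count()

    // In the explain() output, operators fused by whole-stage code
    // generation are prefixed with an asterisk (*).
    adults.explain()
    adults.show()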

Check out the Apache Spark 2.0 release notes.


We are already hearing positive feedback on the performance improvements of 2.0, and excitement about the new capabilities. And here at the Spark Technology Center, we are continuing on our journey to make Apache Spark™ the Analytics Operating System. 



Sincere acknowledgements to the extended Spark Technology Center team for pulling together the content, editing, and reviews of this blog. 

See also:

Machine Learning in Apache Spark 2.0: Under the Hood and Over the Rainbow, by Nick Pentreath

Coming soon:

Berni Schiefer on Spark SQL performance in Spark 2.0, and Prashant Sharma's insights and recent work on Structured Streaming in Spark 2.0.
