Deciding about De/Serialization in PySpark Storage Levels

Serialization can save substantial space at the cost of some extra CPU time; by default, PySpark uses the cPickle serializer. (For the general internal design of PySpark, see PySpark_Internals.)
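
To make this concrete, here is a minimal sketch of where the serializer enters the picture. It assumes a local Spark installation; the app name and variable names are made up for illustration. PickleSerializer (backed by cPickle) is already the default, so passing it explicitly is redundant and shown only to highlight the knob:

    from pyspark import SparkContext
    from pyspark.serializers import PickleSerializer

    # PickleSerializer is the default serializer; it is passed
    # explicitly here only for illustration.
    sc = SparkContext("local", "serialization-demo", serializer=PickleSerializer())

    rdd = sc.parallelize(range(1000))
    print(rdd.count())  # 1000
    sc.stop()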

Prior to PySpark 2.0, stored objects were always serialized regardless of whether you chose a serialized level. That means the "deserialized" flag had no effect (as documented below: StorageLevel_configuration). For that reason, the following StorageLevel option pairs had the same effects (see the sketch after this list):

  • MEMORY_AND_DISK and MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_2 and MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY and MEMORY_ONLY_SER
  • MEMORY_ONLY_2 and MEMORY_ONLY_SER_2
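
To see why these pairs collapsed, note that a PySpark StorageLevel is just five fields: StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication). The sketch below (variable names are illustrative) builds the two MEMORY_ONLY variants by hand; they differ only in the "deserialized" flag, which PySpark ignored because Python objects were always pickled first:

    from pyspark import StorageLevel

    # StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication)
    memory_only     = StorageLevel(False, True, False, True, 1)   # deserialized = True
    memory_only_ser = StorageLevel(False, True, False, False, 1)  # deserialized = False

    # The two levels print differently, but since PySpark always pickled
    # the stored objects, their runtime behavior was identical.
    print(memory_only)
    print(memory_only_ser)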

In the latest master branch (after the release of 1.6.x), the changes in Pull Request #10092 reduce the number of exposed storage levels in PySpark. The following levels have been proposed for deprecation:

  • MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY_SER
  • MEMORY_ONLY_SER_2

Note that the storage levels actually available in Python are MEMORY_ONLY, MEMORY_ONLY_2, MEMORY_AND_DISK, MEMORY_AND_DISK_2, DISK_ONLY, DISK_ONLY_2, and OFF_HEAP. All of these remaining options set "deserialized" to false.
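
For example, here is a minimal sketch (assuming an existing SparkContext named sc) that persists an RDD with one of the remaining levels and inspects the result:

    from pyspark import StorageLevel

    rdd = sc.parallelize(range(100000))
    rdd.persist(StorageLevel.MEMORY_AND_DISK)  # pickled in memory, spilling to disk
    rdd.count()                                # materializes and caches the partitions
    print(rdd.getStorageLevel())               # e.g. StorageLevel(True, True, False, False, 1)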

Biography: Dr. Xiao Li is an active Apache Spark committer at the IBM Spark Technology Center. His main interests are Spark, data replication, and data integration. He received his Ph.D. from the University of Florida in 2011. Xiao has published more than eight papers and filed eight patent applications in the field of data management.
