Deciding about De/Serialization in PySpark Storage Levels

Serialization can save substantial space at the cost of some extra CPU time; by default, PySpark uses the cPickle serializer. (The following link explains the general internal design of PySpark: PySpark_Internals.)
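As a quick illustration, here is a minimal sketch of swapping in an alternative serializer when creating a SparkContext. The `serializer` argument and the `pyspark.serializers` classes are standard PySpark APIs; the app name is just a placeholder:

```python
from pyspark import SparkContext
from pyspark.serializers import MarshalSerializer

# PySpark pickles Python objects before caching or shipping them.
# PickleSerializer (cPickle-backed) is the default; MarshalSerializer
# is faster for simple types but supports fewer Python objects.
sc = SparkContext("local", "serializer-demo", serializer=MarshalSerializer())
print(sc.parallelize(range(10)).map(lambda x: x * 2).collect())
sc.stop()
```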

Prior to PySpark 2.0, the stored objects were always serialized regardless of whether you chose a serialized level; the flag “deserialized” had no effect (as documented here: StorageLevel_configuration). For that reason, the following pairs of StorageLevel options had identical effects, as the short sketch after the list illustrates:

  • MEMORY_AND_DISK, MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_2, MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY, MEMORY_ONLY_SER
  • MEMORY_ONLY_2, MEMORY_ONLY_SER_2
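A minimal sketch of that pre-2.0 behavior, assuming a local SparkContext (the printed repr may vary slightly between versions): persisting with a nominally deserialized level still stores pickled bytes, and the level's `deserialized` flag reads back as false.

```python
from pyspark import SparkContext, StorageLevel

sc = SparkContext("local", "storage-level-demo")
rdd = sc.parallelize(range(1000))

# MEMORY_ONLY and MEMORY_ONLY_SER were equivalent here: the Python
# data is pickled before caching either way.
rdd.persist(StorageLevel.MEMORY_ONLY)

level = rdd.getStorageLevel()
print(level)               # the deserialized flag shows as False
print(level.deserialized)  # False

sc.stop()
```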

In the latest master branch (after the 1.6.x releases), the changes in Pull Request #10092 reduce the number of storage levels exposed in PySpark. The following levels have been proposed for deprecation:

  • MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY_SER
  • MEMORY_ONLY_SER_2

Note that the storage levels still available in Python are MEMORY_ONLY, MEMORY_ONLY_2, MEMORY_AND_DISK, MEMORY_AND_DISK_2, DISK_ONLY, DISK_ONLY_2 and OFF_HEAP. All of these remaining options set “deserialized” to false.
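A minimal sketch for checking those flags yourself; StorageLevel is a plain Python class, so no SparkContext is required:

```python
from pyspark import StorageLevel

# Every remaining Python storage level keeps 'deserialized' False,
# because PySpark always caches pickled bytes.
for name in ("MEMORY_ONLY", "MEMORY_ONLY_2", "MEMORY_AND_DISK",
             "MEMORY_AND_DISK_2", "DISK_ONLY", "DISK_ONLY_2", "OFF_HEAP"):
    level = getattr(StorageLevel, name)
    print(name, "-> deserialized =", level.deserialized)
```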

Biography: Dr. Xiao Li is an active Apache Spark committer at the IBM Spark Technology Center. His main interests are Spark, data replication and data integration. He received his Ph.D. from the University of Florida in 2011. Xiao has published over eight papers and filed eight patent applications in the field of data management.
