Deciding about De/Serialization in PySpark Storage Levels

Serialization can save substantial space at the cost of some extra CPU time. By default, PySpark uses the cPickle serializer. (The following link explains the general internal design of PySpark: PySpark_Internals.)
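As a rough, self-contained illustration of that space-versus-CPU trade-off (using plain Python pickle directly, not PySpark's own serializer machinery):

```python
import pickle
import sys
import time

# A moderately large Python object: 10,000 boxed ints.
data = list(range(10_000))

# Approximate in-memory footprint: the list itself plus each int object.
raw_size = sys.getsizeof(data) + sum(sys.getsizeof(x) for x in data)

# Serializing costs CPU time but yields a far more compact byte string.
start = time.perf_counter()
pickled = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)
elapsed = time.perf_counter() - start

print(f"in-memory ~{raw_size} bytes, pickled {len(pickled)} bytes, "
      f"serialized in {elapsed:.4f}s")
```

On CPython the pickled form is several times smaller than the boxed in-memory representation, which is why serialized storage levels can fit more partitions in the same cache.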

Prior to PySpark 2.0, stored objects were always serialized, regardless of whether you chose a serialized storage level. That means the "deserialized" flag had no effect (as documented below: StorageLevel_configuration). For that reason, the following pairs of StorageLevel options had identical effects:

  • MEMORY_AND_DISK, MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_2, MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY, MEMORY_ONLY_SER
  • MEMORY_ONLY_2, MEMORY_ONLY_SER_2

In the latest master branch (after the release of 1.6.x), the changes in Pull Request #10092 reduce the number of exposed storage levels in PySpark. The following levels have been proposed for deprecation:

  • MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY_SER
  • MEMORY_ONLY_SER_2

Note that the storage levels actually available in Python are MEMORY_ONLY, MEMORY_ONLY_2, MEMORY_AND_DISK, MEMORY_AND_DISK_2, DISK_ONLY, DISK_ONLY_2 and OFF_HEAP. All of these remaining options set "deserialized" to false.
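To make the flag concrete, here is a minimal stand-alone sketch of the five flags a StorageLevel carries (useDisk, useMemory, useOffHeap, deserialized, replication). The constant names mirror PySpark's memory/disk levels listed above, but nothing here imports pyspark, so treat the flag values as an illustrative assumption rather than the library's definitions:

```python
from collections import namedtuple

# The five flags a Spark StorageLevel carries; field names mirror the
# real API, but this sketch does not depend on pyspark being installed.
StorageLevel = namedtuple(
    "StorageLevel",
    ["useDisk", "useMemory", "useOffHeap", "deserialized", "replication"])

# The Python-side memory/disk levels, with "deserialized" fixed to False.
MEMORY_ONLY       = StorageLevel(False, True,  False, False, 1)
MEMORY_ONLY_2     = StorageLevel(False, True,  False, False, 2)
MEMORY_AND_DISK   = StorageLevel(True,  True,  False, False, 1)
MEMORY_AND_DISK_2 = StorageLevel(True,  True,  False, False, 2)
DISK_ONLY         = StorageLevel(True,  False, False, False, 1)
DISK_ONLY_2       = StorageLevel(True,  False, False, False, 2)

levels = [MEMORY_ONLY, MEMORY_ONLY_2, MEMORY_AND_DISK,
          MEMORY_AND_DISK_2, DISK_ONLY, DISK_ONLY_2]

# Every remaining Python-side level keeps deserialized=False, which is
# exactly why the *_SER variants became redundant in PySpark.
assert all(not lvl.deserialized for lvl in levels)
print("all levels store data serialized (deserialized=False)")
```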

Biography: Dr. Xiao Li is an active Apache Spark committer from the IBM Spark Technology Center. His main interests are in Spark, data replication, and data integration. He received his Ph.D. from the University of Florida in 2011. Xiao has over eight papers and eight patent applications in the field of data management.
