GPU Support in Apache Spark and GPU/CPU Mixed Resource Scheduling at Production Scale

GPUs are increasingly used in a broad range of applications, such as machine learning, image processing, and risk analytics, to achieve higher performance and lower costs (including energy footprint). Meanwhile, Apache Spark has become a very popular distributed application framework for data processing and complex analytics.

Yonggang Hu (IBM Distinguished Engineer) and Junfeng Liu (IBM Architect) share the slides from their recent Spark Summit talk, which looked at GPU and Spark usage through real-world examples in cognitive computing and in risk analytics for financial services, and further explained the synergy between GPU and Spark.

Finally, the talk discusses the challenges of using GPUs in production Spark applications and the enhancements needed to manage and utilize both CPU and GPU resources:

  • Support Spark MLlib on GPU
  • Introduce a GPU-specific resource group to facilitate mixed GPU/CPU resource management, e.g. identifying GPU-sensitive stages and tasks in the Spark DAG scheduler
  • Enable GPU Python and Scala APIs that let end users declare the computing logic they want to run on GPU devices, including shuffle configuration for transferring data between GPU and CPU memory
  • Enable mixed GPU and CPU resource scheduling in Spark for better resource utilization
  • Build both slot-based and multi-dimensional solutions for mixed scheduling to maximize resource usage (see the first sketch after this list)
  • Fail-back capability: if the GPU fails to finish the work, it should fall back seamlessly to the CPU (see the second sketch after this list)
  • Speed-up capability: try GPU resources on long-tail straggler tasks
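The talk describes IBM-internal enhancements, so as an illustration only: the sketch below uses the declarative GPU resource scheduling that open-source Spark later added in the 3.x line (the spark.executor.resource.gpu.amount and spark.task.resource.gpu.amount settings plus TaskContext.resources()), which captures the same slot idea of carving GPU shares out of a mixed CPU/GPU cluster. The kernel launch itself is a placeholder.

```scala
// Illustrative sketch of slot-style mixed GPU/CPU scheduling. The config
// keys and TaskContext.resources() come from open-source Spark 3.x and are
// used here only to make the idea concrete; they are not the IBM
// enhancements described in the talk. Running this for real also requires
// the cluster to expose GPUs via a resource discovery script.
import org.apache.spark.TaskContext
import org.apache.spark.sql.SparkSession

object MixedSchedulingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("mixed-gpu-cpu")
      // One GPU per executor; each task claims a quarter of it, so four
      // GPU tasks can share an executor alongside ordinary CPU tasks.
      .config("spark.executor.resource.gpu.amount", "1")
      .config("spark.task.resource.gpu.amount", "0.25")
      .getOrCreate()

    val data = spark.sparkContext.parallelize(1 to 1000, numSlices = 8)

    val sums = data.mapPartitions { iter =>
      // Each task can discover which GPU address the scheduler assigned it.
      val gpus = TaskContext.get().resources().get("gpu")
      val device = gpus.map(_.addresses.head).getOrElse("none")
      // A real implementation would launch a kernel on `device`; here we
      // just tag the partial sum with the assigned slot.
      Iterator((device, iter.sum))
    }

    sums.collect().foreach { case (dev, s) => println(s"gpu=$dev sum=$s") }
    spark.stop()
  }
}
```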
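The fail-back idea can be sketched independently of any particular scheduler API: attempt the device path and fall back to an equivalent CPU implementation when it fails, so the task still completes. Here gpuDotProduct is a hypothetical placeholder for a real device kernel, not an API from the talk.

```scala
// Minimal sketch of GPU-to-CPU fail-back inside a Spark job.
// gpuDotProduct is a hypothetical stand-in for a real kernel launch.
import org.apache.spark.sql.SparkSession

object GpuFallbackSketch {
  // Placeholder for a user-declared GPU kernel that may fail on the
  // device (e.g., out of device memory); it would throw in that case.
  def gpuDotProduct(a: Array[Float], b: Array[Float]): Float =
    a.zip(b).map { case (x, y) => x * y }.sum // placeholder math only

  // Equivalent CPU implementation used as the fallback path.
  def cpuDotProduct(a: Array[Float], b: Array[Float]): Float =
    a.zip(b).map { case (x, y) => x * y }.sum

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("gpu-fallback").getOrCreate()
    val pairs = spark.sparkContext.parallelize(
      Seq.fill(8)((Array.fill(1024)(1.0f), Array.fill(1024)(2.0f))))

    // Fail-back: try the GPU path per record; if the device call throws,
    // compute the same result on the CPU so the job never stalls on a
    // failed or missing GPU.
    val results = pairs.map { case (a, b) =>
      try gpuDotProduct(a, b)
      catch { case _: Exception => cpuDotProduct(a, b) }
    }

    results.collect().foreach(println)
    spark.stop()
  }
}
```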

See the slides on SlideShare
