GPU Support in Apache Spark and GPU/CPU Mixed Resource Scheduling at Production Scale

GPUs are increasingly used across a broad range of applications, such as machine learning, image processing, and risk analytics, to achieve higher performance and lower costs and energy footprints. At the same time, Apache Spark has become a very popular distributed framework for data processing and complex analytics.

Yonggang Hu (IBM Distinguished Engineer) and Junfeng Liu (IBM Architect) share the slides from their recent Spark Summit talk, which looks at GPU and Spark usage through real-world examples in cognitive computing and risk analytics in financial services, and explains the synergy between GPUs and Spark.

Finally, the talk discusses the challenges of using GPUs in production Spark applications and the enhancements needed to manage and utilize both CPU and GPU resources:

  • Support for running Spark MLlib on GPUs
  • A GPU-specific resource group to facilitate mixed GPU/CPU resource management, e.g. identifying GPU-sensitive stages and tasks in the Spark DAG scheduler
  • GPU-aware Python and Scala APIs that let end users declare the computing logic they want to run on GPU devices, including shuffle configuration for transferring in-memory data between GPU and CPU
  • Mixed GPU and CPU resource scheduling in Spark for better resource utilization
  • Slot-based and multi-dimensional scheduling solutions for mixed workloads to maximize resource usage
  • Fail-back capability: if the GPU fails to finish a job, the work seamlessly falls back to the CPU
  • Speed-up capability: speculatively trying GPU resources on long-tail tasks
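The slides do not include code, but the difference between slot-based and multi-dimensional scheduling can be sketched in a few lines. The function and resource names below are illustrative assumptions, not Spark APIs: slot-based scheduling counts uniform task slots and cannot see that a node has no free GPU, while multi-dimensional scheduling checks every resource dimension before placing a task.

```python
from typing import Dict

def slot_based_fit(free_slots: int, demand_slots: int) -> bool:
    """Slot-based scheduling: each task consumes one uniform slot,
    regardless of how much CPU, GPU, or memory it actually needs."""
    return demand_slots <= free_slots

def multi_dimensional_fit(free: Dict[str, float], demand: Dict[str, float]) -> bool:
    """Multi-dimensional scheduling: a task fits only if every
    resource dimension (CPU cores, GPUs, memory) it asks for is free."""
    return all(demand.get(dim, 0.0) <= free.get(dim, 0.0) for dim in demand)

# A node with spare CPU and memory but no free GPU (values are made up):
node = {"cpu": 8.0, "gpu": 0.0, "mem_gb": 32.0}
gpu_task = {"cpu": 1.0, "gpu": 1.0, "mem_gb": 4.0}
cpu_task = {"cpu": 2.0, "mem_gb": 8.0}

print(slot_based_fit(free_slots=4, demand_slots=1))  # True: slots hide the missing GPU
print(multi_dimensional_fit(node, gpu_task))         # False: no GPU available
print(multi_dimensional_fit(node, cpu_task))         # True: the CPU-only task still fits
```

The example shows why a slot-only scheduler can strand a GPU task on a GPU-less node, and why tracking each dimension separately leads to better utilization of a mixed cluster.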
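The fail-back idea from the list above can likewise be sketched as a small wrapper, assuming hypothetical `gpu_impl`/`cpu_impl` callables rather than any actual Spark interface: attempt the GPU path first, and if it raises (device unavailable, out of memory, etc.), rerun the same work on the CPU so the job still completes.

```python
import logging

def run_with_failback(task, gpu_impl, cpu_impl):
    """Try the GPU implementation first; on any failure, seamlessly
    fall back to the CPU implementation for the same task."""
    try:
        return gpu_impl(task)
    except Exception as exc:
        logging.warning("GPU execution failed (%s); falling back to CPU", exc)
        return cpu_impl(task)

# Toy implementations: the GPU path fails, the CPU path succeeds.
def gpu_sum(xs):
    raise RuntimeError("CUDA out of memory")  # simulated GPU failure

def cpu_sum(xs):
    return sum(xs)

print(run_with_failback([1, 2, 3], gpu_sum, cpu_sum))  # 6
```

The same wrapper shape covers the speed-up case in reverse: for long-tail tasks already running on CPU, a scheduler can speculatively launch a GPU attempt and keep whichever finishes first.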

See the slides on SlideShare
