
PixieDust: Easy access and visualizations for Jupyter notebooks in the cloud

PixieDust is an open-source Python helper library that works as an add-on to Jupyter notebooks to improve the user experience of working with data. It also provides extra capabilities that fill a gap when the notebook is hosted in the cloud and the user has no access to configuration files. (Check out the project's GitHub repository.)

Its current capabilities include:

  • packageManager. Lets you install Apache Spark™ packages inside a Python notebook. This is something you can't do today on hosted Jupyter notebooks, a gap that prevents developers from using a large number of Spark package add-ons. You can install Spark packages or plain jars into your notebook's Python kernel without modifying a configuration file:
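For example, a notebook cell that pulls in the GraphFrames Spark package might look like the sketch below. The installPackage and printAllPackages calls follow the packageManager documentation, the Maven coordinates are illustrative, and the kernel typically needs a restart after the first install:

import pixiedust

# Download a Spark package by its Maven coordinates (groupId:artifactId:version)
# and make it available to the notebook's Spark kernel
pixiedust.installPackage("graphframes:graphframes:0.1.0-spark1.6")

# List everything packageManager has installed so far
pixiedust.printAllPackages()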


  • Visualizations. A single API, display(), lets you visualize your Spark object in different ways: table, charts, maps, and so on. The module is designed to be extensible, providing an API that lets anyone easily contribute a new visualization plugin.
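For instance, a minimal sketch (the sample DataFrame is purely illustrative; sqlContext is the SQLContext that a Spark 1.6 notebook kernel already provides):

import pixiedust

# Build a small Spark DataFrame to visualize
df = sqlContext.createDataFrame(
    [("NYC", 8400000), ("London", 8900000), ("Paris", 2200000)],
    ["city", "population"])

# A single call; choose table, chart, or map interactively in the output area
display(df)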


  • Export. Easily download data as .csv, HTML, JSON, etc. to your laptop, or save it to a variety of back-end data sources, like Cloudant, dashDB, GraphDB, Object Storage, and so on.


  • Scala Bridge. Use Scala directly in your Python notebook. Variables are automatically transferred from Python to Scala and vice versa:

First define the variable in Python...

pythonVar = "pixiedust"

Then in Scala...

%%scala
val demo = com.ibm.cds.spark.samples.StreamingTwitter  
demo.setConfig("twitter4j.oauth.consumerKey",”XXXXX")  
demo.setConfig("twitter4j.oauth.consumerSecret",”XXXXX")  
demo.setConfig("twitter4j.oauth.accessToken",”XXXXX")  
demo.setConfig("twitter4j.oauth.accessTokenSecret",”XXXXX")  
demo.setConfig("watson.tone.url","https://watsonplatform.net/tone-analyzer/api")  
demo.setConfig("watson.tone.password",”XXXXX")  
demo.setConfig("watson.tone.username",”XXXX”)

import org.apache.spark.streaming._  
demo.startTwitterStreaming(sc, Seconds(10))

println(pythonVar)

val __fromScalaVar = "Hello from Scala"

And back to Python to use the Scala variable...

print(__fromScalaVar)  
  • Extensibility. Create your own visualizations using the PixieDust extensibility APIs. If you know HTML, CSS, and JavaScript, you can write and deliver amazing graphics without forcing notebook users to type a single line of code. Use the shape of the data to control when PixieDust shows your visualization in a menu.


For example, a sample visualization plugin can use d3 to show the different flight routes for each airport.
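As a bare-bones skeleton of such a plugin (the class and decorator names below are taken from the PixieDust extensibility documentation and should be read as a sketch, not verified API; the menu entry and HTML are purely illustrative):

from pixiedust.display.display import *

class HelloVizDisplay(Display):
    def doRender(self, handlerId):
        # Emit arbitrary HTML/CSS/JavaScript into the notebook output area
        self._addHTMLTemplateString("<div>Hello from a custom PixieDust visualization</div>")

@PixiedustDisplay()
class HelloVizMeta(DisplayHandlerMeta):
    @addId
    def getMenuInfo(self, entity, dataHandler):
        # Use the shape of the data to decide whether to offer this visualization
        if entity.__class__.__name__ == "DataFrame":
            return [{"categoryId": "Table", "title": "Hello Viz",
                     "icon": "fa-table", "id": "helloViz"}]
        return []

    def newDisplayHandler(self, options, entity):
        return HelloVizDisplay(options, entity)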


  • Embed Applications. Encapsulate your analytics into compelling user interfaces better suited for line-of-business users:
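One way to do this is the PixieApp programming model; as a rough illustration (the class name, route, and markup below are purely illustrative):

from pixiedust.display.app import *

@PixieApp
class HelloDashboard():
    @route()
    def main_screen(self):
        # Any HTML/CSS/JavaScript returned here is rendered in the notebook output
        return """<div>Hello from an embedded PixieDust application</div>"""

# Instantiate and run the app inside the notebook cell output
HelloDashboard().run()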


Note: PixieDust currently works with Spark 1.6 and Python 2.7.

Note: PixieDust currently supports Spark DataFrames, Spark GraphFrames, and Pandas DataFrames, with more to come. If you can't wait, write your own today and contribute it back.
