
SETI + Apache Spark™

The SETI Institute’s mission is to explore, understand, and explain the origin and nature of life in the universe. The IBM jStart team has joined with the SETI Institute to develop an Apache Spark application to analyze 100 million radio events detected over several years. The analysis searches these events for faint signals that may betray the presence of intelligent extraterrestrial life. The complex nature of the data demands sophisticated mathematical models to find those faint signals, and machine-learning algorithms to separate terrestrial interference from signals truly of interest.
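As a purely illustrative sketch (the feature names, label column, model choice, and toy data below are invented for this example and are not the team's published methodology), separating terrestrial interference from candidate signals can be framed as a supervised classification task with Spark MLlib:

```python
# Hypothetical sketch: classify radio events as terrestrial interference vs.
# candidate signals with Spark MLlib. Feature names, the label column, and the
# toy data are placeholders; the actual SETI pipeline may differ substantially.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("seti-interference-sketch").getOrCreate()

# Toy stand-in for curated, labeled radio events (is_interference: 1 = likely RFI).
events = spark.createDataFrame(
    [
        (1420.1, 0.02, 25.0, 1.5, 0),
        (1420.3, 0.00, 80.0, 900.0, 1),
        (1419.9, 0.05, 12.0, 2.0, 0),
        (1421.0, 0.00, 60.0, 500.0, 1),
    ],
    ["frequency_mhz", "drift_rate", "snr", "bandwidth_hz", "is_interference"],
)

# Assemble numeric features and train a simple classifier.
assembler = VectorAssembler(
    inputCols=["frequency_mhz", "drift_rate", "snr", "bandwidth_hz"],
    outputCol="features",
)
classifier = RandomForestClassifier(labelCol="is_interference", featuresCol="features")
model = Pipeline(stages=[assembler, classifier]).fit(events)

# Events the model does not flag as interference remain candidates for review.
candidates = model.transform(events).filter("prediction = 0.0")
candidates.select("frequency_mhz", "drift_rate", "snr", "prediction").show()
```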

This application uses the IPython Notebook service on Apache Spark, deployed on IBM Cloud Data Services (CDS). Data is loaded into the CDS object store in a format that facilitates signal processing and experimentation. Data scientists from NASA's Space Science Division, Penn State, and IBM Research build and refine analytic methodologies using IPython notebooks. These notebooks create a self-documenting repository of signal processing research that is collaboratively searched, referenced, and improved.
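As a rough sketch of that notebook workflow (the container name, path, and column names below are hypothetical placeholders, not the project's actual layout), a single notebook cell might pull event data from the object store into a Spark DataFrame for exploration:

```python
# Sketch of a notebook cell: load radio-event data from object storage into a
# Spark DataFrame. The storage path, container name, and column names here are
# illustrative assumptions only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("seti-signal-exploration").getOrCreate()

# Assume the events were exported as Parquet into an object-store container
# reachable through a Hadoop storage connector configured on the cluster.
events = spark.read.parquet("swift://setiEvents.spark/signaldb/")

# Quick sanity checks before experimenting with signal-processing methods.
events.printSchema()
print("event count:", events.count())

# Example exploration: tally events by a (hypothetical) classification column.
events.groupBy("signal_classification").count().orderBy("count", ascending=False).show()
```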

SETI + Spark Explore Space

Read more about the data collected by the SETI Institute here.

Team

  • Graham Mackintosh
