SETI + Apache Spark™

The SETI Institute’s mission is to explore, understand, and explain the origin and nature of life in the universe. The IBM jStart team has joined with the SETI Institute to develop an Apache Spark application that analyzes 100 million radio events detected over several years. The analysis searches these events for faint signals that may betray the presence of intelligent extraterrestrial life. The complex nature of the data demands sophisticated mathematical models to find faint signals, and machine-learning algorithms to separate terrestrial interference from signals of genuine interest.
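To make the signal-search idea concrete, here is a minimal sketch (not the project's actual pipeline) of pulling a faint narrowband tone out of noisy radio samples with a Fourier transform. The sample rate, tone frequency, and amplitude below are illustrative assumptions, chosen so the tone is far weaker than the per-sample noise yet still stands out in the spectrum.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters: 1 second of samples at 1024 Hz,
# with a weak 200 Hz tone buried in unit-variance noise.
fs = 1024.0
t = np.arange(0, 1.0, 1.0 / fs)
tone_hz = 200.0
faint_tone = 0.3 * np.sin(2 * np.pi * tone_hz * t)   # SNR per sample ~ 0.045
samples = faint_tone + rng.normal(0.0, 1.0, t.size)

# The FFT concentrates the tone's energy into a single frequency bin,
# so even a signal invisible in the time series shows a clear peak.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(samples.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

print(f"peak detected at {peak_hz:.1f} Hz")
```

The real analysis is far more involved (drifting signals, many channels, interference rejection), but the same principle — integrating over time to lift a weak, coherent signal above incoherent noise — is at its core.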

This application uses the IPython Notebook service on Apache Spark, deployed on IBM Cloud Data Services (CDS). Data is loaded into the CDS object store in a format that facilitates signal processing and experimentation. Data scientists from the NASA Space Science Division, Penn State, and IBM Research build and refine analytic methodologies using IPython notebooks. These notebooks form a self-documenting repository of signal-processing research that can be collaboratively searched, referenced, and improved.

SETI + Spark Explore Space

Read more about the data collected by SETI here.


  • Graham Mackintosh

