
Apache SystemML

Apache SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans, ranging from single-node, in-memory computations to distributed computations on Apache Hadoop and Apache Spark. It is a project undergoing Incubation at The Apache Software Foundation (ASF).*


Algorithm Customizability

ML algorithms are expressed in an R-like or Python-like syntax that includes linear algebra primitives, statistical functions, and ML-specific constructs. This high-level language significantly increases the productivity of data scientists as it provides (1) full flexibility in expressing custom analytics, and (2) data independence from the underlying input formats and physical data representations. Automatic optimization according to data and cluster characteristics ensures both efficiency and scalability.

Poisson Nonnegative Matrix Factorization in SystemML’s R-like Syntax

# illustrative setup (placeholder values): read input matrix V, initialize factors W and H
V = read($V);
rank = 10;
max_iterations = 100;
W = rand(rows=nrow(V), cols=rank, min=0, max=0.025);
H = rand(rows=rank, cols=ncol(V), min=0, max=0.025);
iter = 0;
while (iter < max_iterations) {
  iter = iter + 1;
  H = (H * (t(W) %*% (V/(W%*%H)))) / t(colSums(W));
  W = (W * ((V/(W%*%H)) %*% t(H))) / t(rowSums(H));
  obj = as.scalar(colSums(W) %*% rowSums(H)) - sum(V * log(W%*%H));
  print("iter=" + iter + " obj=" + obj);
}
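For context on what the loop computes: the obj value printed each iteration is, up to terms that do not depend on W and H, the negative Poisson log-likelihood of V under the factorization W %*% H, which the two multiplicative updates are designed to decrease. Note that as.scalar(colSums(W) %*% rowSums(H)) equals the sum of all entries of W %*% H, so the objective can be written as

obj(W, H) = \sum_{i,j} \left[ (WH)_{ij} - V_{ij} \log (WH)_{ij} \right], \quad W \ge 0,\; H \ge 0.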

Multiple Execution Modes

SystemML computations can be executed in a variety of modes. To begin with, SystemML can be operated in Standalone mode on a single machine, allowing data scientists to develop algorithms locally without the need for a distributed cluster. The same algorithms can then be run distributed on Apache Hadoop MapReduce or Apache Spark, which lets organizations leverage their existing resources and expertise. In addition, SystemML can be invoked programmatically from Java and Scala, and it features an embedded API for scoring models.

# Standalone
./bin/systemml test.dml

# Spark
$SPARK_HOME/bin/spark-submit SystemML.jar -f test.dml -exec hybrid_spark

# Hadoop MapReduce
hadoop jar SystemML.jar -f test.dml -exec hybrid
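
The Java and Scala APIs mentioned above drive the same executions programmatically. The following is a minimal Scala sketch, assuming SystemML's MLContext API (package org.apache.sysml.api.mlcontext) and a spark-shell session with SystemML.jar on the classpath and an existing SparkContext sc:

// Scala sketch: run a DML script through SystemML's MLContext API
import org.apache.sysml.api.mlcontext.MLContext
import org.apache.sysml.api.mlcontext.ScriptFactory._

val ml = new MLContext(sc)            // attach SystemML to the running Spark context
val script = dmlFromFile("test.dml")  // load the DML script used in the commands above
ml.execute(script)                    // compile, optimize, and execute on Spark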

Automatic Optimization

Algorithms specified in DML or PyDML (SystemML's R-like and Python-like declarative languages, respectively) are dynamically compiled and optimized based on data and cluster characteristics, using rule-based and cost-based optimization techniques. The optimizer automatically generates hybrid runtime execution plans ranging from in-memory, single-node execution to distributed computations on Spark or Hadoop, ensuring both efficiency and scalability while reducing or eliminating the need to hand-tune distributed execution plans and system configurations.
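
To see which plan the optimizer chooses for a given script and dataset, the compiled plan and runtime statistics can be printed. A sketch, assuming the setExplain and setStatistics setters of the MLContext API and the ml and script values from the previous example:

// Scala sketch: surface the optimizer's decisions
ml.setExplain(true)      // print the hybrid execution plan chosen for the data and cluster (assumed MLContext setter)
ml.setStatistics(true)   // print runtime statistics after the run (assumed MLContext setter)
ml.execute(script)

The command-line launchers shown earlier expose similar -explain and -stats flags.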


*Apache SystemML is an effort undergoing Incubation at The Apache Software Foundation (ASF), sponsored by the Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

