
How the Apache Spark SQL Analyzer Resolves Order-By Columns

The Apache Spark SQL component has several sub-components, including the Analyzer, which plays an important role in making sure that the logical plan is fully resolved at the end of the analysis phase. The Analyzer takes a parsed logical plan as input and makes sure that all the table references, attribute/column references, and function references are resolved by looking up metadata in the catalogs. It works by applying a set of rules to the logical plan, transforming it at each stage to resolve specific portions of the plan.
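These intermediate plans are easy to inspect from a Spark shell through a Dataset's QueryExecution. As a minimal illustration (the table name people here is a placeholder, not part of the example that follows):

// Inspect the plans before and after analysis (Scala, from spark-shell).
val df = spark.sql("select name from people")
df.queryExecution.logical   // parsed (unresolved) logical plan: the Analyzer's input
df.queryExecution.analyzed  // fully resolved logical plan: the Analyzer's output
df.explain(true)            // prints parsed, analyzed, optimized, and physical plans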

We’ll examine the workings of the Analyzer through an example defect, describing how we addressed the problem.

Example Query:

select a as a1, c as a2, count(a) as a3 from tab group by a, c order by a1, c
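For readers who want to follow along, here is one way to set up a matching table in spark-shell. The five-column local relation is an assumption chosen to line up with the plans shown below:

import spark.implicits._  // pre-imported in spark-shell

// A small local table whose columns match LocalRelation [a,b,c,d,e] in the plans below.
Seq((1, 2, 3, 4, 5)).toDF("a", "b", "c", "d", "e").createOrReplaceTempView("tab")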

Problem Description:

In this case, the Analyzer was unable to resolve the attributes referenced in the ORDER BY clause. To see why, let’s look at the underlying parsed logical plan.
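Because Dataset creation runs the Analyzer eagerly, spark.sql() fails outright on a build without the fix. The parsed plan can still be obtained straight from the parser (sessionState is an internal API, used here only for illustration):

// Parse without analyzing, then print the plan tree.
val parsed = spark.sessionState.sqlParser.parsePlan(
  "select a as a1, c as a2, count(a) as a3 from tab group by a, c order by a1, c")
println(parsed.treeString)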

Parsed Logical Plan:

'Sort ['a1 ASC,'c ASC], true
+- 'Aggregate ['a,'c], ['a AS a1#17,'c AS a2#18, (count('a),mode=Complete,isDistinct=false) AS a3#19]
   +- LocalRelation [a#1,b#2,c#3,d#4,e#5]

In this plan, only the LocalRelation is resolved. None of the other operators are resolved, since the attributes they refer to are not yet resolved. Note that the Sort operator sits above the Aggregate operator, and the attributes referenced by the Sort operator were being resolved only from the output of its immediate child (the Aggregate operator). In the plan above, the output of the Aggregate operator consists of a1#17, a2#18, and a3#19, which does not include the attribute c#3 referenced by the Sort operator. That causes the analysis to fail, which in turn makes the query fail.

In order to properly resolve the Sort operator, we need to make sure that:

  • 'a1 in the Sort is resolved from its immediate child (the Aggregate)
  • 'c in the Sort is resolved from its grandchild (the LocalRelation), as sketched below
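The following is a self-contained toy sketch of that two-step lookup. The types below are stand-ins invented for illustration, not Spark's Catalyst classes, and the real ResolveAggregateFunctions rule operates on full Catalyst expressions rather than plain names:

object SortResolutionSketch {
  final case class Attr(name: String)                   // a resolved column or expression
  final case class Aliased(child: Attr, alias: String)  // e.g. c AS a2

  final case class Aggregate(
      grouping: Seq[Attr],      // group by a, c
      aggExprs: Seq[Aliased],   // a AS a1, c AS a2, count(a) AS a3
      childOutput: Seq[Attr])   // a, b, c, d, e from the LocalRelation

  // Resolve each ORDER BY name in terms of the Aggregate's output.
  def resolveSortKeys(orderBy: Seq[String], agg: Aggregate): Seq[String] =
    orderBy.map { name =>
      agg.aggExprs.find(_.alias == name) match {
        case Some(hit) => hit.alias  // step 1: resolved from the immediate child ('a1 -> a1)
        case None =>                 // step 2: look through to the grandchild's attributes
          val attr = agg.childOutput.find(_.name == name)
            .getOrElse(sys.error(s"cannot resolve '$name'"))
          // If the attribute already flows through the Aggregate under an alias,
          // ordering by that alias is equivalent ('c -> a2 here). Otherwise the
          // real rule pushes it into the Aggregate and strips it again with a
          // Project on top.
          agg.aggExprs.find(_.child == attr).map(_.alias).getOrElse(attr.name)
      }
    }

  def main(args: Array[String]): Unit = {
    val agg = Aggregate(
      grouping = Seq(Attr("a"), Attr("c")),
      aggExprs = Seq(
        Aliased(Attr("a"), "a1"),
        Aliased(Attr("c"), "a2"),
        Aliased(Attr("count(a)"), "a3")),
      childOutput = Seq("a", "b", "c", "d", "e").map(Attr))

    println(resolveSortKeys(Seq("a1", "c"), agg))  // List(a1, a2) -- matches the fixed plan
  }
}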

In the Spark Analyzer, the ResolveAggregateFunctions rule was modified to resolve the Sort operator properly. After the fix, the query produces the following analyzed logical plan:

Project [a1#14,a2#15,a3#16L]
+- Sort [a1#14 ASC,a2#15 ASC], true
   +- Aggregate [a#1,c#3], [a#1 AS a1#14,c#3 AS a2#15, (count(a#1),mode=Complete,isDistinct=false) AS a3#16L]
      +- LocalRelation [a#1,b#2,c#3,d#4,e#5]
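Note how the rule also adds a Project on top to restore the original SELECT list, and how the sort key 'c was resolved to a2#15 (since a2 is just c under an alias, ordering by either is equivalent). On a Spark build that includes the fix, this shape can be confirmed directly, reusing the tab view from the setup above:

val fixed = spark.sql(
  "select a as a1, c as a2, count(a) as a3 from tab group by a, c order by a1, c")
println(fixed.queryExecution.analyzed.treeString)  // Project <- Sort <- Aggregate <- LocalRelation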

Conclusion:

Hopefully this blog gives some brief insight into the workings of the Analyzer; we’ll post a more extended description of it in the future. In general, handling Analyzer issues requires a deep understanding of Spark logical plans.

About the Author:

Dilip Biswal is a senior software engineer at the Spark Technology Center at IBM. He is an active Apache Spark contributor and works in the open source community. He is experienced in relational databases, distributed computing, and big data analytics, and has worked extensively on SQL engines such as Informix, Derby, and Big SQL.
