Scoring Heart Disease with Apache Spark™ (Part 2/3)

This is part 2 of a practical guide to scoring health data with Apache Spark™. We'll post the final part of the guide in the coming weeks. (See part 1 here.)

As a reminder, these posts take inspiration from an R tutorial here.

The guide is divided into four parts:

  1. Objective description of the study and data
  2. Data preparation and initial analysis
  3. Construction and validation of the score
  4. Interpretation of results

For the code I reference, visit the full github repository here.

Let's continue with step 3: searching for meaningful explanatory variables...

3. Search for meaningful explanatory variables

Let's try to identify possible correlations between descriptors, starting with a look at our age variable (df below refers to the DataFrame prepared in part 1):

z.show(df.groupBy('age, 'chd).count.orderBy('age))


This is a bit hard to read, so we can use Zeppelin's plotting functionality to take a better look at our variable, using the histogram view.


This is still meaningless. We can open the visualization settings to choose which Keys, Groups, and Values to display. For the age variable, we will choose age as the key, chd (the target variable) for groups, and count(sum) for values.


Now let's do the same for sbp, alcohol, tobacco, ldl, and obesity:



Take care when conducting graph analysis for the purpose of detecting possible collinearities. Alcohol consumption and tobacco quantity seem to be distributed in the same way, as do cholesterol and obesity.
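To put a number on the "distributed in the same way" suspicion, one can compute a Pearson correlation coefficient between the two columns. This is a plain-Scala sketch of my own on illustrative values, not the tutorial's Spark code (Spark ML also exposes correlation computations directly):

```scala
// Plain-Scala sketch: Pearson correlation between two columns, to
// quantify a suspected collinearity. The sample values below are
// illustrative, not the study's data.
def pearson(xs: Seq[Double], ys: Seq[Double]): Double = {
  require(xs.length == ys.length && xs.nonEmpty)
  val n  = xs.length
  val mx = xs.sum / n
  val my = ys.sum / n
  val cov = xs.zip(ys).map { case (x, y) => (x - mx) * (y - my) }.sum
  val sx  = math.sqrt(xs.map(x => (x - mx) * (x - mx)).sum)
  val sy  = math.sqrt(ys.map(y => (y - my) * (y - my)).sum)
  cov / (sx * sy)
}

// Perfectly linearly related columns give a coefficient of 1.0.
pearson(Seq(1.0, 2.0, 3.0), Seq(2.0, 4.0, 6.0))
```

A coefficient close to +1 or -1 between two descriptors is a hint that keeping both in the model adds little information.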

Another analysis tool is to plot a point cloud for every pair of variables, optionally coloring the points according to the target variable. I use seaborn with some helper visualization scripts to support such plotting. The scripts are available on the project github page under notebooks.

Pair plot: (figure)

Qplot: (figure)

4. Outliers and missing values

Outliers depend on the distribution and on the object of the study. In the literature, the treatment of missing values and outliers is sometimes the subject of endless discussion, which practitioners should keep in mind. In general, it is easier to decide what counts as an outlier with some domain knowledge. Let's look again at the basic statistics:
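If we did want a mechanical rule to complement domain knowledge, the usual interquartile-range convention is a reasonable starting point. The helper below is my own plain-Scala sketch, not part of the tutorial, and the 1.5 factor is the common convention rather than anything specific to this data set:

```scala
// Plain-Scala sketch of the classic IQR rule: values beyond 1.5 * IQR
// from the quartiles are flagged as outliers. The quantile helper uses
// simple nearest-index lookup; good enough for an exploratory check.
def iqrOutliers(values: Seq[Double]): Seq[Double] = {
  val sorted = values.sorted
  def quantile(q: Double): Double =
    sorted(((sorted.length - 1) * q).round.toInt)
  val q1  = quantile(0.25)
  val q3  = quantile(0.75)
  val iqr = q3 - q1
  val (lo, hi) = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)
  values.filter(v => v < lo || v > hi)
}

iqrOutliers(Seq(1.0, 2.0, 3.0, 4.0, 100.0))  // Seq(100.0)
```

On a spread-out but legitimate distribution such as tobacco consumption, a rule like this would flag many valid points, which is exactly why we prefer domain knowledge here.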

famhist and chd are both categorical variables, so we will drop them from the statistical description:

z.show(df.drop("chd").drop("famhist").describe())


The distribution of tobacco consumption is very spread out, as is that of alcohol. The other distributions seem rather consistent. So, given these distributions, we do nothing to those values for now.

This data set contains neither missing values nor visible outliers.

5. Discretize or not?

This is a common issue in exploratory analysis: should we discretize continuous variables?

In the data set, the following variables could potentially be discretized:

  • age
  • tobacco
  • sbp
  • alcohol
  • obesity
  • ldl
  • adiposity
  • typea

Should we discretize continuous variables? Yes, mostly. But how? In line with the target variable? Based on business knowledge? By quantiles of the distribution?

There's no definitive answer. Which method you choose will generally depend on the problem, and on how much time you want to spend. Always remember: consider your actual results and don't hesitate to reconsider (and reverse) any cutting you do.
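Quantile cutting, the approach we use below via Spark's QuantileDiscretizer, is simple enough to sketch by hand. This plain-Scala helper (my own, for illustration) computes cut points at the empirical quantiles and maps each value to a bucket index:

```scala
// Sketch of quantile-based discretization: compute numBuckets - 1 cut
// points from the empirical quantiles, then map each value to the
// 0-based index of the bucket it falls into.
def quantileBuckets(values: Seq[Double], numBuckets: Int): Seq[Int] = {
  val sorted = values.sorted
  val splits = (1 until numBuckets).map { i =>
    sorted(((sorted.length - 1) * i.toDouble / numBuckets).round.toInt)
  }
  values.map(v => splits.count(_ <= v))
}

quantileBuckets(Seq(1.0, 2.0, 3.0, 4.0, 5.0, 6.0), 3)  // Seq(0, 0, 1, 2, 2, 2)
```

The buckets are roughly equally populated by construction, which is the main appeal of quantile cutting when no business thresholds are available.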

Let's consider the variable age, which is the simplest to discretize. Heart issues are not uniformly common across age groups, as shown here. Let's discretize the variables age and tobacco, distinguishing between light, medium, and heavy smokers.

With Spark we can perform both actions separately or chain them using a Pipeline from Spark ML (more on that in the final part).

For now, we will just use the QuantileDiscretizer for that purpose, as follows:

import org.apache.spark.ml.feature.QuantileDiscretizer
import org.apache.spark.sql.types.DoubleType

val ageDiscretizer = new QuantileDiscretizer()
  .setInputCol("age")
  .setOutputCol("age_discret")

val casted = encoded.withColumn("age", $"age".cast(DoubleType))
val result = ageDiscretizer.fit(casted).transform(casted)
z.show(result.orderBy($"age".asc))


As you've noticed, we converted our age variable into a double precision floating point format. Otherwise, we'd get the following error:

scala.MatchError: [52] (of class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema)  

(Which is a rather cryptic message, if you ask me.)

We will perform the same action on the tobacco variable (no need to cast here).

val tobaccoDiscretizer = new QuantileDiscretizer()
  .setInputCol("tobacco")
  .setOutputCol("tobacco_discret")
  .setNumBuckets(3)

val result2 = tobaccoDiscretizer.fit(result).transform(result)
z.show(result2.orderBy($"tobacco_discret".asc))


The category of people under 15 years old is not at all representative in the sample, nor is the category of those under 30.

Half of people over 45 are suffering from a heart problem.
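Statements like these come from per-bucket rates of the target variable. A minimal sketch of that aggregation, using made-up (ageBucket, chd) pairs rather than the real data:

```scala
// Per-bucket rate of the target variable: group rows by bucket and take
// the mean of the 0/1 chd flag. The input pairs are illustrative.
def chdRateByBucket(rows: Seq[(String, Int)]): Map[String, Double] =
  rows.groupBy(_._1).map { case (bucket, rs) =>
    bucket -> rs.map(_._2).sum.toDouble / rs.length
  }

chdRateByBucket(Seq(("45+", 1), ("45+", 0), ("<30", 0), ("<30", 0)))
```

In Spark the same computation is a groupBy on the bucket column followed by an average over chd.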

One could assume that heart problems are simply hereditary, but we find that smoking has a real influence: the rate of heart problems is significantly higher among smokers, regardless of the amount of tobacco they consume.

This initial analysis indicates that:

  • It is not very useful to keep individuals under 15 years old in the sample: age clearly plays a role, and the model we develop would not be calibrated to predict the likelihood of developing heart disease for that group.
  • We can see some descriptive results here, but we will have to confirm them in the modeling phase. Our baseline will be:
val baseline = step2.filter($"age">15).drop("age").drop("tobacco").drop("chd").drop("famhist")  


We'll pause there. In the final part of this post, we'll talk about sampling, modeling, model validation, and more.

