

Standardization in Cluster Analysis

SydneyF
Alteryx Alumni (Retired)

In statistics, standardization (sometimes called data normalization or feature scaling) refers to the process of rescaling the values of the variables in your data set so they share a common scale. Often performed as a pre-processing step, particularly for cluster analysis, standardization is important when your variables are measured in different units (e.g., inches, meters, tons, and kilograms), or when their scales differ widely (e.g., 0-1 vs. 0-1000). Standardization matters so much in cluster analysis because clusters are defined based on the distance between points in mathematical space.

When you are working with data where each variable means something different (e.g., age and weight), the fields are not directly comparable. One year is not equivalent to one pound, and the two may or may not carry the same importance when sorting a group of records. When one field has a much wider range of values than another, the distances between its values tend to be larger, so it can end up being the primary driver of what defines the clusters. Standardization helps to make the relative weight of each variable equal by converting each variable to a unitless measure or relative distance.
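To make that effect concrete, here is a minimal sketch (not from the original article) showing how a field with a wider range dominates the Euclidean distance between two records, and how z-score standardization evens that out. The age and weight values are made-up illustrations.

import numpy as np

# Two hypothetical records with an age (years) and a weight (pounds) field.
a = np.array([25.0, 150.0])   # [age, weight]
b = np.array([30.0, 250.0])

# Raw Euclidean distance is dominated by the weight field, simply because
# its values span a much wider range than age does.
raw_dist = np.linalg.norm(a - b)

# Z-score standardization, using stats from a small illustrative sample.
sample = np.array([[25.0, 150.0],
                   [30.0, 250.0],
                   [45.0, 180.0],
                   [60.0, 210.0]])
mean = sample.mean(axis=0)
std = sample.std(axis=0)
a_std, b_std = (a - mean) / std, (b - mean) / std

# After standardization, both fields contribute on a comparable scale.
std_dist = np.linalg.norm(a_std - b_std)

print(f"raw distance:          {raw_dist:.2f}")
print(f"standardized distance: {std_dist:.2f}")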

What follows are a couple of examples demonstrating how standardization may impact a clustering solution, using the 2015 US Census Demographic Dataset, downloaded from Kaggle. This dataset includes demographic variables for counties in the United States, including population, race, income, poverty, commute distance, and commute method, as well as variables describing employment.

In our first example, we are interested in performing cluster analysis on Total Population and Mean Commute Time. We would like to use these two variables to split all of the counties into two groups. The units (number of people vs. minutes) and the ranges of values (85 to 10,038,388 people vs. 5 to 45 minutes) of these attributes are very different. It is also worth noting that Total Population is a sum, while Mean Commute Time is an average.

When we create clusters with the raw data, we see that Total Population is the primary driver of dividing these two groups. There is an apparent population threshold used to divide the data into two clusters:

[Figure: clusters on raw Total Population and Mean Commute Time (TotalPopulationMeanCommuteRaw.png)]

However, after standardization, both Total Population and Mean Commute seem to have an influence on how the clusters are defined.

[Figure: clusters on standardized Total Population and Mean Commute Time (TotalPopulationMeanCommuteStd.png)]
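If you would like to reproduce this comparison outside of Designer, the sketch below shows one way to do it with scikit-learn. It assumes you have the Kaggle census CSV saved locally; the file name and the TotalPop/MeanCommute column names are assumptions, so adjust them to match your copy of the data.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Assumed local copy of the 2015 census data set from Kaggle;
# adjust the path and column names to your file.
df = pd.read_csv("acs2015_county_data.csv")
X = df[["TotalPop", "MeanCommute"]].dropna()

# Clusters on the raw values: TotalPop's huge range tends to dominate.
raw_labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)

# Clusters on z-score standardized values: both fields now contribute.
X_std = StandardScaler().fit_transform(X)
std_labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X_std)

# Compare how counties move between the two cluster assignments.
print(pd.crosstab(raw_labels, std_labels,
                  rownames=["raw clusters"], colnames=["standardized clusters"]))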

In this next example, we are interested in clustering on Median Income and Percent of the Population that is Native American (by county). Median Income is measured in dollars and represents the "middle" household income for a given county, and Percent Native American is the share of that county's total population that is Native American. Again, the units and ranges of these variables are very different from one another.

When we perform cluster analysis with these two variables without first standardizing, we see that the clusters are primarily split on Income. Because Income is measured in dollars, its values are separated by much greater distances than percentages are, so it becomes the dominant variable.

[Figure: clusters on raw Median Income and Percent Native American (IncomePctNative.png)]

When we standardize the data prior to performing cluster analysis, the clusters change. We find that with more equal scales, the Percent Native American variable more significantly contributes to defining the clusters.

[Figure: clusters on standardized Median Income and Percent Native American (IncomePctNativeSTD.png)]

Standardization prevents variables with larger scales from dominating how clusters are defined. It allows all variables to be considered by the algorithm with equal importance.

There are a few different options for standardization, but two of the most frequently used are z-score and unit interval (a short code sketch of both follows this list):

  1. Z-score transforms data by subtracting the mean value of each field from the values of the field and then dividing by the standard deviation of the field, resulting in data with a mean of zero and a standard deviation of one.
  2. Unit interval is calculated by subtracting the minimum value of the field and then dividing by the range of the field (maximum minus minimum), which results in a field with values ranging from 0 to 1.
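Both transformations are simple to express in code. The sketch below implements each formula on a small made-up income field; it is illustrative only, not the exact routine Designer's tools use internally.

import numpy as np

def z_score(x):
    """Subtract the field's mean and divide by its standard deviation."""
    return (x - x.mean()) / x.std()

def unit_interval(x):
    """Subtract the field's minimum and divide by its range (max - min)."""
    return (x - x.min()) / (x.max() - x.min())

# Illustrative field of median incomes (values are made up).
income = np.array([32000.0, 45000.0, 51000.0, 76000.0, 120000.0])

print(z_score(income))       # mean of ~0, standard deviation of 1
print(unit_interval(income)) # values rescaled to the 0-1 range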

Although standardization is considered best practice for cluster analysis, there are circumstances where standardization may not be appropriate for your data (e.g., Latitude and Longitude). If you'd like to read more, there are a few great discussions on this topic on the Statistics and Data Science forums of Stack Exchange, as well as the academic article Standardization and Its Effects on K-Means Clustering Algorithm by Ismail Bin Mohamad and Dauda Usman.

As always, the golden rule is to know thy data. Only you will know if standardization is right for your use case.

Comments
YingLi
Alteryx

Content itself is helpful but lots of words run together (spacing issue).  Maybe some spaces got lost when publishing this content, e.g. Unit intervalis calculated. 

lepome
Alteryx Alumni (Retired)

@YingLi 

Thank you for mentioning that.  It's an intermittent problem.  I'm trying to find all of the affected articles so that we can troubleshoot and fix them.