Why 1.5 Is Used in the IQR Rule for Outlier Detection

The 1.5 IQR rule flags any data point that falls more than 1.5 times the IQR below the first quartile or more than 1.5 times the IQR above the third quartile as an outlier. Here’s why the multiplier is 1.5.

Written by Shivam Chaudhary
Published on Jan. 24, 2024

While working with a student of mine on an assignment applying the interquartile range (IQR) method of outlier detection, he asked me an intriguing question: 

“Why 1.5 times IQR? Why not one or two or any other number?”

The IQR method of outlier detection dictates that any data point in a boxplot that’s more than 1.5 IQR below the first quartile or more than 1.5 IQR above the third quartile is considered an outlier. Whether you’re familiar with the rule or not, I hope the question makes you think about it. After all, isn’t that what good data scientists do? Question everything, believe nothing.

Why Do You Multiply by 1.5 in IQR Outlier Detection?

The interquartile range (IQR) method of outlier detection uses 1.5 as its scale because, for normally distributed data, the resulting decision range most closely matches the 3σ cutoff of the Gaussian distribution. The method dictates that any data point more than 1.5 IQR below the first quartile or more than 1.5 IQR above the third quartile is an outlier.

In the most general sense, an outlier is a data point that differs significantly from other observations. Its meaning can be interpreted according to the statistical model under study, but for the sake of simplicity, and so as not to stray too far from the main purpose of this post, we’ll consider first-order statistics on a very simple data set, without any loss of generality.

 

What Is the IQR Method of Outlier Detection?

To explain the IQR method easily, let’s start with a boxplot.

A boxplot tells us, more or less, about the distribution of the data. It gives a sense of how spread out the data actually is, what its range is and how skewed it is. Because it’s built on ordered data, a boxplot lets us draw inferences about the various metrics of the data arranged in ascending order.

A boxplot labels the minimum on one side, the maximum on the other and the median in the middle. That means:

  • Minimum is the minimum value in the data set.
  • Maximum is the maximum value in the data set.

So, the difference between the two tells us about the range of a data set.

  • The median (or center point), also called the second quartile, is the middle value of the ordered data.
  • Q1 is the first quartile of the data, which is to say 25 percent of the data lies between the minimum and Q1.
  • Q3 is the third quartile of the data, which is to say 75 percent of the data lies between the minimum and Q3.

The difference between Q3 and Q1 is called the interquartile range or IQR.

IQR = Q3 - Q1

To detect the outliers using this method, we define a new range, let’s call it the decision range. Any data point lying outside this range is considered an outlier and is accordingly dealt with. The range is as given below:

Lower Bound: (Q1 - 1.5 * IQR)
Upper Bound: (Q3 + 1.5 * IQR)

Any data point less than the “Lower Bound” or more than the “Upper Bound” is considered an outlier.
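Here’s a minimal sketch of that decision range in NumPy; the sample array and the `iqr_outliers` helper are made up for illustration:

```python
import numpy as np

def iqr_outliers(data, scale=1.5):
    """Flag points outside [Q1 - scale*IQR, Q3 + scale*IQR]."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lower = q1 - scale * iqr   # Lower Bound
    upper = q3 + scale * iqr   # Upper Bound
    mask = (data < lower) | (data > upper)
    return data[mask], (lower, upper)

data = np.array([2, 3, 5, 5, 6, 7, 8, 9, 10, 50])
outliers, (lower, upper) = iqr_outliers(data)
print(outliers)  # only the extreme value 50 falls outside the fences
```

For this toy data set, Q1 = 5, Q3 = 8.75, so the fences land at -0.625 and 14.375, and only 50 is flagged.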


 

How the 1.5 IQR Rule Works

But the question was: Why only 1.5 times the IQR? Why not any other number?

Well, as you might have guessed, the number 1.5, hereafter the scale, clearly controls the sensitivity of the range and hence the decision rule. A bigger scale would make true outliers be treated as ordinary data points, while a smaller one would make ordinary data points be perceived as outliers. Neither case is desirable.

This is an intuitive way of explaining the reason, but it’s naive nonetheless. So where should we turn for something firmer?

Math, of course! 

Things are gonna get a bit “math-y,” but I’ll try to keep it minimal.

You might be surprised if I tell you that this number, or scale, depends on the distribution followed by the data.

For example, let’s say our data follows Gaussian distribution.

You’ve all seen what a Gaussian distribution looks like, right? If not, picture the classic bell curve of the normal distribution.

There are certain observations we can infer from that bell curve:

  • About 68.27 percent of the data lies within one standard deviation (±1σ) of the mean (μ), taking both sides into account.
  • About 95.45 percent of the data lies within two standard deviations (±2σ) of the mean (μ), taking both sides into account.
  • About 99.73 percent of the data lies within three standard deviations (±3σ) of the mean (μ), taking both sides into account.
  • The remaining 0.27 percent of the data lies beyond three standard deviations (>3σ) from the mean (μ), across both tails, and this part of the data is considered outliers.
  • The first and third quartiles, Q1 and Q3, lie at -0.675σ and +0.675σ from the mean, respectively.
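These figures are easy to verify numerically. A quick check using SciPy’s standard normal (the percentages above are rounded versions of what this prints):

```python
from scipy.stats import norm

# Quartiles of the standard normal: Q1 and Q3 sit near -0.675 and +0.675.
q1, q3 = norm.ppf(0.25), norm.ppf(0.75)
print(round(q1, 3), round(q3, 3))  # -0.674 0.674

# Mass within 1, 2 and 3 standard deviations of the mean.
for k in (1, 2, 3):
    pct = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} sigma: {100 * pct:.2f}%")  # ≈ 68.27, 95.45, 99.73
```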

There are calculations behind these inferences but that’s beyond the scope of this article. 

A tutorial on how to determine an outlier with the IQR rule. | Video: Nicole Hamilton


 

Why Is 1.5 Used in the IQR Rule? 

To understand why 1.5 is the scale, let’s look at how scales of one and two impact outlier detection.

 

Scale 1 in IQR Outlier Detection

Taking scale = 1:

Lower Bound:
= Q1 - 1 * IQR
= Q1 - 1 * (Q3 - Q1)
= -0.675σ - 1 * (0.675 - [-0.675])σ
= -0.675σ - 1 * 1.35σ
= -2.025σ
Upper Bound:
= Q3 + 1 * IQR
= Q3 + 1 * (Q3 - Q1)
= 0.675σ + 1 * (0.675 - [-0.675])σ
= 0.675σ + 1 * 1.35σ
= 2.025σ

So, when the scale is taken as 1, then according to the IQR method, any data point that lies beyond 2.025σ from the mean (μ), on either side, is considered an outlier. But as we know, the data is useful up to 3σ on either side of μ. So we can’t take scale = 1, because this makes the decision range too exclusive, resulting in too many outliers. In other words, the decision range gets so small (compared to 3σ) that it treats some genuine data points as outliers, which is not desirable.

 

Scale 2 in IQR Outlier Detection

Taking scale = 2:

Lower Bound:
= Q1 - 2 * IQR
= Q1 - 2 * (Q3 - Q1)
= -0.675σ - 2 * (0.675 - [-0.675])σ
= -0.675σ - 2 * 1.35σ
= -3.375σ
Upper Bound:
= Q3 + 2 * IQR
= Q3 + 2 * (Q3 - Q1)
= 0.675σ + 2 * (0.675 - [-0.675])σ
= 0.675σ + 2 * 1.35σ
= 3.375σ

When the scale is taken as 2, then according to the IQR method, any data that lies beyond 3.375σ from the mean (μ), on either side, is considered an outlier. As we know, the data is useful up to 3σ on either side of μ. So we cannot take scale = 2, because this makes the decision range too inclusive, resulting in too few outliers. In other words, the decision range gets so big (compared to 3σ) that it treats some outliers as data points, which is not desirable either.

 

Scale 1.5 in IQR Outlier Detection

Taking scale = 1.5:

Lower Bound:
= Q1 - 1.5 * IQR
= Q1 - 1.5 * (Q3 - Q1)
= -0.675σ - 1.5 * (0.675 - [-0.675])σ
= -0.675σ - 1.5 * 1.35σ
= -2.7σ
Upper Bound:
= Q3 + 1.5 * IQR
= Q3 + 1.5 * (Q3 - Q1)
= 0.675σ + 1.5 * (0.675 - [-0.675])σ
= 0.675σ + 1.5 * 1.35σ
= 2.7σ

When the scale is taken as 1.5, then according to the IQR method, any data that lies beyond 2.7σ from the mean (μ), on either side, is considered an outlier. This decision range is the closest to what the Gaussian distribution tells us, i.e., 3σ. In other words, this makes the decision rule closest to what the Gaussian distribution considers for outlier detection, and this is exactly what we wanted.
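The three derivations above can be reproduced in a few lines. A sketch using SciPy, expressing each fence in units of σ for a standard normal along with the fraction of perfectly normal data each scale would flag:

```python
from scipy.stats import norm

Q3 = norm.ppf(0.75)        # ≈ +0.675 sigma
IQR = Q3 - norm.ppf(0.25)  # ≈ 1.35 sigma

for scale in (1.0, 1.5, 2.0):
    bound = Q3 + scale * IQR      # upper fence, in units of sigma
    flagged = 2 * norm.sf(bound)  # two-tailed fraction of normal data flagged
    print(f"scale={scale}: fence at {bound:.3f} sigma, flags {100 * flagged:.3f}%")
```

With scale = 1 roughly 4.3 percent of perfectly normal data gets flagged; with scale = 2 almost nothing does (about 0.07 percent); scale = 1.5 lands near 0.7 percent, just outside the 3σ convention.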

To get exactly 3σ, we’d need to take scale ≈ 1.7. However, 1.5 is more “symmetrical” than 1.7, and we’ve always been a little more inclined towards symmetry, haven’t we?

Also, the IQR method is neither the only nor the best method of outlier detection, so some trade-off is acceptable.

See how beautifully and elegantly it all unfolded using math. I just love how things become clearer and take shape when perceived through mathematics. And this is one of the many reasons why math is the language of our world (not sure about the universe though).

Now you know why we take 1.5 * IQR. But this scale depends on the distribution the data follows. If the data seemed to follow an exponential distribution, this scale would change.
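As a quick illustration of that distribution dependence, the sketch below compares how much data the standard 1.5 fence flags on the upper side for a standard normal versus a standard exponential distribution (both distributions chosen purely for illustration):

```python
from scipy.stats import expon, norm

for name, dist in (("normal", norm), ("exponential", expon)):
    q1, q3 = dist.ppf(0.25), dist.ppf(0.75)
    upper = q3 + 1.5 * (q3 - q1)  # upper fence with scale = 1.5
    share = dist.sf(upper)        # fraction of the data above the fence
    print(f"{name}: {100 * share:.2f}% above the upper fence")
```

The same 1.5 fence that flags about 0.35 percent of normal data on the upper side flags nearly 5 percent of exponential data, so a heavily skewed distribution calls for a different scale.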

But again, every complication that arises because of mathematics is solved by mathematics itself.

Ever heard of the central limit theorem? Yes, the very same theorem that grants us the liberty to assume the distribution is Gaussian without any guilt. With that, I hope you’ve found this article useful.
