In 11th grade I took my high school’s statistics class. When we learned about linear regression I raised my hand and asked why we were minimizing the sum of the squares of the residuals rather than, say, the sum of the absolute values. If my memory serves right, my teacher said that minimizing the sum of the absolute values would also be reasonable, but that absolute values are annoying to deal with so we square the residuals instead.
This has vaguely bothered me ever since: it seemed like linear regression — a tool applied extensively throughout the sciences — was based on an arbitrary choice. But in a conversation with my friend Mike a few months ago I learned that the choice is far from arbitrary.1
Recall the normal distribution. It is probably the most well-known probability distribution, and for good reason: it appears everywhere in the real world, from the heights of adult human males to annual rainfall totals. If you don’t know anything about how a quantity is distributed, the normal distribution is a reasonable guess.
The choice of minimizing the sum of the squares of residuals follows from the assumption that residuals off of a line of best fit are normally distributed. Specifically, they are assumed to be independent and normally distributed with the same standard deviation regardless of the value of the predictor.2
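In symbols: if we write the line of best fit as $y = ax + b$, this model says that

$$y_i = a x_i + b + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),$$

with the $\varepsilon_i$ independent and $\sigma$ not depending on $x_i$.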
To see why this assumption makes us want to minimize the sum of the squares of residuals, suppose we have some dataset of predictors (x-values) and responders (y-values) and we want to find the “line of best fit.” The line of best fit is the line that best models the data, in the sense that the residuals of our data off of the line surprise us as little as possible.
We ask: under our model of how residuals are distributed, what is the probability of seeing a particular residual $r$? This is not a well-defined question: the probability that the residual is exactly $r$ is zero. But we can say that for really small values of $\epsilon$, the probability that the residual lies in an interval of length $\epsilon$ around $r$ is roughly equal to $\epsilon$ times the value of the normal distribution’s PDF at $r$ — that is, $\epsilon \cdot \frac{1}{\sigma\sqrt{2\pi}} e^{-r^2/2\sigma^2}$.
(From now on we’ll ignore the $\epsilon$ and think of the remaining expression $\frac{1}{\sigma\sqrt{2\pi}} e^{-r^2/2\sigma^2}$ as the “instantaneous likeliness” of seeing the residual value $r$. If this bothers you, feel free to do the calculation with epsilons included.)
This means that minimizing how surprised we are about our residuals amounts to choosing coefficients for our line that maximize the product over all data points of the expression above.3 That is, if we call the residuals $r_1, \dots, r_n$, we want to maximize $\prod_{i=1}^n \frac{1}{\sigma\sqrt{2\pi}} e^{-r_i^2/2\sigma^2}$.
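To see what maximizing this product buys us, take logarithms (maximizing a positive quantity is the same as maximizing its log):

$$\log \prod_{i=1}^n \frac{1}{\sigma\sqrt{2\pi}} e^{-r_i^2/2\sigma^2} = n \log \frac{1}{\sigma\sqrt{2\pi}} - \frac{1}{2\sigma^2} \sum_{i=1}^n r_i^2.$$

The first term doesn’t depend on the line’s coefficients, and the second is a negative constant times the sum of the squares of the residuals.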
So maximizing the product is equivalent to minimizing $\sum_{i=1}^n r_i^2$, i.e. the sum of the squares of the residuals. (Interestingly, this is the quantity that you want to minimize regardless of the particular value of $\sigma$.) There you have it — the theoretical justification for least-squares regression.
***
Often, though, you might have a reason to believe that your residuals are not normally distributed, or that the standard deviation of the residuals does depend on the value of the predictor. If you followed the math I did above, you should be able to figure out what quantity you want to minimize instead of the sum of the squares of the residuals, whatever your model for how residuals are distributed! I’ve included a few examples, with answers in the footnotes.
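In code, the recipe is the same for any residual model: write down the likelihood of the residuals and minimize its negative log numerically. Here is a minimal sketch of that idea in Python (the helper name fit_line and the made-up data are just for illustration; it assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.optimize import minimize

def fit_line(x, y, loss):
    """Fit y ~ a*x + b by minimizing sum(loss(residual)) over the data."""
    def objective(params):
        a, b = params
        return np.sum(loss(y - (a * x + b)))
    # Nelder-Mead handles non-smooth losses like |r| without derivatives.
    return minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead").x

# Made-up data for illustration
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)

a1, b1 = fit_line(x, y, lambda r: r**2)       # normal residuals: least squares
a2, b2 = fit_line(x, y, lambda r: np.abs(r))  # Laplace residuals: least absolute deviations
```

With a squared loss this recovers ordinary least squares; swapping in an absolute-value loss gives exactly the alternative from the statistics class.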
Example 1: Suppose you model your residuals as being distributed as $\frac{1}{2\sigma} e^{-|r|/\sigma}$ (the $\frac{1}{2\sigma}$ is there so the distribution integrates to $1$). What function of the residuals do you want to minimize?4
Example 2: Suppose instead your residuals are distributed as $c \cdot e^{-|r|^p/\sigma^p}$ for some fixed $p > 0$ (again, the $c$ is just a normalizing factor). What do you want to minimize?5
Example 3: Suppose your residuals are normally distributed, but the standard deviation of the residual depends on the value of $x$, the predictor. Specifically, assume that the standard deviation is $cx$ for some constant $c$. (I think this is pretty natural in some contexts, one of which I’ll talk about in the next post.) What do you want to minimize?6
Want to see these techniques applied in practice? Check out my post on the predictive power of general election polls, where I use a residual model like the one in Example 3!
1. I think it’s not unlikely that my teacher knew a better answer but decided not to derail the class to have this discussion.↩
2. You may recall these as the assumptions you need to make when doing a test for the significance of the slope of a line of best fit, if you’ve taken AP statistics.↩
3. This is where we use the assumption that the residuals are independent; otherwise we couldn’t represent the probability of seeing all residuals as the product of the probabilities of seeing each residual.↩
4. You want to maximize $\prod_{i=1}^n \frac{1}{2\sigma} e^{-|r_i|/\sigma}$, which amounts to minimizing $\sum_{i=1}^n |r_i|$ — precisely what I suggested as an alternative to minimizing $\sum_{i=1}^n r_i^2$ in the statistics class!↩
5. You want to maximize $\prod_{i=1}^n c \cdot e^{-|r_i|^p/\sigma^p}$ or equivalently, minimize $\sum_{i=1}^n |r_i|^p$.↩
6. You want to maximize $\prod_{i=1}^n \frac{1}{cx_i\sqrt{2\pi}} e^{-r_i^2/2c^2x_i^2}$, which amounts to minimizing $\sum_{i=1}^n \frac{r_i^2}{x_i^2}$. This is the same as doing a weighted least-squares regression, where the weight of the point $(x_i, y_i)$ is $\frac{1}{x_i^2}$.↩
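(As a concrete illustration of footnote 6: NumPy’s polyfit accepts per-point weights that it applies to the unsquared residuals, so passing $w_i = 1/x_i$ amounts to weighting each squared residual by $1/x_i^2$. A minimal sketch, with made-up data and assuming all predictors are positive:)

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 100)                # keep x > 0 so 1/x makes sense
y = 2 * x + 1 + rng.normal(scale=0.5 * x)  # residual sd grows linearly with x

# polyfit minimizes sum((w * (y - y_hat))**2), so w = 1/x gives
# sum((r / x)**2), the weighted least-squares criterion from footnote 6.
a, b = np.polyfit(x, y, deg=1, w=1 / x)
```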
Another interesting (but unrelated, I think) observation on the expression $\sum_{i=1}^n (x_i - x)^2$ for a data set $x_1, \dots, x_n$ is the following.

Suppose that we are given our data points $x_1, \dots, x_n$ and we want to pick the value of $x$ that minimizes $\sum_{i=1}^n (x_i - x)^2$. A dash of calculus (set the derivative $-2\sum_{i=1}^n (x_i - x)$ to zero) tells us that the correct value of $x$ to pick is the average of the data.
What would happen if we instead tried to find $x$ to minimize $\sum_{i=1}^n |x_i - x|$? (Take a moment to think about it – it’s a good problem!)
…
The minimizing value of x is now the *median* of the data! (Or if the median is midway between two data points, any value in between them.)
I’m not entirely certain what the correct interpretation of this is, but it at least convinces me that if you think the average is a reasonable statistic to look at, then the variance, i.e. $\frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2$ where $\bar{x}$ is the average, is reasonable too.
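(A quick numerical check of both claims, as a sketch in Python assuming NumPy; the brute-force grid search is just for illustration:)

```python
import numpy as np

data = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
grid = np.linspace(0, 20, 20001)  # candidate values of x

sq = ((data[:, None] - grid[None, :]) ** 2).sum(axis=0)  # sum of squares
ab = np.abs(data[:, None] - grid[None, :]).sum(axis=0)   # sum of absolute values

print(grid[sq.argmin()], data.mean())      # both ~6.2: the average
print(grid[ab.argmin()], np.median(data))  # both 4.0: the median
```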
That’s a cool connection; thanks for the comment! More broadly, I guess there’s a mapping from minimization criteria to summary statistics. There might be more interesting stuff to say about this.
(I edited your LaTeX formulas to make them render; I hope you don’t mind. For future reference, you need a space between the keyword “latex” and the next thing you type, even if it’s a backslash.)
Thanks for editing the formulas – that’s useful to know.
For completely unrelated reasons, I just stumbled into the fact that if you instead try to minimize $\sum_{i=1}^n f(x_i - x)$, where $f(t)$ is 0 when $t = 0$ and 1 when $t \neq 0$, then the minimizing value of $x$ is the mode.
Whoa, I *also* stumbled on that yesterday for a completely unrelated reason! Is this http://shlegeris.com/2014/12/26/mean-median-mode where you saw it, by any chance? (I met the person who writes that blog at the New York SSC meetup.)
Haha, yep precisely. I met him at the SSC meetup in Boston and stumbled onto his blog afterwards. This chain of coincidences is a little uncanny…
Yep, likewise except New York and not Boston 🙂