We use the bias-variance decomposition in machine learning as a backdrop to explore some probability-theoretic ways of thinking. We also make a comparison between the bias in Gauss-Markov and in bias-variance.
22 February 2025
All the computer scientists I've talked to have a very intuitive grasp of randomness, which comes through in the language they use. They'll say phrases like "sources of randomness" which reflects a level of comfort that I'd like to achieve. Let's explore this line of thinking.
Setup. Let's suppose the "true" relation between features $x$ and labels $y$ is given by the function $f$. The goal is to learn $f$, to which we only have restricted access through $y$. This is further complicated by the noise in the system (due to measurement error, random fluctuations, white noise, etc.). This leads us to model the relation between features and labels as a deterministic-random decomposition
$$y = f(x) + \epsilon,$$
where $\epsilon$ is assumed to have mean $0$ and variance $\sigma^2$. The mean-zero assumption is natural, and as far as I can tell, finite variance is a smallness assumption on the noise. Furthermore, we assume that $x$ and $\epsilon$ are independent, because why wouldn't they be?
Let $\hat{y}$ be whatever predictor we built in our effort to learn $f$. This requires training data, denoted by $D$, and different training data will lead to different predictors $\hat{y}$. This accounts for the first source of randomness, namely the variability in the training data. Furthermore, since the creation of $\hat{y}$ from $D$ is a deterministic process, all of the randomness in $\hat{y}$ is concentrated in the variability of $D$.
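To make this "sources of randomness" talk concrete, here is a minimal numerical sketch. Everything specific in it (the sine $f$, the noise level, the polynomial fit) is an illustrative choice of mine, not part of the setup above.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)    # the "true" relation (unknown to us in practice)
sigma = 0.3                            # noise standard deviation

def draw_training_set(n):
    """One draw of D: features x and noisy labels y = f(x) + eps."""
    x = rng.uniform(0, 1, size=n)
    eps = rng.normal(0, sigma, size=n)  # mean 0, variance sigma^2, independent of x
    return x, f(x) + eps

def fit_predictor(x, y, degree=3):
    """A deterministic training procedure: least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return lambda t: np.polyval(coeffs, t)

# Different training sets D lead to different predictors y_hat:
x_test = 0.25
predictions = []
for _ in range(5):
    x_tr, y_tr = draw_training_set(n=30)
    y_hat = fit_predictor(x_tr, y_tr)
    predictions.append(y_hat(x_test))
print(predictions)  # five different values: all the randomness in y_hat came from D
```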
Statistics. In order to measure how badly our model $\hat{y}$ predicts labels $y$, we introduce a loss function $L(x) = L(y(x), \hat{y}(x))$, where $x$ is a fixed but arbitrary test point. Of course, testing over individual points is too much information, so we average over test points $x$ to define the test error statistic
$$\mathrm{Err}_D := \mathbb{E}_x[L \mid D].$$
This accounts for the second source of randomness, namely averaging over test data. $\mathrm{Err}_D$ is a measurement, with respect to a chosen $L$, of the performance of the estimator $\hat{y}$. We emphasize again that $\hat{y}$ is fixed since $D$ is, and the test points $x$ are averaged over. This makes it a dead-end approach. To see why, assume that $x$ is drawn from some distribution with a density $p$. Then, the test error is given by the formula
$$\mathrm{Err}_D = \int_{x \in \mathbb{R}^n} L(x)\, p(x)\, dx.$$
In order to compute $\mathrm{Err}_D$, we would need complete knowledge of the density function $p$. Equivalently, this means we need to know the distribution that $x$ was sampled from. However, since we can only draw finitely many samples (via a training set $D$), we can only estimate $p$.
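In practice one sidesteps the unknown density by replacing the integral with a Monte Carlo average over a held-out sample drawn from the same distribution. A sketch, with the same made-up toy ingredients as before and squared loss assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x)
sigma = 0.3

# One fixed training set D, hence one fixed predictor y_hat.
x_tr = rng.uniform(0, 1, size=30)
y_tr = f(x_tr) + rng.normal(0, sigma, size=30)
coeffs = np.polyfit(x_tr, y_tr, 3)
y_hat = lambda t: np.polyval(coeffs, t)

# Monte Carlo estimate of Err_D = E_x[L | D], i.e. of the integral of L(x) p(x) dx,
# using fresh test points drawn from p (here: uniform on [0, 1]).
x_te = rng.uniform(0, 1, size=100_000)
y_te = f(x_te) + rng.normal(0, sigma, size=100_000)
err_D = np.mean((y_te - y_hat(x_te)) ** 2)   # squared loss, averaged over test points
print(err_D)
```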
To circumvent this problem, we instead compute the expected generalization error by averaging the test error over all training data $D$,
$$\mathrm{Err} := \mathbb{E}_D[\mathrm{Err}_D] = \mathbb{E}_{x,D}[L],$$
where we take a joint expectation over both the test points $x$ and the training data $D$ used to build our model $\hat{y}$. The insight is to use the tower law / Fubini's theorem to flip the order of the iterated expectations. This allows us to write the expected generalization error as
$$\mathrm{Err} = \mathbb{E}_x\big[\mathbb{E}_D[L \mid x]\big],$$
where the inner expectation averages over training data $D$ and the outer expectation averages over test points $x$.
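The flip is easy to see numerically: tabulate the loss over a grid of (training set, test point) pairs, and averaging over rows first or over columns first gives the same grand mean. Again a sketch with made-up toy ingredients:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sin(2 * np.pi * x)
sigma = 0.3
n_datasets, n_test = 200, 500

x_te = rng.uniform(0, 1, size=n_test)                  # shared test points
y_te = f(x_te) + rng.normal(0, sigma, size=n_test)     # shared noisy test labels

losses = np.empty((n_datasets, n_test))                # squared loss L indexed by (D, x)
for i in range(n_datasets):
    x_tr = rng.uniform(0, 1, size=30)
    y_tr = f(x_tr) + rng.normal(0, sigma, size=30)
    coeffs = np.polyfit(x_tr, y_tr, 3)
    losses[i] = (y_te - np.polyval(coeffs, x_te)) ** 2

err_via_D_then_x = losses.mean(axis=1).mean()   # E_D[ E_x[L | D] ]
err_via_x_then_D = losses.mean(axis=0).mean()   # E_x[ E_D[L | x] ]
print(err_via_D_then_x, err_via_x_then_D)       # the same number, up to floating point
```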
Bias-variance. The goal of this section is to calculate the inner expectation $\mathrm{Err}(x) := \mathbb{E}_D[L \mid x]$ when $L$ is the $L^2$ loss (how else would variance enter?). For notation, we denote the conditional expectation with subscripts, writing $\mathbb{E}_D[\,\cdot \mid x]$ for the average over training data $D$ with the test point $x$ held fixed. We compute

$$\begin{aligned}
\mathrm{Err}(x) &= \mathbb{E}_D\big[(y - \hat{y})^2 \mid x\big] \\
&= \mathbb{E}_D\big[(f(x) + \epsilon - \hat{y})^2 \mid x\big] \\
&= \mathbb{E}_D\big[(f(x) - \hat{y})^2 \mid x\big] + 2\,\mathbb{E}_D\big[\epsilon\,(f(x) - \hat{y}) \mid x\big] + \mathbb{E}_D\big[\epsilon^2 \mid x\big] \\
&= \mathbb{E}_D\big[(f(x) - \hat{y})^2 \mid x\big] + 2\,\mathbb{E}_D[\epsilon \mid x]\,\mathbb{E}_D\big[f(x) - \hat{y} \mid x\big] + \mathbb{E}_D\big[\epsilon^2 \mid x\big].
\end{aligned}$$
The last line requires some justification. Recall that $x$ and $\epsilon$ are assumed to be independent. First, since $f$ is deterministic in $x$, it follows that $f$ and $\epsilon$ are independent. The second point is a bit more subtle. Recall that the sources of randomness in $\hat{y}$ are the variability in the training data $D$ and the test data $x$. Once we condition on $x$, the only remaining source of randomness is $D$. In particular, none of the randomness in $\hat{y}$ comes from the noise $\epsilon$ in the test data. Therefore, $\epsilon$ and $\hat{y}$ are also independent, and the last line follows. We continue:

$$\begin{aligned}
\mathrm{Err}(x) &= \mathbb{E}_D\big[(f(x) - \hat{y})^2 \mid x\big] + 2\,\mathbb{E}_D[\epsilon \mid x]\,\mathbb{E}_D\big[f(x) - \hat{y} \mid x\big] + \mathbb{E}_D\big[\epsilon^2 \mid x\big] \\
&= \mathbb{E}_D\big[(f(x) - \hat{y})^2 \mid x\big] + 2\,\mathbb{E}[\epsilon]\,\mathbb{E}_D\big[f(x) - \hat{y} \mid x\big] + \mathbb{E}[\epsilon^2] \\
&= \mathbb{E}_D\big[(f(x) - \hat{y})^2 \mid x\big] + \sigma^2 \\
&= \big(f(x) - \mathbb{E}_D[\hat{y} \mid x]\big)^2 + \mathrm{Var}_D(\hat{y} \mid x) + \sigma^2.
\end{aligned}$$
In the second line, we use the fact that $x$ and $\epsilon$ are independent to get rid of the conditioning on $x$; in the third, we use $\mathbb{E}[\epsilon] = 0$ and $\mathbb{E}[\epsilon^2] = \sigma^2$. In the last line, we use the fact that $f$ is deterministic, so conditioning is just evaluation, which lets us expand the square around $\mathbb{E}_D[\hat{y} \mid x]$. Therefore,

$$\mathrm{Err}(x) = \mathrm{Bias}(x)^2 + \mathrm{Var}(x) + \sigma^2, \qquad \text{where } \mathrm{Bias}(x) := f(x) - \mathbb{E}_D[\hat{y} \mid x], \quad \mathrm{Var}(x) := \mathrm{Var}_D(\hat{y} \mid x).$$
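As a sanity check, here is a rough simulation of the decomposition at a single test point, averaging over many training sets. The true $f$, the noise level, and the model class are again illustrative assumptions, not anything forced by the derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sin(2 * np.pi * x)
sigma = 0.3
x0 = 0.25                                    # the fixed test point

preds, sq_errors = [], []
for _ in range(10_000):
    x_tr = rng.uniform(0, 1, size=30)
    y_tr = f(x_tr) + rng.normal(0, sigma, size=30)
    coeffs = np.polyfit(x_tr, y_tr, 3)       # one predictor per training set D
    y_hat_x0 = np.polyval(coeffs, x0)
    y0 = f(x0) + rng.normal(0, sigma)        # fresh noisy label at x0
    preds.append(y_hat_x0)
    sq_errors.append((y0 - y_hat_x0) ** 2)

preds = np.array(preds)
bias_sq = (f(x0) - preds.mean()) ** 2        # Bias(x0)^2
variance = preds.var()                       # Var_D(y_hat | x0)
print(np.mean(sq_errors), bias_sq + variance + sigma**2)   # approximately equal
```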
Gauss-Markov. Gauss-Markov tells us that OLS is unbiased and variance-minimizing among all linear unbiased estimators. At a glance, this seems to contradict most of the pictures drawn when discussing bias-variance. Usually, the picture suggests that the data follows, say, a quadratic $f$; linear models are said to underfit with high bias and low variance, while high-degree polynomials are said to overfit with low bias and high variance. So why can Gauss-Markov claim that OLS, a linear function, is unbiased? Are we overloading the word bias?
First, realize that built into Gauss-Markov are stronger assumptions on the errors, but more importantly, the underlying assumption that the true function $f$ is linear in the features. This restricts your search space to affine functions from the get-go. In other words, we assume that
$$y = X\beta + \epsilon,$$
for some column vector $\beta$ and design matrix $X$. This resolves the apparent contradiction that arose with the pictures. Now for a discussion of the two senses of "bias". In classical OLS, we solve for the estimator $\hat{\beta}$ as
$$\hat{\beta} = (X^T X)^{-1} X^T y,$$
which is calculated in this blog post. The claim that "OLS is unbiased" is classically understood in terms of parameter estimation, namely $\mathbb{E}[\hat{\beta} \mid X] = \beta$. Note that the design matrix (training data) is seen as a deterministic given. The proof of this comes from expanding out $\hat{\beta}$ as

$$\hat{\beta} = (X^T X)^{-1} X^T y = (X^T X)^{-1} X^T (X\beta + \epsilon) = \beta + (X^T X)^{-1} X^T \epsilon,$$

so that

$$\mathbb{E}[\hat{\beta} \mid X] = \beta + (X^T X)^{-1} X^T\, \mathbb{E}[\epsilon \mid X] = \beta,$$

using the (stronger) assumption on the errors that $\mathbb{E}[\epsilon \mid X] = 0$.
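A quick numerical illustration of the classical statement, with the design matrix held fixed and only the errors redrawn; the particular $\beta$, dimensions, and noise level are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 3
beta = np.array([1.0, -2.0, 0.5])           # the "true" parameter
X = rng.normal(size=(n, p))                 # fixed design matrix (a deterministic given)
sigma = 0.3

beta_hats = []
for _ in range(20_000):
    eps = rng.normal(0, sigma, size=n)      # only the errors are redrawn
    y = X @ beta + eps
    beta_hats.append(np.linalg.solve(X.T @ X, X.T @ y))   # (X^T X)^{-1} X^T y

print(np.mean(beta_hats, axis=0))           # approximately equal to beta
```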
In machine learning, however, we're more interested in predictions. So let's instead view OLS as a prediction model. In this case, the bias of OLS is now understood as
$$\mathrm{Bias}(x) = f(x) - \mathbb{E}_X[\hat{y} \mid x] = x^T\beta - \mathbb{E}_X[x^T\hat{\beta} \mid x],$$
where we now take the expectation over design matrices $X$, and $x$ is a fixed test point with prediction $x^T\hat{\beta}$. Note that the design matrix is now random. To show OLS is unbiased in this sense, it suffices to know that $\mathbb{E}_X[\hat{\beta} \mid x] = \beta$. Assuming this, we compute

$$\mathrm{Bias}(x) = x^T\beta - \mathbb{E}_X[x^T\hat{\beta} \mid x] = x^T\beta - x^T\,\mathbb{E}_X[\hat{\beta} \mid x] = x^T\beta - x^T\beta = 0.$$

The fact itself follows from the same expansion of $\hat{\beta}$ as before, plus one extra step: the conditioning on $x$ can be dropped, because $\epsilon$ is randomness from the test data, which is independent from the randomness that comes from the training data $X$. Comparing the two computations, it's interesting to note that the two senses of unbiased basically follow the same proof, up until the last line.
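And a matching illustration of the prediction-sense statement, where the design matrix is redrawn each time and only the test point is held fixed (same made-up $\beta$ and dimensions as above):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 3
beta = np.array([1.0, -2.0, 0.5])
sigma = 0.3
x0 = rng.normal(size=p)                     # a fixed test point

preds = []
for _ in range(20_000):
    X = rng.normal(size=(n, p))             # the design matrix is now random
    eps = rng.normal(0, sigma, size=n)
    y = X @ beta + eps
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    preds.append(x0 @ beta_hat)             # the prediction x^T beta_hat at x0

print(np.mean(preds), x0 @ beta)            # E_X[x^T beta_hat | x] is close to x^T beta, so Bias(x0) is close to 0
```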