Computing with (co)variance
In the confounder simulation, we simulated a variable $B$ with variance $1$, and then made another variable $A$ depending on $B$ like this:
$$A = 0.5 \cdot B + \text{noise}$$
or in R:
```r
A = 0.5 * B + rnorm( N, mean = 0, sd = something )
```
In our study we wanted $A$ to have variance $1$ as well, and the question was: what standard deviation should we put there to make this work?
It turns out that, if $B$ has variance $1$ and we want $A$ to have variance $1$, then adding something with variance exactly $0.75$ is the right thing here. Why is that?
The calculation is not very hard - it just depends on two properties of the variance, which this page explains.
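Before working through the maths, we can check this claim empirically. (The seed and sample size below are arbitrary choices for illustration.)

```r
# Simulate B with variance 1, then build A by adding noise
# with variance 0.75 to 0.5 * B.
set.seed( 1 )
N = 1000000
B = rnorm( N, mean = 0, sd = 1 )
A = 0.5 * B + rnorm( N, mean = 0, sd = sqrt( 0.75 ))
var( A )   # close to 1
```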
Properties of the variance
Variance turns out to have two key properties which make this type of calculation easy. The properties are:
- Variance property 1: the variance of a multiple of a variable scales like the square of the multiple:
$$\text{var}( a \cdot X ) = a^2 \cdot \text{var}( X ) \quad\quad (1)$$
- Variance property 2: the variance of a sum of two variables $X$ and $Y$, if $X$ and $Y$ are independent, is just the sum of their variances:
$$\text{var}( X + Y ) = \text{var}( X ) + \text{var}( Y ) \quad\quad (2)$$
These rules are just what we need for the calculation above: the first lets us work out how much variance the $0.5 \cdot B$ term contributes to $A$, and the second lets us work out how much extra variance the noise term needs to contribute.
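Both properties can be seen directly on simulated data. (The first property in fact holds exactly for sample variances; the second holds only approximately, since two simulated variables are never perfectly uncorrelated.)

```r
set.seed( 2 )
N = 1000000
X = rnorm( N, mean = 0, sd = 1 )
Y = rnorm( N, mean = 0, sd = 2 )
# Property 1: var( a * X ) = a^2 * var( X ) -- exact for sample variances
var( 0.5 * X ) - 0.25 * var( X )
# Property 2: for independent X and Y, var( X + Y ) is close to var( X ) + var( Y )
var( X + Y )
var( X ) + var( Y )
```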
Suppose we instead added only 0.25 of B to A:
```r
A = 0.25 * B + rnorm( N, mean = 0, sd = something )
```
What 'something' do we need here?
- Your solution
- Hint 1
- Hint 2
Use the above properties to work this out on a piece of paper. (Or use the tabs to see some hints.)
According to the first property, the variance of $0.25 \cdot B$ is
$$\text{var}( 0.25 \cdot B ) = 0.25^2 \cdot \text{var}( B )$$
and the variance of $B$ is $1$, so this is just $0.0625$.
So what's the variance of the independent 'noise' you need to add on?
According to the first property, the variance of $0.25 \cdot B$ is
$$\text{var}( 0.25 \cdot B ) = 0.25^2 \cdot \text{var}( B )$$
and the variance of $B$ is $1$, so this is just $0.0625$.
According to the second property, we need to add something with variance exactly $1 - 0.0625 = 0.9375$ to make the total variance up to $1$. (That is, something with standard deviation $\sqrt{0.9375} \approx 0.968$.)
Properties of the covariance
That 'square-the-multiple' behaviour always seems a bit complicated to me. I actually find these rules easiest to remember in this way:
- The covariance between two variables is a bilinear function.
That is - it behaves like a linear (straight-line!) function of each of its two arguments.
In other words, it is linear in the first term:
$$\text{cov}( a \cdot X + b \cdot Y, Z ) = a \cdot \text{cov}( X, Z ) + b \cdot \text{cov}( Y, Z )$$
and it's also linear in the second term:
$$\text{cov}( X, a \cdot Y + b \cdot Z ) = a \cdot \text{cov}( X, Y ) + b \cdot \text{cov}( X, Z )$$
It's also symmetric:
$$\text{cov}( X, Y ) = \text{cov}( Y, X )$$
Covariance is a measure of the co-linearity of two variables (around their means). It gets bigger the larger the variables are, and bigger the more they tend to take the same values (after subtracting their means). What's more, the variance of a variable $X$ is just the covariance of $X$ with itself:
$$\text{var}( X ) = \text{cov}( X, X )$$
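These properties can be checked directly in R, because the sample covariance is itself bilinear and symmetric (the variables below are just made-up examples):

```r
set.seed( 4 )
N = 1000
X = rnorm( N )
Y = 0.5 * X + rnorm( N )    # built to be correlated with X
# Linearity in the first term: cov( 2 * X, Y ) equals 2 * cov( X, Y )
cov( 2 * X, Y ) - 2 * cov( X, Y )
# Symmetry: cov( X, Y ) equals cov( Y, X )
cov( X, Y ) - cov( Y, X )
# Variance is covariance of a variable with itself
var( X ) - cov( X, X )
```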
The two rules for variance given above boil down to applying the bilinearity property to the variance, as in:
$$\text{var}( a \cdot X ) = \text{cov}( a \cdot X, a \cdot X ) = a^2 \cdot \text{var}( X )$$
which is the first property, and
$$\text{var}( X + Y ) = \text{cov}( X + Y, X + Y ) = \text{var}( X ) + \text{var}( Y ) + 2 \cdot \text{cov}( X, Y ) \quad\quad (3)$$
which is a more general form of the second property. (If $X$ and $Y$ are independent, their covariance is zero, so the last term vanishes and this is the same as the one above in that case.)
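Formula (3) in fact holds exactly for sample variances and covariances, which makes it easy to check in R (again with made-up variables):

```r
set.seed( 5 )
N = 1000
X = rnorm( N )
Y = 0.5 * X + rnorm( N )    # deliberately correlated with X
# Both sides of formula (3) give the same value:
var( X + Y )
var( X ) + var( Y ) + 2 * cov( X, Y )
```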
The last formula lets us work out more complex scenarios. For example, suppose again that
$$A = 0.5 \cdot B + \text{noise}$$
with $\text{var}( A ) = \text{var}( B ) = 1$, and suppose we then simulated a third variable $C$ as
$$C = 0.5 \cdot A + 0.5 \cdot B + \text{noise}$$
...and we again wanted $C$ to have variance $1$. How much noise variance do we need? The calculation is easy using formula (3):
$$\text{var}( C ) = 0.5^2 \cdot \text{var}( A ) + 0.5^2 \cdot \text{var}( B ) + 2 \cdot 0.5 \cdot 0.5 \cdot \text{cov}( A, B ) + \text{var}( \text{noise} )$$
In our computation $A$ and $B$ both had variance $1$, while we had
$$\text{cov}( A, B ) = \text{cov}( 0.5 \cdot B + \text{noise}, B ) = 0.5 \cdot \text{var}( B ) = 0.5$$
because of how $A$ was simulated. So this boils down to
$$\text{var}( C ) = 0.25 + 0.25 + 0.25 + \text{var}( \text{noise} ) = 0.75 + \text{var}( \text{noise} )$$
In other words, we need to add noise with variance $0.25$ (standard deviation $0.5$) to make $C$ have variance $1$.
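As a final check, here is a sketch of the whole simulation, assuming the third variable is built as $C = 0.5 \cdot A + 0.5 \cdot B + \text{noise}$ (seed and sample size are arbitrary):

```r
set.seed( 6 )
N = 1000000
B = rnorm( N, mean = 0, sd = 1 )
A = 0.5 * B + rnorm( N, mean = 0, sd = sqrt( 0.75 ))
C = 0.5 * A + 0.5 * B + rnorm( N, mean = 0, sd = 0.5 )
var( C )   # close to 1
```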