Computing with (co)variance
In the confounder simulation, we simulated a variable $B$ with variance $1$, and then made another variable $A$ depending on $B$ like this:

$$A = 0.5 \cdot B + \text{noise}$$
or in R:

```r
A = 0.5 * B + rnorm( N, mean = 0, sd = something )
```
It turns out that, if $B$ has variance $1$, and we want $A$ to have variance $1$, then adding something with variance exactly $0.75$ is the right thing here. Why is that?
The calculation is not very hard - it depends on the properties of the variance. This page explains it.
Properties of the variance
It turns out that the variance has two key properties which make this type of calculation easy. The properties are:
- Variance property 1: The variance of a multiple of any variable scales like the multiple squared:

$$\text{var}(a \cdot X) = a^2 \cdot \text{var}(X)$$

and

- Variance property 2: The variance of two independent things added, $X + Y$, is just the sum of their variances:

$$\text{var}(X + Y) = \text{var}(X) + \text{var}(Y) \quad \text{(for independent } X \text{ and } Y \text{)}$$
These rules are just what we need to do the calculation above, since the first lets us figure out how much variance the $0.5 \cdot B$ term contributes, and the second lets us work out how much more we need to add.
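Both properties are also easy to check by simulation. The following sketch is illustrative only - the sample size `N` and the example variables `X`, `Y`, and multiplier `a` are made up for this check, not taken from the simulation above:

```r
# Illustrative check of the two variance properties by simulation.
N = 100000
X = rnorm( N, mean = 0, sd = 1 )   # variance 1
Y = rnorm( N, mean = 0, sd = 2 )   # variance 4, independent of X

# Property 1: var( a * X ) = a^2 * var( X )
a = 3
var( a * X )       # close to a^2 * var( X ), i.e. about 9
a^2 * var( X )

# Property 2: var( X + Y ) = var( X ) + var( Y ) for independent X and Y
var( X + Y )       # close to 1 + 4 = 5
var( X ) + var( Y )
```

With `N` this large the simulated variances should agree with the formulas to a couple of decimal places.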
Use these two properties to work out, on a piece of paper, how much the variance of the noise should be to make $A$ have variance $1$ again. (Or read the hints below.)
Hint 1: According to the first property, the variance of $0.5 \cdot B$ is

$$\text{var}(0.5 \cdot B) = 0.5^2 \cdot \text{var}(B) = 0.25 \cdot \text{var}(B)$$

and the variance of $B$ is $1$, so this is just $0.25$. So what's the variance of the independent 'noise' you need to add on?
Hint 2: According to the first property, the variance of $0.5 \cdot B$ is

$$\text{var}(0.5 \cdot B) = 0.25 \cdot \text{var}(B)$$

and the variance of $B$ is $1$, so this is just $0.25$. According to the second property, we need to add something with variance exactly $0.75$ to make the total variance up to $1$. (That is, something with standard deviation of $\sqrt{0.75} \approx 0.866$.)
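We can check this answer empirically by plugging `sqrt( 0.75 )` in for `something` (a sketch; the sample size `N` is assumed for illustration):

```r
# Simulate B with variance 1, then A with noise variance 0.75.
N = 100000
B = rnorm( N, mean = 0, sd = 1 )
A = 0.5 * B + rnorm( N, mean = 0, sd = sqrt( 0.75 ) )
var( A )   # should come out close to 1
```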
Properties of the covariance
That square-the-multiplier behaviour always seems a bit complicated to me. I actually find these rules easiest to remember in this way:
- The covariance between two variables is a linear function (behaves like a straight-line function!) of each of its variables.

In maths speak we say it is 'bilinear'. It is linear in the first term:

$$\text{cov}(a \cdot X + b \cdot Y, Z) = a \cdot \text{cov}(X, Z) + b \cdot \text{cov}(Y, Z)$$

and it's also linear in the second term:

$$\text{cov}(X, a \cdot Y + b \cdot Z) = a \cdot \text{cov}(X, Y) + b \cdot \text{cov}(X, Z)$$

It's also symmetric:

$$\text{cov}(X, Y) = \text{cov}(Y, X)$$
Covariance is a measure of the linear co-variation of two variables (around their means). It gets bigger the larger the variables are, and bigger the more they tend to take the same values (after subtracting their means). What's more, the variance of a variable $X$ is just the covariance of $X$ with itself:

$$\text{var}(X) = \text{cov}(X, X)$$
The rules above boil down to applying the bilinearity property to the variance, as in:

$$\text{var}(a \cdot X) = \text{cov}(a \cdot X, a \cdot X) = a^2 \cdot \text{cov}(X, X) = a^2 \cdot \text{var}(X)$$

which is the first property, and

$$\text{var}(X + Y) = \text{cov}(X + Y, X + Y) = \text{var}(X) + \text{var}(Y) + 2 \cdot \text{cov}(X, Y) \tag{3}$$

which is a more general form of the second property. (If $X$ and $Y$ are independent, their covariance is zero, so the last term vanishes and this is the same as the one above in that case.)
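The general sum formula can itself be checked by simulation with two deliberately correlated variables. This sketch reuses the $A$ and $B$ construction from earlier on this page; `N` is an assumed sample size:

```r
# Check var( A + B ) = var( A ) + var( B ) + 2 * cov( A, B )
# using two correlated variables.
N = 100000
B = rnorm( N, mean = 0, sd = 1 )
A = 0.5 * B + rnorm( N, mean = 0, sd = sqrt( 0.75 ) )   # correlated with B

var( A + B )                              # left-hand side
var( A ) + var( B ) + 2 * cov( A, B )     # right-hand side - should agree
```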
The last formula lets us work out more complex scenarios. For example, suppose we simulated a third variable $C$ depending on both $A$ and $B$, with coefficients $a$ and $b$:

$$C = a \cdot A + b \cdot B + \text{noise}$$

...and we again wanted $C$ to have variance $1$. How much noise variance do we need? The calculation is easy using formula (3):

$$\text{var}(C) = a^2 \cdot \text{var}(A) + b^2 \cdot \text{var}(B) + 2ab \cdot \text{cov}(A, B) + \text{var}(\text{noise})$$

In our computation $A$ and $B$ both had variance $1$, while we had

$$\text{cov}(A, B) = \text{cov}(0.5 \cdot B + \text{noise}, B) = 0.5 \cdot \text{var}(B) = 0.5$$

because of how $A$ was simulated. So this boils down to

$$\text{var}(C) = a^2 + b^2 + ab + \text{var}(\text{noise})$$

In other words, we need to add noise with variance $1 - (a^2 + b^2 + ab)$ to make $C$ have variance $1$.
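To make this concrete, here is a simulated check. The particular coefficients $a = b = 0.5$ (and the sample size `N`) are illustrative assumptions for this example - the formula works for any choice that leaves the noise variance positive:

```r
N = 100000
a = 0.5; b = 0.5   # illustrative coefficients (assumed for this example)
B = rnorm( N, mean = 0, sd = 1 )
A = 0.5 * B + rnorm( N, mean = 0, sd = sqrt( 0.75 ) )

# Noise variance from the formula: 1 - (a^2 + b^2 + a*b)
noise.variance = 1 - ( a^2 + b^2 + a*b )
C = a * A + b * B + rnorm( N, mean = 0, sd = sqrt( noise.variance ) )
var( C )   # should come out close to 1
```

With these coefficients the formula gives a noise variance of $1 - 0.75 = 0.25$.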