Autocovariance

Overview
In statistics, given a stochastic process X(t), the autocovariance is the covariance of the signal with a time-shifted version of itself. If each state of the series has mean E[X_t] = μ_t, then the autocovariance is given by


 * $$\, K_\mathrm{XX} (t,s) = E[(X_t - \mu_t)(X_s - \mu_s)] = E[X_t\cdot X_s]-\mu_t\cdot\mu_s.\,$$

where E is the expectation operator.
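As an illustrative sketch (not part of the standard definition), the two equivalent forms above can be checked numerically by simulating an ensemble of realisations of a simple, hypothetical process and estimating both E[(X_t − μ_t)(X_s − μ_s)] and E[X_t·X_s] − μ_t·μ_s:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example process: X_t = Z_0 + Z_t, where the Z's are
# independent standard normals, so Cov(X_t, X_s) = Var(Z_0) = 1 for t != s.
n_real, n_time = 100_000, 5
Z = rng.standard_normal((n_real, n_time + 1))
X = Z[:, [0]] + Z[:, 1:]                  # shape (n_real, n_time)

t, s = 1, 3
mu_t, mu_s = X[:, t].mean(), X[:, s].mean()

# K_XX(t, s) = E[(X_t - mu_t)(X_s - mu_s)]  (centred form)
K_centred = ((X[:, t] - mu_t) * (X[:, s] - mu_s)).mean()
# K_XX(t, s) = E[X_t * X_s] - mu_t * mu_s  (moment form)
K_moment = (X[:, t] * X[:, s]).mean() - mu_t * mu_s

print(K_centred, K_moment)  # both close to 1.0 (the shared Z_0 term)
```

The two estimates agree exactly (up to floating-point error), since with the sample means plugged in the two expressions are algebraically identical.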

Stationarity
If X(t) is wide-sense stationary, then the following conditions hold:


 * $$\mu_t = \mu_s = \mu \,$$ for all t, s

and


 * $$K_\mathrm{XX}(t,s) = K_\mathrm{XX}(s-t) = K_\mathrm{XX}(\tau) \, $$

where


 * $$\tau = s - t \,$$

is the lag time, or the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes


 * $$\, K_\mathrm{XX} (\tau) = E \{ (X(t) - \mu)(X(t+\tau) - \mu) \}   $$


 * $$ = E \{ X(t)\cdot X(t+\tau) \} -\mu^2,\,$$


 * $$ = R_\mathrm{XX}(\tau) - \mu^2,\,$$

where R_XX(τ) denotes the autocorrelation.
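The identity K_XX(τ) = R_XX(τ) − μ² can be illustrated on a sample from a stationary series. As a hedged sketch, the process and its parameters below (an AR(1) recursion with coefficient 0.8 and a constant mean of 2.0) are hypothetical choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stationary series: x[n] = 0.8 * x[n-1] + noise, shifted
# to have mean 2.0, so subtracting mu^2 matters for the autocovariance.
n = 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for i in range(1, n):
    x[i] = 0.8 * x[i - 1] + eps[i]
x += 2.0

def autocov(x, tau):
    """Sample estimate of K_XX(tau) = R_XX(tau) - mu^2."""
    mu = x.mean()
    R = np.mean(x[: len(x) - tau] * x[tau:])  # sample autocorrelation R_XX(tau)
    return R - mu * mu

# For this AR(1) process, theory gives K(tau) = 0.8**tau / (1 - 0.8**2).
print([round(autocov(x, tau), 2) for tau in (0, 1, 2)])
```

Note that K(0) recovers the variance σ² of the series, while the mean offset of 2.0 drops out entirely, as the derivation above predicts.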

Normalization
When normalised by dividing by the variance σ², the autocovariance becomes the autocorrelation coefficient ρ. That is,


 * $$ \rho_\mathrm{XX}(\tau) = \frac{ K_\mathrm{XX}(\tau)}{\sigma^2}.\,$$

Note, however, that some disciplines use the terms autocovariance and autocorrelation interchangeably.

The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalising by the variance puts this value into the range [−1, 1].
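As a sketch of this interpretation (the noisy sinusoid and its period are hypothetical choices for illustration), a periodic signal correlates strongly with itself when shifted by a full period, and anti-correlates at a half-period shift, with ρ staying inside [−1, 1]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical example: a noisy sinusoid, which resembles shifted copies
# of itself at lags near multiples of its period.
n, period = 50_000, 100
t = np.arange(n)
x = np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(n)

def autocorr_coeff(x, tau):
    """rho_XX(tau) = K_XX(tau) / sigma^2, estimated from a sample."""
    xc = x - x.mean()
    K = np.mean(xc[: len(xc) - tau] * xc[tau:])
    return K / xc.var()

print(autocorr_coeff(x, 0))            # exactly 1 by construction
print(autocorr_coeff(x, period))       # near +1: shifted by one full period
print(autocorr_coeff(x, period // 2))  # near -1: shifted by half a period
```

At lag 0 the numerator is the variance itself, so ρ(0) = 1; the added noise keeps |ρ| slightly below 1 at all other lags.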