Multivariate normal distribution



In probability theory and statistics, a multivariate normal distribution, sometimes also called a multivariate Gaussian distribution, is a generalization of the one-dimensional normal distribution (also called a Gaussian distribution) to higher dimensions. It is also closely related to the matrix normal distribution.

General case
A random vector $$\ X = [X_1, \dots, X_N]^T$$ follows a multivariate normal distribution if it satisfies the following equivalent conditions:


 * every linear combination $$\ Y = a_1 X_1 + \cdots + a_N X_N$$ is normally distributed


 * there is a random vector $$\ Z = [Z_1, \dots, Z_M]^T$$, whose components are independent standard normal random variables, a vector $$\ \mu = [\mu_1, \dots, \mu_N]^T$$ and an $$N \times M$$ matrix $$\ A$$ such that $$\ X = A Z + \mu$$.


 * there is a vector $$\mu$$ and a symmetric, positive semi-definite matrix $$\ \Sigma$$ such that the characteristic function of X is



$$ \phi_X\left(u;\mu,\Sigma\right) = \exp \left( i \mu^\top u - \frac{1}{2} u^\top \Sigma u \right). $$

If $$\ \Sigma$$ is non-singular, then the distribution may be described by the following PDF:



$$ f_X(x_1, \dots, x_N) = \frac {1} {(2\pi)^{N/2}|\Sigma|^{1/2}} \exp \left( -\frac{1}{2} ( x - \mu)^\top \Sigma^{-1} (x - \mu) \right) $$

where $$\ \left| \Sigma \right|$$ is the determinant of $$\ \Sigma$$. Note how the equation above reduces to that of the univariate normal distribution when $$N = 1$$, in which case $$\ \Sigma$$ is a scalar (the variance).

The vector μ in these conditions is the expected value of X and the matrix $$\ \Sigma = A A^T$$ is the covariance matrix of the components Xi.

It is important to realize that the covariance matrix must be allowed to be singular (a case not described by the above formula, for which $$\ \Sigma^{-1}$$ is undefined). That case arises frequently in statistics; for example, in the distribution of the vector of residuals in ordinary linear regression problems. Note also that the $$X_i$$ are in general not independent; they can be seen as the result of applying the linear transformation $$A$$ to a collection of independent Gaussian variables $$Z$$.
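As a concrete illustration, here is a minimal sketch (assuming NumPy and SciPy are available; the mean and covariance are arbitrary example values) that evaluates the density formula above directly and checks it against scipy.stats.multivariate_normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_pdf(x, mu, Sigma):
    """Evaluate the multivariate normal density at x via the formula above."""
    N = len(mu)
    diff = x - mu
    norm_const = (2 * np.pi) ** (-N / 2) * np.linalg.det(Sigma) ** (-0.5)
    return norm_const * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

mu = np.array([1.0, -2.0])                      # example mean (arbitrary)
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])      # example covariance (arbitrary)
x = np.array([0.5, -1.5])

print(mvn_pdf(x, mu, Sigma))                    # direct transcription of the formula
print(multivariate_normal(mu, Sigma).pdf(x))    # same value from SciPy
```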

That the distribution of a random vector X is a multivariate normal distribution can be written in the following notation:


 * $$X\ \sim \mathcal{N}(\mu, \Sigma),$$

or to make it explicitly known that X is N-dimensional,


 * $$X\ \sim \mathcal{N}_N(\mu, \Sigma).$$

Cumulative distribution function
The cumulative distribution function (cdf) $$F(x)$$ is defined as the probability that all values in the random vector $$X$$ are less than or equal to the corresponding values in the vector $$x$$. Though there is no closed form for $$F(x)$$, there are a number of algorithms that estimate it numerically; one example is the MVNDST routine, for which both FORTRAN and MATLAB implementations are available.
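In the absence of a closed form, a crude Monte Carlo estimate is easy to sketch (a minimal illustration, not a substitute for specialized routines such as MVNDST; the mean and covariance below are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0])                       # example mean (arbitrary)
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])      # example covariance (arbitrary)
x = np.array([1.0, 1.0])                        # point at which to estimate F(x)

# Draw many samples and count how often every component is <= x.
samples = rng.multivariate_normal(mu, Sigma, size=1_000_000)
F_hat = np.mean(np.all(samples <= x, axis=1))
print(F_hat)  # Monte Carlo estimate of P(X_1 <= 1, X_2 <= 1)
```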

A counterexample
The fact that two random variables X and Y are each normally distributed does not imply that the pair (X, Y) has a joint normal distribution. A simple example is one in which Y = X if |X| > 1 and Y = −X if |X| ≤ 1.
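A quick simulation makes the failure of joint normality visible (a minimal sketch; with this construction Y is standard normal by symmetry, but X + Y has a point mass at zero, so the pair cannot be jointly normal):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(1_000_000)
Y = np.where(np.abs(X) > 1, X, -X)   # Y = X if |X| > 1, else Y = -X

# Y is standard normal by symmetry ...
print(Y.mean(), Y.std())             # approximately 0 and 1

# ... but X + Y equals 0 exactly whenever |X| <= 1, so it has a point
# mass at zero and is not normal; hence (X, Y) is not jointly normal.
S = X + Y
print(np.mean(S == 0))               # approximately P(|X| <= 1), about 0.683
```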

Also see normally distributed and uncorrelated does not imply independent.

Normally distributed and independent
If X and Y are normally distributed and independent, then they are "jointly normally distributed", i.e., the pair (X, Y) has a bivariate normal distribution. There are of course also many bivariate normal distributions in which the components are correlated.

Bivariate case
In the 2-dimensional nonsingular case, the probability density function (with mean (0,0)) is



$$ f(x,y) = \frac{1}{2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp \left( -\frac{1}{2 (1-\rho^2)} \left( \frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} - \frac{2 \rho x y}{\sigma_x \sigma_y} \right) \right) $$

where $$\rho$$ is the correlation between $$X$$ and $$Y$$. In this case,



$$ \Sigma = \begin{bmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix}. $$

Linear transformation
If $$Y = B X \,$$ is a linear transformation of $$X\ \sim \mathcal{N}(\mu, \Sigma),$$ where $$B\,$$ is an $$M \times N$$ matrix, then $$Y\,$$ has a multivariate normal distribution with expected value $$B \mu \,$$ and covariance matrix $$B \Sigma B^T \,$$ (i.e., $$Y \sim \mathcal{N} \left(B \mu, B \Sigma B^T\right)$$).

Corollary: any subset of the $$X_i\,$$ has a marginal distribution that is also multivariate normal. To see this consider the following example: to extract the subset $$(X_1, X_2, X_4)^T \,$$, use



$$ B = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & \ldots & 0 \\ 0 & 1 & 0 & 0 & 0 & \ldots & 0 \\ 0 & 0 & 0 & 1 & 0 & \ldots & 0 \end{bmatrix} $$

which extracts the desired elements directly.
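The transformation rule can be checked empirically. The following sketch (NumPy; a smaller 3-dimensional example with an arbitrary random covariance, and a 2×3 selection matrix B) compares the sample statistics of Y = BX against Bμ and BΣBᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 3-dimensional normal (mean and covariance chosen arbitrarily).
mu = np.array([1.0, 2.0, 3.0])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T                       # a random positive-definite covariance

B = np.array([[1.0, 0.0, 0.0],        # selection matrix extracting (X_1, X_3)
              [0.0, 0.0, 1.0]])

X = rng.multivariate_normal(mu, Sigma, size=500_000)
Y = X @ B.T                           # Y = B X for each sample row

print(Y.mean(axis=0))                 # approximately B @ mu
print(np.cov(Y.T))                    # approximately B @ Sigma @ B.T
print(B @ Sigma @ B.T)
```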

Geometric interpretation
The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean. The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix $$\Sigma$$. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.

If $$\Sigma=U\Lambda U^T=U\Lambda^{1/2}(U\Lambda^{1/2})^T$$ is an eigendecomposition where the columns of U are unit eigenvectors and $$\Lambda$$ is a diagonal matrix of the eigenvalues, then we have


 * $$X\ \sim N(\mu, \Sigma) \iff X\ \sim \mu+U\Lambda^{1/2}N(0, I) \iff X\ \sim \mu+UN(0, \Lambda).$$

Moreover, U can be chosen to be a rotation matrix, as inverting an axis does not have any effect on $$N(0, \Lambda)$$, but inverting a column changes the sign of U's determinant. The distribution $$N(\mu, \Sigma)$$ is in effect $$N(0, I)$$ scaled by $$\Lambda^{1/2}$$, rotated by U and translated by $$\mu$$.

Conversely, any choice of $$\mu$$, full-rank matrix $$U$$, and positive diagonal entries $$\Lambda_i$$ yields a non-singular multivariate normal distribution. If any $$\Lambda_i$$ is zero and $$U$$ is square, the resulting covariance matrix $$U\Lambda U^T$$ is singular. Geometrically this means that every contour ellipsoid is infinitely thin and has zero volume in $$N$$-dimensional space, as at least one of the principal axes has length zero.
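A short sketch (NumPy, with an arbitrary example covariance) that recovers the principal axes via an eigendecomposition and generates samples as $$\mu+U\Lambda^{1/2}N(0,I)$$, as described above:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 1.0])
Sigma = np.array([[3.0, 1.0], [1.0, 2.0]])   # example covariance (arbitrary)

# Eigendecomposition Sigma = U Lambda U^T (eigh is for symmetric matrices).
eigvals, U = np.linalg.eigh(Sigma)
print(U)                      # columns: directions of the principal axes
print(np.sqrt(eigvals))       # relative lengths of the principal axes

# Sample X = mu + U Lambda^{1/2} Z with Z ~ N(0, I).
Z = rng.standard_normal((100_000, 2))
X = mu + Z @ (U * np.sqrt(eigvals)).T   # scales each column of U by sqrt(eigval)
print(np.cov(X.T))            # approximately Sigma
```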

Correlations and independence
In general, random variables may be uncorrelated but highly dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent.

But it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent. Two random variables that are normally distributed may fail to be jointly normally distributed, i.e., the vector whose components they are may fail to have a multivariate normal distribution. For an example of two normally distributed random variables that are uncorrelated but not independent, see normally distributed and uncorrelated does not imply independent.

Higher moments
The kth-order moments of X are defined by



$$ \mu_{1,\dots,N}(X)\ \stackrel{\mathrm{def}}{=}\ \mu_{r_{1},\dots,r_{N}}(X)\ \stackrel{\mathrm{def}}{=}\ E\left[ \prod\limits_{j=1}^{N}X_j^{r_{j}}\right] $$

where $$r_{1}+r_{2}+\cdots+r_{N}=k.$$

The central $$k$$th-order moments are given as follows:

(a) If $$k$$ is odd, $$\mu _{1,\dots,N}(X-\mu )=0$$.

(b) If $$k$$ is even with $$k=2\lambda$$, then

$$ \mu_{1,\dots,2\lambda }(X-\mu )=\sum \left( \sigma_{ij}\sigma_{kl}\cdots\sigma_{XZ}\right) $$

where the sum is taken over all allocations of the set $$\left\{ 1,\dots,2\lambda \right\}$$ into $$\lambda$$ (unordered) pairs, giving $$(2\lambda -1)!/(2^{\lambda -1}(\lambda -1)!)$$ terms in the sum, each being the product of $$\lambda$$ covariances. The covariances are determined by replacing the terms of the list $$\left[ 1,\dots,2\lambda \right]$$ by the corresponding terms of the list consisting of $$r_1$$ ones, then $$r_2$$ twos, etc., after each of the possible allocations of the former list into pairs.

In particular, the fourth-order moments are
 * $$E\left[ X_{i}^{4}\right] = 3( \sigma _{ii}) ^{2}$$
 * $$E\left[ X_{i}^{3}X_{j}\right] = 3\sigma _{ii}\sigma _{ij}$$
 * $$E\left[ X_{i}^{2}X_{j}^{2}\right] = \sigma _{ii}\sigma _{jj}+2\left( \sigma _{ij}\right) ^{2}$$
 * $$E\left[ X_{i}^{2}X_{j}X_{k}\right] = \sigma _{ii}\sigma _{jk}+2\sigma _{ij}\sigma _{ik}$$
 * $$E\left[ X_{i}X_{j}X_{k}X_{n}\right] = \sigma _{ij}\sigma _{kn}+\sigma _{ik}\sigma _{jn}+\sigma _{in}\sigma _{jk}.$$
For fourth-order moments (four variables) there are three terms. For sixth-order moments there are 3 × 5 = 15 terms, and for eighth-order moments there are 3 × 5 × 7 = 105 terms. The sixth-order moment case can be expanded as


 * $$\begin{align} & {} E[X_{1}X_{2}X_{3}X_{4}X_{5}X_{6}] \\ &{} = E[X_{1}X_{2}]E[X_{3}X_{4}]E[X_{5}X_{6}]+E[X_{1}X_{2}]E[X_{3}X_{5}]E[X_{4}X_{6}]+E[X_{1}X_{2}]E[X_{3}X_{6}]E[X_{4}X_{5}] \\ &{} + E[X_{1}X_{3}]E[X_{2}X_{4}]E[X_{5}X_{6}]+E[X_{1}X_{3}]E[X_{2}X_{5}]E[X_{4}X_{6}]+E[X_{1}X_{3}]E[X_{2}X_{6}]E[X_{4}X_{5}] \\ &+ E[X_{1}X_{4}]E[X_{2}X_{3}]E[X_{5}X_{6}]+E[X_{1}X_{4}]E[X_{2}X_{5}]E[X_{3}X_{6}]+E[X_{1}X_{4}]E[X_{2}X_{6}]E[X_{3}X_{5}] \\ & + E[X_{1}X_{5}]E[X_{2}X_{3}]E[X_{4}X_{6}]+E[X_{1}X_{5}]E[X_{2}X_{4}]E[X_{3}X_{6}]+E[X_{1}X_{5}]E[X_{2}X_{6}]E[X_{3}X_{4}] \\ &+E[X_{1}X_{6}]E[X_{2}X_{3}]E[X_{4}X_{5}]+E[X_{1}X_{6}]E[X_{2}X_{4}]E[X_{3}X_{5}]+E[X_{1}X_{6}]E[X_{2}X_{5}]E[X_{3}X_{4}]. \end{align}$$
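The pairing rule above (often called Isserlis' theorem, or Wick's formula) is mechanical enough to implement directly. This sketch (NumPy; the covariance is an arbitrary random example) enumerates the pair partitions and checks the fourth-order identity against a simulation:

```python
import numpy as np

def pair_partitions(idx):
    """Yield all partitions of the index list idx into unordered pairs."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for i, partner in enumerate(rest):
        for tail in pair_partitions(rest[:i] + rest[i+1:]):
            yield [(first, partner)] + tail

def wick_moment(indices, Sigma):
    """Central moment E[X_{i1} ... X_{ik}] as the sum over pairings."""
    if len(indices) % 2 == 1:
        return 0.0     # odd-order central moments vanish
    return sum(np.prod([Sigma[a, b] for a, b in pairing])
               for pairing in pair_partitions(list(indices)))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T                                 # arbitrary covariance
X = rng.multivariate_normal(np.zeros(4), Sigma, size=2_000_000)

print(wick_moment((0, 1, 2, 3), Sigma))         # sigma_01*sigma_23 + ...
print(np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3]))   # simulation estimate
```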

Conditional distributions
If $$\mu$$ and $$\Sigma$$ are partitioned as follows



$$ \mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix} \quad$$ with sizes $$\begin{bmatrix} q \times 1 \\ (N-q) \times 1 \end{bmatrix}$$



$$ \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} \quad$$ with sizes $$\begin{bmatrix} q \times q & q \times (N-q) \\ (N-q) \times q & (N-q) \times (N-q) \end{bmatrix}$$

then the distribution of $$X_1$$ conditional on $$X_2=a$$ is multivariate normal $$(X_1|X_2=a) \sim N(\bar{\mu}, \overline{\Sigma})$$, where



$$ \bar{\mu} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} \left( a - \mu_2 \right) $$

and covariance matrix



$$ \overline{\Sigma} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}. $$

This matrix is the Schur complement of $$\Sigma_{22}$$ in $$\Sigma$$.

Note that knowing the value of $$X_2$$ to be $$a$$ alters the variance; perhaps more surprisingly, the mean is shifted by $$\Sigma_{12} \Sigma_{22}^{-1} \left(a - \mu_2 \right)$$. Compare this with the situation of not knowing the value of $$a$$, in which case $$X_1$$ would have distribution $$N_q \left(\mu_1, \Sigma_{11} \right)$$.

The matrix $$\Sigma_{12} \Sigma_{22}^{-1}$$ is known as the matrix of regression coefficients.
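These formulas translate directly into code. A minimal sketch (NumPy; the 3-dimensional parameters and the observed value are arbitrary example choices) computing the conditional mean and covariance of $$X_1$$ given $$X_2 = a$$:

```python
import numpy as np

# Example 3-dimensional normal, partitioned with q = 1 (arbitrary values).
mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
q = 1
a = np.array([1.5, 1.0])     # observed value of X_2 (the last N-q components)

S11 = Sigma[:q, :q]
S12 = Sigma[:q, q:]
S21 = Sigma[q:, :q]
S22 = Sigma[q:, q:]

# Conditional mean and covariance; solve() avoids forming S22^{-1} explicitly.
mu_bar = mu[:q] + S12 @ np.linalg.solve(S22, a - mu[q:])
Sigma_bar = S11 - S12 @ np.linalg.solve(S22, S21)   # Schur complement of S22

print(mu_bar, Sigma_bar)
```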

Fisher information matrix
The Fisher information matrix (FIM) for a normal distribution takes a special form. The $$(m,n)$$ element of the FIM for $$X \sim N(\mu(\theta), \Sigma(\theta))$$ is



$$ \mathcal{I}_{m,n} = \frac{\partial \mu}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu^\top}{\partial \theta_n} + \frac{1}{2} \mathrm{tr} \left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n} \right) $$

where

$$ \frac{\partial \mu}{\partial \theta_m} = \begin{bmatrix} \frac{\partial \mu_1}{\partial \theta_m} & \frac{\partial \mu_2}{\partial \theta_m} & \cdots & \frac{\partial \mu_N}{\partial \theta_m} \end{bmatrix} $$

$$ \frac{\partial \mu^\top}{\partial \theta_m} = \left( \frac{\partial \mu}{\partial \theta_m} \right)^\top = \begin{bmatrix} \frac{\partial \mu_1}{\partial \theta_m} \\ \frac{\partial \mu_2}{\partial \theta_m} \\ \vdots \\ \frac{\partial \mu_N}{\partial \theta_m} \end{bmatrix} $$

$$ \frac{\partial \Sigma}{\partial \theta_m} = \begin{bmatrix} \frac{\partial \Sigma_{1,1}}{\partial \theta_m} & \frac{\partial \Sigma_{1,2}}{\partial \theta_m} & \cdots & \frac{\partial \Sigma_{1,N}}{\partial \theta_m} \\ \frac{\partial \Sigma_{2,1}}{\partial \theta_m} & \frac{\partial \Sigma_{2,2}}{\partial \theta_m} & \cdots & \frac{\partial \Sigma_{2,N}}{\partial \theta_m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial \Sigma_{N,1}}{\partial \theta_m} & \frac{\partial \Sigma_{N,2}}{\partial \theta_m} & \cdots & \frac{\partial \Sigma_{N,N}}{\partial \theta_m} \end{bmatrix} $$

and $$\mathrm{tr}$$ is the trace function.
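As an illustration, consider the hypothetical parameterization $$\mu(\theta) = (\theta_1, \theta_1)$$ and $$\Sigma(\theta) = \theta_2 I$$ (chosen here purely for the example). The FIM entries can then be evaluated directly from the formula:

```python
import numpy as np

# Hypothetical parameterization (for illustration only):
#   mu(theta)    = [theta_1, theta_1]
#   Sigma(theta) = theta_2 * I
theta = np.array([1.0, 2.0])
N = 2

Sigma = theta[1] * np.eye(N)
Sigma_inv = np.linalg.inv(Sigma)

# Derivatives of mu and Sigma with respect to each parameter.
dmu = [np.array([1.0, 1.0]),          # d mu / d theta_1
       np.array([0.0, 0.0])]          # d mu / d theta_2
dSigma = [np.zeros((N, N)),           # d Sigma / d theta_1
          np.eye(N)]                  # d Sigma / d theta_2

I = np.zeros((2, 2))
for m in range(2):
    for n in range(2):
        I[m, n] = (dmu[m] @ Sigma_inv @ dmu[n]
                   + 0.5 * np.trace(Sigma_inv @ dSigma[m]
                                    @ Sigma_inv @ dSigma[n]))
print(I)   # expect [[N/theta_2, 0], [0, N/(2*theta_2**2)]] = [[1, 0], [0, 0.25]]
```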

Kullback-Leibler divergence
The Kullback-Leibler divergence from $$\mathcal{N}_0 = \mathcal{N}_N(\mu_0, \Sigma_0)$$ to $$\mathcal{N}_1 = \mathcal{N}_N(\mu_1, \Sigma_1)$$ is:



$$ D_\text{KL}(\mathcal{N}_0 \| \mathcal{N}_1) = { 1 \over 2 } \left( \log_e \left( { \det \Sigma_1 \over \det \Sigma_0 } \right) + \mathrm{tr} \left( \Sigma_1^{-1} \Sigma_0 \right) + \left( \mu_1 - \mu_0\right)^\top \Sigma_1^{-1} ( \mu_1 - \mu_0 ) - N \right). $$
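A direct transcription of this formula (NumPy; the parameters below are arbitrary example values):

```python
import numpy as np

def kl_mvn(mu0, Sigma0, mu1, Sigma1):
    """KL divergence D_KL( N(mu0, Sigma0) || N(mu1, Sigma1) )."""
    N = len(mu0)
    diff = mu1 - mu0
    Sigma1_inv = np.linalg.inv(Sigma1)
    return 0.5 * (np.log(np.linalg.det(Sigma1) / np.linalg.det(Sigma0))
                  + np.trace(Sigma1_inv @ Sigma0)
                  + diff @ Sigma1_inv @ diff
                  - N)

mu0, Sigma0 = np.zeros(2), np.eye(2)
mu1, Sigma1 = np.array([1.0, 0.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
print(kl_mvn(mu0, Sigma0, mu1, Sigma1))   # >= 0, and 0 iff the two match
```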

Estimation of parameters
The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle and elegant. See estimation of covariance matrices.

In short, the probability density function (pdf) of an N-dimensional multivariate normal is


 * $$f(x)=(2 \pi)^{-N/2} \det(\Sigma)^{-1/2} \exp\left(-{1 \over 2} (x-\mu)^T \Sigma^{-1} (x-\mu)\right)$$

and the ML estimator of the covariance matrix is


 * $$\widehat\Sigma = {1 \over n}\sum_{i=1}^n (X_i-\overline{X})(X_i-\overline{X})^T$$

which is simply the sample covariance matrix for sample size n. This is a biased estimator whose expectation is


 * $$E[\widehat\Sigma] = {n-1 \over n}\Sigma.$$

An unbiased sample covariance is


 * $$\widehat\Sigma = {1 \over n-1}\sum_{i=1}^n (X_i-\overline{X})(X_i-\overline{X})^T.$$
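A small sketch (NumPy, with simulated data from an arbitrary example covariance) comparing the ML estimator with the unbiased version:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])   # true covariance (arbitrary example)

n = 10
X = rng.multivariate_normal(mu, Sigma, size=n)
Xbar = X.mean(axis=0)
centered = X - Xbar

Sigma_ml = (centered.T @ centered) / n          # ML estimator, biased by (n-1)/n
Sigma_unbiased = (centered.T @ centered) / (n - 1)

print(Sigma_ml)
print(Sigma_unbiased)        # matches np.cov(X.T), which uses n-1 by default
```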

Entropy
The differential entropy of the multivariate normal distribution is


 * $$h\left(f\right)= -\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}f(x)\ln f(x)\,dx$$


 * $$=\frac12 \left(N+N\ln\left(2\pi\right)+\ln\left| \Sigma \right|\right)\!$$


 * $$=\frac{1}{2}\ln\{(2\pi e)^N \left| \Sigma \right|\}$$

where $$\left| \Sigma \right|$$ is the determinant of the covariance matrix $$\Sigma$$.
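The closed form is easy to check against a Monte Carlo estimate of $$-E[\ln f(X)]$$ (a minimal sketch using SciPy's logpdf; the covariance is an arbitrary example):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])   # example covariance (arbitrary)

# Closed form: h = (1/2) ln( (2 pi e)^N |Sigma| )
N = len(mu)
h_exact = 0.5 * np.log((2 * np.pi * np.e) ** N * np.linalg.det(Sigma))

# Monte Carlo: h = -E[ln f(X)], estimated from samples.
dist = multivariate_normal(mu, Sigma)
samples = dist.rvs(size=500_000, random_state=rng)
h_mc = -np.mean(dist.logpdf(samples))

print(h_exact, h_mc)   # the two values should agree closely
```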

Multivariate normality tests
Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null hypothesis is that the data come from a multivariate normal distribution; a sufficiently small p-value therefore indicates non-normal data. Multivariate normality tests include the Cox-Small test and Smith and Jain's adaptation of the Friedman-Rafsky test.

Drawing values from the distribution
A widely used method for drawing a random vector $$X$$ from the $$N$$-dimensional multivariate normal distribution with mean vector $$\mu$$ and covariance matrix $$\Sigma$$ (required to be symmetric and positive-definite) works as follows:


 * 1) Compute the Cholesky decomposition of $$\Sigma$$, that is, find the unique lower triangular matrix $$A$$ with positive diagonal entries such that $$A\,A^T = \Sigma$$.
 * 2) Let $$Z=(z_1,\dots,z_N)^T$$ be a vector whose components are $$N$$ independent standard normal variates (which can be generated, for example, by using the Box-Muller transform).
 * 3) Let $$X$$ be $$\mu + A\,Z$$.
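A minimal NumPy sketch of this three-step procedure (the mean and covariance are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.5]])   # must be symmetric positive-definite

# 1) Cholesky factor: lower triangular A with A @ A.T == Sigma.
A = np.linalg.cholesky(Sigma)

# 2) Independent standard normal variates.
Z = rng.standard_normal(3)

# 3) Transform: X = mu + A Z has the desired distribution.
X = mu + A @ Z
print(X)
```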