Regression analysis

In statistics, regression analysis examines the relation of a dependent variable (response variable) to specified independent variables (explanatory variables). The mathematical model of their relationship is the regression equation. The dependent variable is modeled as a random variable because of uncertainty as to its value, given only the value of each independent variable. A regression equation contains estimates of one or more hypothesized regression parameters ("constants"). These estimates are constructed using data for the variables, such as from a sample. The estimates measure the relationship between the dependent variable and each of the independent variables. They also allow estimating the value of the dependent variable for a given value of each respective independent variable.

Uses of regression include curve fitting, prediction (including forecasting of time-series data), modeling of causal relationships, and testing scientific hypotheses about relationships between variables.

History of regression
The term "regression" was used in the nineteenth century to describe a biological phenomenon, namely that the progeny of exceptional individuals tend on average to be less exceptional than their parents, and more like their more distant ancestors. Francis Galton, a cousin of Charles Darwin, studied this phenomenon and applied the slightly misleading term "regression towards mediocrity" to it. For Galton, regression had only this biological meaning, but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context.

Simple linear regression
The general form of a simple linear regression is
 * $$y_i=\alpha+\beta x_i +\varepsilon_i$$

where $$\alpha$$ is the intercept, $$\beta$$ is the slope, and $$\varepsilon_i$$ is the error term, which picks up the unpredictable part of the response variable $$y_i$$. The error term is usually posited to be normally distributed. The $$x$$'s and $$y$$'s are the data quantities from the sample or population in question, and $$\alpha$$ and $$\beta$$ are the unknown parameters ("constants") to be estimated from the data. Estimates for the values of $$\alpha$$ and $$\beta$$ can be derived by the method of ordinary least squares, so called because the estimates minimize the sum of squared errors for the given data set. The estimates of $$\alpha$$ and $$\beta$$ are often denoted by $$\widehat{\alpha}$$ and $$\widehat{\beta}$$ or by the corresponding Roman letters. It can be shown (see Draper and Smith, 1998, for details) that the least-squares estimates are given by


 * $$\hat{\beta}=\frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{\sum(x_i-\bar{x})^2}$$

and


 * $$\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}$$

where $$\bar{x}$$ is the mean (average) of the $$x$$ values and $$\bar{y}$$ is the mean of the $$y$$ values.
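As a sketch, the two closed-form estimates above can be computed directly; the function name is my own, and NumPy is assumed available:

```python
import numpy as np

def simple_ols(x, y):
    # Least-squares estimates of the intercept (alpha) and slope (beta),
    # using the closed-form expressions given above.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta

# Noise-free data generated from y = 1 + 2x recovers the parameters exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x
alpha_hat, beta_hat = simple_ols(x, y)
```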

Generalizing simple linear regression
The simple model above can be generalized in different ways.


 * The number of predictors can be increased from one to several. See multiple linear regression.


 * The relationship between the knowns (the $$x$$s and $$y$$s) and the unknowns ($$\alpha$$ and the $$\beta$$s) can be nonlinear. See nonlinear regression.


 * The response variable may be non-continuous. For binary (zero or one) variables, there are the probit and logit models. The multivariate probit model makes it possible to estimate jointly the relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values, there is the multinomial logit model; for ordinal variables with more than two values, there are the ordered logit and ordered probit models. An alternative to such procedures is linear regression based on polychoric or polyserial correlations between the categorical variables; such procedures differ in the assumptions made about the distribution of the variables in the population. If the dependent variable is a non-negative count of occurrences of an event, count models such as Poisson regression or the negative binomial model may be used.


 * The error term may have a distribution other than the normal distribution. See generalized linear model.


 * The form of the right hand side can be determined from the data. See Nonparametric regression. These approaches require a large number of observations, as the data are used to build the model structure as well as estimate the model parameters. They are usually computationally intensive.
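To make the logit model mentioned above concrete, here is a minimal sketch of fitting it by the Newton-Raphson iteration commonly used for maximum likelihood; the synthetic data and all names are my own illustration:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    # Maximum-likelihood logistic regression via Newton-Raphson.
    # X is the (n, p) design matrix, including a column of ones for the intercept.
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        W = p * (1.0 - p)                          # weights for the Hessian
        H = X.T @ (W[:, None] * X)                 # negative Hessian of log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

# Synthetic binary outcome, more likely for larger x (true coefficients 0.5 and 2).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))
y = rng.binomial(1, p_true)
X = np.column_stack([np.ones_like(x), x])
beta_hat = fit_logit(X, y)
```

With 200 observations the estimates should land near the true values, though sampling variation remains.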

Regression diagnostics
Once a regression model has been constructed, it is important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include R-squared, analyses of the pattern of residuals, and construction of an ANOVA table. Statistical significance is checked by an F-test of the overall fit, followed by t-tests of individual parameters.
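As an illustration, R-squared can be computed directly from the residuals; a minimal sketch (the function name is mine):

```python
import numpy as np

def r_squared(y, y_hat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)      # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# A perfect fit gives R^2 = 1; predicting the mean everywhere gives R^2 = 0.
y = np.array([1.0, 2.0, 3.0, 4.0])
r2_perfect = r_squared(y, y)
r2_mean = r_squared(y, np.full(4, y.mean()))
```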

Estimation of model parameters
The parameters of a regression model can be estimated in many ways. The most common are


 * the method of least squares
 * the method of maximum likelihood and
 * Bayesian methods

For a model with normally distributed errors, the method of least squares and the method of maximum likelihood coincide; more generally, the Gauss-Markov theorem states that least squares gives the best linear unbiased estimator whenever the errors have mean zero, constant variance, and are uncorrelated.
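For linear models, the least-squares estimate has the closed form $$\widehat{\beta}=(\mathbf{X}^t\mathbf{X})^{-1}\mathbf{X}^t y$$. A quick sketch checking the normal equations against NumPy's built-in least-squares solver on synthetic data:

```python
import numpy as np

# Synthetic data from y = 1 + 2x plus a little noise.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)

# Normal equations: solve (X^t X) beta = X^t y.
beta_normal_eq = np.linalg.solve(X.T @ X, X.T @ y)

# NumPy's SVD-based least-squares solver should agree.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```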

Interpolation and extrapolation
Regression models predict a value of the $$y$$ variable given known values of the $$x$$ variables. Prediction within the range of values of the $$x$$ variables used to construct the model is known as interpolation. Prediction outside that range is known as extrapolation, and it is riskier, because the fitted model has not been checked against data there.
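A small illustration of the risk: a cubic polynomial fitted to a sine curve on $$[0, 3]$$ predicts well inside that range but drifts far from the true function outside it (the function and ranges are arbitrary choices for illustration):

```python
import numpy as np

# Fit a cubic to sin(x) using 30 points on [0, 3].
x = np.linspace(0.0, 3.0, 30)
coeffs = np.polyfit(x, np.sin(x), 3)

# Interpolation: prediction inside the fitted range stays close to the truth.
interp_err = abs(np.polyval(coeffs, 1.5) - np.sin(1.5))

# Extrapolation: at x = 6, well outside [0, 3], the cubic diverges from sin(x).
extrap_err = abs(np.polyval(coeffs, 6.0) - np.sin(6.0))
```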

Assumptions underpinning regression
Regression analysis depends on certain assumptions:

1. The predictors must be linearly independent, i.e. it must not be possible to express any predictor as a linear combination of the others. See multicollinearity.

2. The error terms must be normally distributed and independent.

3. The variance of the error terms must be constant.

Examples
To illustrate the goals of regression, consider the following example.

Prediction of future observations
The following data set gives the average heights and weights for American women aged 30-39 (source: The World Almanac and Book of Facts, 1975). We would like to see how the weight of these women depends on their height. We are therefore looking for a function $$\eta$$ such that $$Y=\eta(X)+\varepsilon$$, where $$Y$$ is the weight of the women and $$X$$ their height. Intuitively, we can guess that if the women's proportions and density are constant, then their weight should scale with the cube of their height.



$$\vec{X}$$ will denote the vector containing all the measured heights ($$\vec{X}=(58,59,60,\dots)$$) and $$\vec{Y}=(115,117,120,\dots)$$ the vector containing all measured weights. We can suppose that the errors are independent of each other and have constant variance, which means the Gauss-Markov assumptions hold. We can therefore use the least-squares estimator, i.e. we are looking for coefficients $$\beta_0, \beta_1$$ and $$\beta_2$$ that satisfy, as well as possible in the least-squares sense, the equation:


 * $$\vec{Y}=\beta_0 + \beta_1 \vec{X} + \beta_2 \vec{X}^3+\vec{\varepsilon}$$

Geometrically, what we will be doing is an orthogonal projection of $$Y$$ on the subspace generated by the variables $$1, X$$ and $$X^3$$. The matrix $$\mathbf{X}$$ is constructed simply by putting a first column of 1's (the constant term in the model), a column with the original heights (the $$X$$ in the model), and a third column with these values cubed ($$X^3$$). The realization of this matrix (i.e. for the data at hand) begins:

$$\mathbf{X}=\left[\begin{matrix} 1&58&195112\\ 1&59&205379\\ 1&60&216000\\ \vdots&\vdots&\vdots \end{matrix}\right]$$

The matrix $$(\mathbf{X}^t \mathbf{X})^{-1}$$ (sometimes called the "dispersion matrix", since it is proportional to the covariance matrix of the estimated coefficients) is:

$$ \left[\begin{matrix} 1.9\cdot10^3&-45&3.5\cdot 10^{-3}\\ -45&1.0&-8.1\cdot 10^{-5}\\ 3.5\cdot 10^{-3}&-8.1\cdot 10^{-5}&6.4\cdot 10^{-9} \end{matrix}\right]$$

Vector $$\widehat{\beta}_{LS}$$ is therefore:

$$\widehat{\beta}_{LS}=(X^tX)^{-1}X^{t}y= (147, -2.0, 4.3\cdot 10^{-4})$$

hence $$\eta(X) = 147 - 2.0 X + 4.3\cdot 10^{-4} X^3$$
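The fit above can be reproduced numerically, assuming the full data table matches R's built-in `women` dataset, which carries the same World Almanac citation (an assumption, since only the first values are shown here):

```python
import numpy as np

# Heights (in) and weights (lb); assumed to match R's `women` dataset,
# which cites the same World Almanac and Book of Facts, 1975 source.
height = np.arange(58, 73, dtype=float)
weight = np.array([115, 117, 120, 123, 126, 129, 132, 135,
                   139, 142, 146, 150, 154, 159, 164], dtype=float)

# Design matrix with columns 1, X, X^3, as in the model above.
X = np.column_stack([np.ones_like(height), height, height ** 3])

# Least-squares estimate via the normal equations: (X^t X)^{-1} X^t y.
beta = np.linalg.solve(X.T @ X, X.T @ weight)
residuals = weight - X @ beta
```

The resulting coefficients should land near the values quoted in the text, $$(147, -2.0, 4.3\cdot 10^{-4})$$, with small residuals.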



The confidence intervals are computed using:


 * $$[\widehat{\beta_j}-\widehat{\sigma}\sqrt{s_j}t_{n-p;1-\frac{\alpha}{2}};\widehat{\beta_j}+\widehat{\sigma}\sqrt{s_j}t_{n-p;1-\frac{\alpha}{2}}]$$

with:


 * $$\widehat{\sigma}=0.52$$
 * $$s_0=1.9\cdot 10^3,\; s_1=1.0,\; s_2=6.4\cdot 10^{-9}$$ (the diagonal elements of $$(\mathbf{X}^t \mathbf{X})^{-1}$$)
 * $$\alpha=5\%$$
 * $$t_{n-p;1-\frac{\alpha}{2}}=2.2$$

Therefore, we can say that the 95% confidence intervals are:


 * $$\beta_0\in[112, 181]$$


 * $$\beta_1\in[-2.8, -1.2]$$


 * $$\beta_2\in[3.6\cdot 10^{-4}, 4.9\cdot 10^{-4}]$$

Software

 * All major statistical software packages (e.g. the SAS System, SPSS, Minitab, or Stata) perform various types of regression analysis.
 * Simple regressions can be done in spreadsheets such as MS Excel or OpenOffice.org Calc.
 * Experts can run more complex types of regression in statistical computing environments such as Mathematica, R, Stata, or Matlab.
 * There are a number of software programs that perform specialized forms of regression.
 * There are a number of web sites that allow online linear and nonlinear regression.