Loss function

In statistics, decision theory and economics, a loss function is a function that maps an event (technically an element of a sample space) onto a real number representing the economic cost or regret associated with the event.

Loss functions in economics are typically expressed in monetary terms. For example:


 * $$ \$ = \frac{\mathrm{loss}}{\mathrm{time\ period}}. $$

Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering.

Loss functions are complementary to utility functions which represent benefit and satisfaction. Typically, for utility U:


 * $$\mathrm{loss} = k - U $$

where k is some arbitrary constant.

Expected loss
The loss is itself a random variable, so it has a cumulative distribution function and an expected value. More commonly, however, the loss function is expressed as a function of some other random variable. For example, the time that a light bulb operates before failure is a random variable, and the loss arising from having to cope in the dark and/or replace the bulb can be specified as a function of the failure time.

The expected loss (sometimes known as risk) is:


 * $$\Lambda = \int_{-\infty}^\infty \!\!\lambda(x)\, f(x)\, \mathrm{d}x$$

where:
 * λ(x) = the loss function
 * x = a continuous random variable
 * f(x) = the probability density function of x

Minimum expected loss (or minimum risk) is widely used as a criterion for choosing between prospects. It is closely related to the criterion of maximum expected utility.
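The expected-loss integral above can be approximated numerically. The following sketch uses the light-bulb example; the failure-time distribution (exponential with a 1000-hour mean) and the loss numbers (a fixed replacement cost plus a penalty for failing before a 2000-hour target) are invented for illustration.

```python
import math

def expected_loss(loss, pdf, a, b, n=100_000):
    """Approximate the risk Λ = ∫ λ(x) f(x) dx over [a, b] by the trapezoidal rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * loss(x) * pdf(x)
    return total * h

# Hypothetical setup: bulb lifetime X ~ Exponential(rate = 1/1000 hours).
rate = 1 / 1000
pdf = lambda x: rate * math.exp(-rate * x)

# Hypothetical loss: $5 replacement cost, plus $0.01 per hour of shortfall
# relative to a 2000-hour target lifetime.
loss = lambda x: 5 + 0.01 * max(0.0, 2000 - x)

# Truncate the upper limit where the remaining probability mass is negligible.
risk = expected_loss(loss, pdf, 0, 50_000)
```

Under this setup the risk has a closed form, 5 + 0.01·(1000 + 1000·e⁻²) ≈ 16.35, which the numerical integral reproduces; with several prospects, the minimum-expected-loss criterion picks the one with the smallest such value.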

Loss functions in Bayesian statistics
One consequence of Bayesian inference is that the experimental data and the loss function do not, by themselves, determine a decision; what matters is the relationship between the loss function and the prior probability. It is therefore possible for two different loss functions to lead to the same decision, when the prior probability distributions associated with each compensate for the details of each loss function.

Combining the three elements of the prior probability, the data, and the loss function then allows decisions to be based on maximizing the subjective expected utility, a concept introduced by Leonard J. Savage.

Regret
Savage also argued that, when using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret: the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the underlying circumstances been known and the decision that was in fact made before they were known.
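A minimax-regret decision can be computed directly from a loss table. In this sketch the decisions, states of nature, and loss values are all invented for illustration.

```python
# Hypothetical loss table: rows are decisions, columns are states of nature.
losses = {
    "carry umbrella": {"rain": 1, "sun": 2},
    "no umbrella":    {"rain": 8, "sun": 0},
}
states = ["rain", "sun"]

# Regret of a decision in a state: its loss minus the smallest loss that
# could have been achieved in that state with hindsight.
best = {s: min(losses[d][s] for d in losses) for s in states}
regret = {d: {s: losses[d][s] - best[s] for s in states} for d in losses}

# Minimax regret: choose the decision whose worst-case regret is smallest.
choice = min(losses, key=lambda d: max(regret[d][s] for s in states))
```

Here "carry umbrella" has worst-case regret 2 while "no umbrella" has worst-case regret 7, so the minimax-regret criterion selects the former.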

Quadratic loss function
The use of a quadratic loss function is common, for example when using least squares techniques or Taguchi methods. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is
 * $$\lambda(x) = C (t - x)^2 $$

for some constant C; often the value of the constant makes no difference to a decision, and can then be ignored by setting it equal to 1.
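A useful property of quadratic loss, underlying its link to least squares, is that the average quadratic loss over a sample is minimized by taking the target to be the sample mean; choosing any other target t adds exactly (t − m)² to the average loss. A small sketch (the normal sample here is invented for illustration):

```python
import random
import statistics

random.seed(0)
# Hypothetical data: 10,000 draws from a normal distribution.
xs = [random.gauss(10.0, 2.0) for _ in range(10_000)]

def mean_quadratic_loss(t, xs, C=1.0):
    """Average of λ(x) = C (t - x)^2 over the sample."""
    return C * sum((t - x) ** 2 for x in xs) / len(xs)

m = statistics.fmean(xs)
# For any other target t, the average loss decomposes as
#   mean_quadratic_loss(t) = mean_quadratic_loss(m) + (t - m)^2,
# so the sample mean is the minimizer.
```

The constant C scales every candidate target's loss equally, which is why it can be set to 1 without changing the decision.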