Matthews Correlation Coefficient

The Matthews Correlation Coefficient is used in machine learning as a measure of the quality of binary (two-class) classifications. It takes into account true and false positives and negatives, and is generally regarded as a balanced measure that can be used even when the classes are of very different sizes. It returns a value between -1 and +1: a coefficient of +1 represents a perfect prediction, 0 an average random prediction, and -1 the worst possible prediction.

While there is no perfect way of describing the confusion matrix of true and false positives and negatives by a single number, the Matthews Correlation Coefficient is generally regarded as one of the best such measures.

Other measures, such as the proportion of correct predictions, are not useful when the two classes are of very different sizes. For example, assigning every object to the larger set achieves a high proportion of correct predictions, but is not generally a useful classification.
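As a quick illustration of this point, here is a minimal sketch in Python (the counts of 95 negatives and 5 positives are invented for illustration): a classifier that assigns everything to the larger class still scores 95% on the proportion of correct predictions.

```python
# Hypothetical imbalanced data: 95 negatives, 5 positives (illustrative counts).
actual    = [0] * 95 + [1] * 5
predicted = [0] * 100            # assign every object to the larger class

# Proportion of correct predictions ("accuracy").
accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
print(accuracy)                  # 0.95, although no positive is ever found
```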

$$ MCC = \frac{\mathit{t}_p\mathit{t}_n - \mathit{f}_p\mathit{f}_n}{\sqrt{(\mathit{t}_p + \mathit{f}_p)(\mathit{t}_p + \mathit{f}_n)(\mathit{t}_n + \mathit{f}_p)(\mathit{t}_n + \mathit{f}_n)}} $$

In this equation, tp is the number of true positives, tn the number of true negatives, fp the number of false positives, and fn the number of false negatives. If any of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews Correlation Coefficient of zero, which can be shown to be the correct limiting value.
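A minimal sketch of the formula in Python, including the zero-denominator convention just described (the function name mcc is an arbitrary choice, not from any particular library):

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # If any of the four sums is zero the denominator vanishes; by the
    # convention above this yields the limiting value MCC = 0.
    if denominator == 0:
        return 0.0
    return numerator / denominator

# The degenerate classifier from the imbalanced example above:
# everything assigned to the negative (larger) class.
print(mcc(tp=0, tn=95, fp=0, fn=5))   # 0.0 -- no better than random
# A perfect prediction on the same data.
print(mcc(tp=5, tn=95, fp=0, fn=0))   # 1.0
```

On the earlier imbalanced example, the always-negative classifier's 95% accuracy thus collapses to an MCC of 0, which is exactly the behaviour the limiting rule is intended to give.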