Cohen's kappa

Overview
Cohen's kappa coefficient is a statistical measure of inter-rater reliability. It is generally thought to be a more robust measure than a simple percent agreement calculation, since &kappa; takes into account the agreement occurring by chance. Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories.

The equation for &kappa; is:


$$\kappa = \frac{\Pr(a) - \Pr(e)}{1 - \Pr(e)}$$

where Pr(a) is the relative observed agreement among raters, and Pr(e) is the hypothetical probability of chance agreement, calculated from the observed data by using each rater's marginal category frequencies. If the raters are in complete agreement then &kappa; = 1. If there is no agreement among the raters (other than what would be expected by chance) then &kappa; &le; 0.
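The calculation can be illustrated with a minimal sketch (not part of the original article); the function name and example data below are hypothetical, chosen only to show how Pr(a) and Pr(e) are obtained from two raters' labels.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' labels over the same N items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Pr(a): relative observed agreement among the raters
    pr_a = sum(x == y for x, y in zip(rater_a, rater_b)) / n

    # Pr(e): hypothetical chance agreement, from each rater's
    # marginal category frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    pr_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    return (pr_a - pr_e) / (1 - pr_e)

# Example: two raters classify 10 items as "yes"/"no".
# Pr(a) = 0.8, Pr(e) = 0.52, so kappa = (0.8 - 0.52) / (1 - 0.52) ~ 0.58
a = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(cohens_kappa(a, b))
```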

The seminal paper introducing kappa as a new technique was published by Jacob Cohen in the journal Educational and Psychological Measurement in 1960.

Note that Cohen's kappa measures agreement between two raters only. For a similar measure of agreement (Fleiss' kappa) used when there are more than two raters, see Fleiss (1981).

Significance
Landis and Koch gave the following table for interpreting &kappa; values. This table is, however, by no means universally accepted; Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful, because the magnitude of kappa depends on the number of categories and subjects; in particular, kappa tends to be higher when there are fewer categories.