Test sensitivity

Test sensitivity is the ability of a test to detect individuals with a disease. Sensitivity is an a posteriori analysis: it compares the results of a test against a gold standard. It is important to note that sensitivity is meant to ascertain test performance, not to diagnose a disease. When a test is used to make a diagnosis, a test with high sensitivity serves to rule out disease. Since most tests performed are negative, a negative result from a highly sensitive test provides more information than a negative result from a test with low sensitivity and/or high specificity. On its own, however, sensitivity has little practical value for determining the presence of disease; for that purpose, the positive predictive value of a test is more useful. The formula for sensitivity can be found elsewhere (sensitivity (tests)).
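As a concrete illustration, both sensitivity and the positive predictive value can be computed from a 2x2 comparison of test results against a gold standard. The counts below are made up for demonstration, not real clinical data:

```python
def sensitivity(tp, fn):
    """Proportion of diseased individuals the test detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Proportion of positive results that are true positives: TP / (TP + FP)."""
    return tp / (tp + fp)

# Hypothetical comparison of a test against a gold standard
tp, fn = 90, 10   # diseased individuals: detected vs. missed
fp, tn = 40, 860  # healthy individuals: falsely positive vs. correctly negative

print(f"Sensitivity: {sensitivity(tp, fn):.2f}")               # 0.90
print(f"PPV:         {positive_predictive_value(tp, fp):.2f}")  # 0.69
```

Note how a test that detects 90% of diseased individuals can still have a much lower positive predictive value when false positives are common.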

Measures of Test Performance
Besides sensitivity and specificity, there are other ways to define test performance, e.g.:
 * Accuracy
 * Precision
 * Kappa coefficient
 * Diagnostic Odds Ratio
 * Error Odds Ratio
 * Youden's J Statistic
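Several of the coefficients listed above can be computed directly from a 2x2 confusion table. The sketch below uses made-up counts and covers accuracy, precision, the diagnostic odds ratio, and Youden's J; the Kappa coefficient and error odds ratio are omitted for brevity:

```python
# Illustrative 2x2 confusion table (hypothetical counts)
tp, fn, fp, tn = 90, 10, 40, 860
total = tp + fn + fp + tn

sens = tp / (tp + fn)                  # sensitivity
spec = tn / (tn + fp)                  # specificity
accuracy = (tp + tn) / total           # fraction of all results that are correct
precision = tp / (tp + fp)             # equivalent to the positive predictive value
diagnostic_or = (tp * tn) / (fp * fn)  # diagnostic odds ratio
youden_j = sens + spec - 1             # Youden's J statistic

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"DOR={diagnostic_or:.1f} J={youden_j:.3f}")
```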

Measures of test performance must be distinguished from the diagnostic value of a test, but these coefficients have to be taken into account when tests are used in medical decision making.

Test sensitivity, Diagnosis, and Treatment
Tests with high sensitivity tend to produce many false positives; while this may be acceptable for a screening test, it is not useful when a definitive diagnosis is needed. Making a definitive diagnosis usually implies establishing a treatment, and some treatments carry risks of their own. Most tests have high specificity while few are very sensitive; in practice, tests with high sensitivity are more useful than tests with high specificity.

Test sensitivity and test specificity
A perfect test would have 100% sensitivity. In practice this never happens, because sensitivity changes as specificity changes, so the chosen sensitivity is a trade-off against an acceptable specificity. If a test is made 100% sensitive, its specificity will most likely be close to zero. The relationship between sensitivity and specificity is well illustrated by a plot called the receiver operating characteristic (ROC) curve. In this plot, sensitivity is plotted against 1 - specificity, which is the false positive rate; the point on the curve that comes closest to the 100% sensitivity mark with the fewest false positives gives the best cut-off between sensitivity and specificity.
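The construction of a ROC curve can be sketched by sweeping a decision threshold over a continuous test result and computing sensitivity and the false positive rate at each cut-off. The scores and disease labels below are synthetic assumptions; the best cut-off is chosen here by maximizing Youden's J (sensitivity minus false positive rate), one common way of finding the point nearest the top-left corner:

```python
# Synthetic test scores and gold-standard labels (1 = diseased, 0 = healthy)
scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95]
labels = [0,   0,   0,    1,   0,    1,   1,   1,   1,   1]

def roc_point(threshold):
    """Sensitivity and false positive rate when scores >= threshold count as positive."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    sens = tp / (tp + fn)
    fpr = fp / (fp + tn)  # 1 - specificity
    return sens, fpr

# Pick the cut-off that maximizes Youden's J = sensitivity - false positive rate
best = max(scores, key=lambda t: roc_point(t)[0] - roc_point(t)[1])
print("best threshold:", best)  # 0.6
```

Lowering the threshold raises sensitivity but also raises the false positive rate, which is exactly the trade-off the ROC curve makes visible.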