Checking if a coin is fair

In statistics, a fair coin is an idealized randomizing device with two states (usually named "heads" and "tails") which are equally likely to occur. It is based on the ubiquitous coin flip used in sports and other situations where it is necessary to give two parties the same chance of winning. Depending on the occasion, a specially designed chip or a simple currency coin is used, which due to unequal weight distribution might be "unfair": one state might occur more frequently than the other, giving one party an unfair advantage. So it might be necessary to experimentally determine whether the coin is in fact "fair" – that is, whether the probability of the coin falling on either side in a toss is approximately 50%.

It is of course impossible to ever definitively rule out arbitrarily small deviations from fairness, such as might be expected to affect only one flip in a lifetime of flipping, and it is always possible for an unfair (or "biased") coin to happen to turn up exactly 10 heads in 20 flips. As such, any fairness test can only establish a certain degree of confidence in a certain degree of fairness (a certain maximum bias). In more rigorous terminology, the problem is that of determining the parameters of a Bernoulli process, given only a limited sample of Bernoulli trials.

Preamble
This article describes experimental procedures for determining if a coin is fair. There are many statistical methods for analyzing such an experimental procedure. This article illustrates two of them.

Both methods prescribe an experiment (or trial) in which the coin is tossed many times and the result of each and every toss is recorded. A statistical analysis of the results can then be performed to decide if the coin is "fair" or "probably not fair".


 * Posterior probability density function. This method assumes that the number of tosses is fixed and not under the experimenter's direct control. A prior distribution for the coin's true probability of landing on a particular side is assumed known. The probability that this particular coin is a "fair coin" can then be obtained by integrating the posterior probability density function over the relevant interval.


 * Estimator of true probability. This method assumes that the experimenter can decide and implement any number of coin tosses for this particular coin. The experimenter decides on the level of confidence required and the tolerable margin of error. These considerations determine the minimum number of tosses that must be performed to complete the experiment.

Posterior probability density function
One way of verifying this is to calculate the posterior probability density function using Bayesian probability theory.

A test is performed by tossing the coin n times and noting the number of heads h and tails t:


 * H = h (Total number of heads is h)
 * T = t (Total number of tails is t)
 * N = n = h + t (Total number of tosses is n)

Next, let r be the actual probability of obtaining heads in a single toss of the coin. This is the quantity we wish to estimate. Using Bayes' theorem, the posterior probability density of r conditional on H and T is expressed as follows:


 * $$ f(r | H=h, T=t) = \frac {\Pr(H=h | r, N=h+t) \, f(r)} {\int_0^1 \Pr(H=h | r, N=h+t) \, f(r) \, dr}. \!$$

The prior summarizes what is known about the distribution of r in the absence of any observation. For simplicity we will assume that the prior distribution of r is uniform over the interval [0, 1], that is, f(r) = 1, although a prior distribution that reflects our experience with real coins would arguably be more realistic.

The probability of obtaining h heads in n tosses of a coin with a probability of heads equal to r is given by a binomial distribution:


 * $$ \Pr(H=h | r, N=h+t) = {h+t \choose h} \, r^h \, (1-r)^t. \!$$
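As a quick check, the binomial likelihood above can be evaluated directly. The following is a minimal sketch using only the Python standard library (the function name `likelihood` is ours, not from the article):

```python
from math import comb

def likelihood(h, t, r):
    """Pr(H=h | r, N=h+t): probability of h heads and t tails
    when each toss lands heads with probability r."""
    return comb(h + t, h) * r**h * (1 - r)**t
```

For a fair coin (r = 0.5), seven heads in ten tosses has likelihood C(10,7)/2^10 = 120/1024 ≈ 0.117.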

Putting it all together, with the uniform prior f(r) = 1:



 * $$ f(r | H=h, T=t) = \frac{{h+t \choose h}\,r^h\,(1-r)^t} {\int_0^1 {h+t \choose h}\,r^h\,(1-r)^t\,dr} = \frac{r^h\,(1-r)^t}{\int_0^1 r^h\,(1-r)^t\,dr}. \!$$

This is in fact a beta distribution (the conjugate prior for the binomial distribution), whose denominator can be expressed in terms of the beta function:


 * $$f(r | H=h, T=t) = \frac{1}{\mathrm{B}(h+1,t+1)} \; r^h\,(1-r)^t. \!$$

If a uniform prior is assumed, and because h and t are integers, this can also be written in terms of factorials:


 * $$f(r | H=h, T=t) = \frac{(h+t+1)!}{h!\,\,t!} \; r^h\,(1-r)^t. \!$$
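With a uniform prior, the closed form above is easy to compute. The following sketch (the function name is our own) evaluates the posterior density via the factorial formula:

```python
from math import factorial

def posterior_pdf(r, h, t):
    """Posterior density of the heads-probability r under a uniform prior,
    given h heads and t tails: (h+t+1)!/(h! t!) * r^h * (1-r)^t."""
    coeff = factorial(h + t + 1) // (factorial(h) * factorial(t))
    return coeff * r**h * (1 - r)**t
```

For h = 7 and t = 3 the coefficient is 11!/(7! 3!) = 1320, matching the example below.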

Example
For example, let n=10, h=7, i.e. the coin is tossed 10 times and 7 heads are obtained:


 * $$ f(r | H=7, T=3) = \frac{(7+3+1)!}{7!\,\,3!} \; r^7 \, (1-r)^3 = 1320 \, r^7 \, (1-r)^3 \!$$

The graph on the right shows the probability density function of r given that 7 heads were obtained in 10 tosses. (Note: r is the probability of obtaining heads when tossing the same coin once.)



The probability for an unbiased coin


 * $$ \Pr(0.45 < r < 0.55) = \int_{0.45}^{0.55} f(r | H=7, T=3) \,dr \approx 13\% \!$$

is small when compared with the alternative hypothesis (a biased coin). However, it is not small enough to cause us to believe that the coin has a significant bias. Indeed, using a prior distribution that reflects our prior knowledge of what a coin is and how it behaves, the posterior distribution would not favor the hypothesis of bias.
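The ≈13% figure can be reproduced by numerically integrating the posterior density. A minimal sketch, using the midpoint rule (the helper names are ours):

```python
from math import factorial

def posterior_pdf(r, h=7, t=3):
    # Posterior density under a uniform prior: 1320 r^7 (1-r)^3 for h=7, t=3
    coeff = factorial(h + t + 1) // (factorial(h) * factorial(t))
    return coeff * r**h * (1 - r)**t

def integrate(f, a, b, steps=100_000):
    # Midpoint-rule numerical integration of f over [a, b]
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

prob = integrate(posterior_pdf, 0.45, 0.55)  # ≈ 0.13
```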

Estimator of true probability
To determine the number of times a coin should be tossed, two vital pieces of information are required:


 * 1) The desired confidence level, expressed as a Z-value (Z)
 * 2) The maximum (acceptable) error (E)


 * The confidence level is represented by the Z-value of a standard normal distribution. This value can be read off a standard score table for the normal distribution.


 * The maximum error (E) is defined by $$|p - p_{\mathrm{actual}}| < E $$ where $$p\,\!$$ is the estimated probability of obtaining heads. Note: $$p_{\mathrm{actual}}\,\!$$ is the same actual probability (for obtaining heads) as the term $$r\,\!$$ of the previous section in this article.


 * In statistics, the estimate of the proportion of a sample (denoted by p) has a standard error (standard deviation of the estimate) given by:


 * $$s_p = \sqrt{ \frac {p \, (1-p) } {n} }$$

This standard error $$s_p$$ will have a maximum theoretical value if $$p = (1-p) = 0.5$$.

Hence, assuming the worst case, $$p$$ is set to 0.5 to get the maximum possible value of $$s_p$$:


 * $$s_p \le \frac{0.5}{\sqrt{n}} = \frac{1}{2 \, \sqrt{n}}$$

Scaling this worst-case standard error by Z gives the maximum error (E), and solving for n yields the final formula for the number of coin tosses for the estimator $$p\,\!$$:


 * $$E = \frac {Z}{2 \, \sqrt{n}} \quad \quad \mbox{or} \quad \quad n = \frac {Z^2} {4 \, E^2} \!$$

provided that $$n \cdot p \ge 5 $$ and $$n \cdot q \ge 5 $$, where $$q = (1-p)\, $$, so that the normal approximation to the binomial distribution is valid.
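The two formulas translate directly into code. A minimal sketch (the function names are our own):

```python
from math import ceil, sqrt

def tosses_needed(Z, E):
    """Minimum number of tosses n = Z^2 / (4 E^2), assuming worst case p = 0.5."""
    return ceil(Z**2 / (4 * E**2))

def max_error(Z, n):
    """Worst-case maximum error E = Z / (2 sqrt(n)) after n tosses."""
    return Z / (2 * sqrt(n))
```

For instance, `tosses_needed(2, 0.01)` gives the 10000 tosses of the example below, and `max_error(2, 10000)` returns 0.01.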

Example
1. If a maximum error of 0.01 is desired, how many times should the coin be tossed?


 * $$n = \frac {Z^2} {4 \, E^2} = \frac {Z^2} {4 \times 0.01^2} = 2500 \ Z^2$$


 * $$n = 2500\, $$ at 68.27% level of confidence (Z=1)
 * $$n = 10000\, $$ at 95.45% level of confidence (Z=2)
 * $$n = 27225\, $$ at 99.90% level of confidence (Z=3.3)

2. If the coin is tossed 10000 times, what is the maximum error of the estimator $$p\,\!$$ on the value of $$r\,\!$$ (the actual probability of obtaining heads in a coin toss)?


 * $$E = \frac {Z}{ 2 \, \sqrt{n} }$$
 * $$E = \frac {Z}{ 2 \, \sqrt{ 10000 } } = \frac {Z}{ 200 } $$
 * $$E = 0.0050\, $$ at 68.27% level of confidence (Z=1)
 * $$E = 0.0100\, $$ at 95.45% level of confidence (Z=2)
 * $$E = 0.0165\, $$ at 99.90% level of confidence (Z=3.3)

3. The coin is tossed 12000 times with a result of 5961 heads (and 6039 tails). What interval does the value of $$r\,\!$$ (the true probability of obtaining heads) lie within if a confidence level of 99.999% is desired?


 * $$p = \frac{h}{h+t} \, = \frac{5961}{12000} \, = 0.4968 $$

Now find the value of Z corresponding to the 99.999% level of confidence:

 * $$Z = 4.4172 \,\! $$

Now calculate E:


 * $$ E = \frac{Z}{2 \, \sqrt{n}} \, = \frac{4.4172}{2 \, \sqrt{12000}} \, = 0.0202 $$

The interval which contains r is thus:


 * $$ p - E < r < p + E \,\! $$


 * $$ 0.4766 < r < 0.5169 \,\!$$

Hence, 99.999% of the time, the interval above would contain $$r\,\!$$, the true probability of obtaining heads in a single toss.
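This third example can be verified end to end. The following sketch plugs in the numbers from the text (the Z-value 4.4172 is taken as given):

```python
from math import sqrt

h, n = 5961, 12000        # observed heads and total tosses
Z = 4.4172                # Z-value for 99.999% confidence (from the text)

p = h / n                 # estimated probability of heads
E = Z / (2 * sqrt(n))     # worst-case maximum error
lower, upper = p - E, p + E
print(f"{lower:.4f} < r < {upper:.4f}")  # prints "0.4766 < r < 0.5169"
```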

Other applications
The above mathematical analysis for determining if a coin is fair can also be applied to other uses. For example:


 * Determining the defect rate of a product when subjected to a particular (but well-defined) condition. Sometimes a product can be very difficult or expensive to produce; furthermore, if testing such products results in their destruction, only a minimum number of products should be tested. Using the same analysis, the probability density function of the product defect rate can be found.


 * Two-party polling. If a small random sample poll is taken where there are only two mutually exclusive choices, then this is equivalent to tossing a single biased coin multiple times. The same analysis can therefore be applied to determine the actual voting ratio.


 * Finding the proportion of females in an animal group, i.e. determining the gender ratio in a large group of an animal species. Provided that the random sample taken is very small relative to the population, the analysis is similar to determining the probability of obtaining heads in a coin toss.