Substitution model

A substitution model describes the process by which a sequence of characters of fixed length over some alphabet changes into another sequence. For example, in cladistics, each position in the sequence might correspond to a property of a species that can be either present or absent. The alphabet could then consist of "0" for absence and "1" for presence. The sequence 00110 could then mean, for example, that a species does not have feathers, does not lay eggs, has fur, is warm-blooded, and cannot breathe underwater, while the sequence 11010 would mean that a species has feathers, lays eggs, does not have fur, is warm-blooded, and cannot breathe underwater. In phylogenetics, sequences are often obtained by first constructing a nucleotide or protein sequence alignment and then taking the bases or amino acids at corresponding positions in the alignment as the characters. Sequences obtained in this way might look like AGCGGAGCTTA and GCCGTAGACGC.

Substitution models are used for a number of things:
 * 1) Constructing evolutionary trees in phylogenetics or cladistics.
 * 2) Simulating sequences to test other methods and algorithms.

Neutral, independent, finite sites models
Most substitution models used to date are neutral, independent, finite sites models.
 * Neutral : Selection does not operate on the substitutions, and so they are unconstrained.
 * Independent : Changes in one site do not affect the probability of changes in another site.
 * Finite Sites : There are finitely many sites, and so over evolution, a single site can be changed multiple times. This means that, for example, if a character has value 0 at time 0 and at time t, it could be that no changes occurred, or that it changed to a 1 and back to a 0, or that it changed to a 1 and back to a 0 and then to a 1 and then back to a 0, and so on.
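The multiple-hit effect described above can be illustrated with a small simulation. The following sketch (Python is used here purely for illustration; the binary alphabet and the rate are arbitrary assumptions, not from the text) flips a single 0/1 site repeatedly and shows that comparing only the endpoints hides any even number of changes:

```python
import random

def simulate_site(state, rate, t, dt=0.001, seed=0):
    """Simulate one binary (0/1) character site for total time t.

    In each small time step the site substitutes (flips) with
    probability rate * dt, so a single site can be hit several
    times over the course of evolution.
    """
    rng = random.Random(seed)
    flips = 0
    for _ in range(int(t / dt)):
        if rng.random() < rate * dt:
            state = 1 - state
            flips += 1
    return state, flips

final, flips = simulate_site(state=0, rate=2.0, t=5.0)
# Starting from 0, the site ends at 0 exactly when the number of
# substitutions that hit it was even -- multiple changes can be
# invisible when only the endpoint states are compared.
```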

The molecular clock and the units of time
Different substitution models deal with time differently.
 * It is very common to measure time in substitutions. For example, if one was going to construct a phylogenetic tree while employing a substitution model, one could just measure the distance along the branches of the trees in substitutions. This is convenient because it avoids any question of whether the rate of substitution with respect to the unit of time has changed or not (because by definition the number of substitutions per substitution is one), and it does not need any information about timescales that could be called into question.
 * The molecular clock assumption is also very common, namely that the rate of substitutions with respect to time is constant. This differs from measuring time in substitutions only by a multiplicative factor (usually called $$\mu$$, the number of substitutions per unit time). To carry out this type of analysis, $$\mu$$ must be estimated first (which requires knowing at least one branch length ahead of time, often a difficult task, and one whose result can easily be disputed).
 * The assumption of a molecular clock is often unrealistic, especially across long periods of evolution. For example, even though rodents are genetically very similar to primates, they have undergone a much higher number of substitutions in the estimated time since divergence in some regions of the genome. This could be due to their shorter generation time, higher metabolic rate, increased population structuring, increased rate of speciation, or smaller body size. When studying events like the Cambrian explosion under a molecular clock assumption, poor concurrence between cladistic and phylogenetic data is often observed. There has been some work on models allowing a variable rate of evolution.
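The conversion between the two timescales above is simple arithmetic: a branch length in substitutions per site divided by $$\mu$$ gives a duration in $$\mu$$'s time units. A sketch with assumed, purely illustrative numbers (neither value comes from the text):

```python
# Hypothetical calibration: suppose mu = 1e-9 substitutions per
# site per year (an assumed value, for illustration only).
mu = 1e-9             # substitutions / site / year (assumed)
branch_length = 0.02  # substitutions / site

# Under a strict molecular clock, this branch spans
# branch_length / mu years.
years = branch_length / mu  # 2e7, i.e. about 20 million years
```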

Time-reversible models
Most useful substitution models are time-reversible: the substitution process looks statistically the same whether it is run forwards or backwards in time. One consequence is that the relative frequencies of each character do not change over time.

Under a time-reversible model, there is no assumption that substitutions preferentially move in certain directions over time. For example, the path A -> C -> G is as likely as the reverse path G -> C -> A.

This matters because when an analysis of real biological data is performed, there is generally no access to the sequences of ancestral species, only to those of the species present today. When a model is time-reversible, however, it does not matter which species was the ancestor: the phylogenetic tree can be rooted along the branch leading to any arbitrary extant species, re-rooted later based on new knowledge, or left unrooted.

A time-reversible model satisfies the detailed balance property $$ \pi_i Q_{ij} = \pi_j Q_{ji} $$ for all states i and j, for example $$ \pi_1 Q_{12} = \pi_2 Q_{21} $$.

The mathematics of substitution models
Neutral, independent, finite sites models (assuming a constant rate of evolution) have two parameters: $$\Pi$$, a vector of base (or character) frequencies at time zero (for a time-reversible model, this vector is usually referred to as the equilibrium base frequencies, and applies at all times), and the rate matrix Q, which describes the rate at which bases of one type change into bases of another type. The entry $$Q_{ij}$$ for $$i \ne j$$ is the rate at which base i changes to base j. For convenience, the diagonal entries of Q are chosen so that the rows sum to zero:

$$ Q_{ii} = - {\sum_{j \ne i} Q_{ij}} $$
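The row-sum convention is easy to check numerically. A minimal sketch (Python is used for illustration; the off-diagonal rates below are arbitrary, not from the text):

```python
import numpy as np

def with_zero_row_sums(rates):
    """Fill the diagonal of a rate matrix so that each row sums to
    zero, i.e. Q_ii = -sum over j != i of Q_ij."""
    Q = np.array(rates, dtype=float)
    np.fill_diagonal(Q, 0.0)           # ignore whatever was on the diagonal
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Arbitrary illustrative off-diagonal rates for a 4-letter alphabet.
Q = with_zero_row_sums([
    [0, 1, 2, 3],
    [1, 0, 4, 5],
    [2, 4, 0, 6],
    [3, 5, 6, 0],
])
# Every row of Q now sums to zero, e.g. Q[0, 0] == -(1 + 2 + 3).
```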

The transition matrix function maps a branch length t (in some unit of time, possibly in substitutions) to a matrix of conditional probabilities, denoted $$P(t)$$. The entry in the i-th row and j-th column, $$P_{ij}(t)$$, is the probability, after time t, that there is base j at a given position, conditional on there being an i at that position at time 0. When the model is time-reversible, this can be applied between any two sequences, even if one is not the ancestor of the other, provided the total branch length between them is known.

The asymptotic properties of $$P_{ij}(t)$$ are such that $$\lim_{t \rightarrow 0} P_{ij}(t) = \delta_{ij}$$, where $$\delta_{ij}$$ is 1 if $$i = j$$ and 0 otherwise, i.e. over zero time there is no change between a sequence and itself, and $$\lim_{t \rightarrow \infty} P_{ij}(t) = \pi_{j}$$, or in other words, as time goes to infinity, the probability of finding base j at a position, given there was an i at that position originally, goes to the equilibrium probability of base j at that position, regardless of the original base.

The transition matrix can be computed from the rate matrix by $$P(t) = e^{Qt}$$. Since Q is a matrix, this is a matrix exponential, which can be evaluated numerically, for example through the truncated Taylor series expansion $$P(t) = \sum_{n=0}^{\infty}{Q^n {{t^n} \over {n!}}}$$.
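The truncated Taylor series can be implemented in a few lines. The sketch below (Python, for illustration; a production implementation would typically use a dedicated routine such as `scipy.linalg.expm`) uses a Jukes-Cantor-style rate matrix, an assumption made here because its equilibrium frequencies are uniform:

```python
import numpy as np

def transition_matrix(Q, t, terms=40):
    """Approximate P(t) = exp(Q t) by the truncated Taylor series
    sum over n of Q^n t^n / n!  (adequate for moderate Q t)."""
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for n in range(1, terms):
        term = term @ Q * (t / n)   # next Taylor term, built incrementally
        P = P + term
    return P

# Jukes-Cantor-style Q: every base changes into each other base at
# the same rate, scaled so the total rate away from each base is 1.
Q = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q, -1.0)

P = transition_matrix(Q, t=0.5)
# Each row of P(t) is a probability distribution over the four
# bases, so the rows sum to one, and P(0) is the identity matrix.
```

As t grows, every row of P(t) approaches the equilibrium frequencies (here uniform, 0.25 each), matching the asymptotic behaviour described above.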

The time-reversibility (or, more precisely, stationarity) constraint is $$\Pi Q = 0$$ (because the rows were defined to sum to zero, and the overall base frequencies must not systematically change from $$\Pi$$). This is equivalent to saying $$ \Pi P(t) = \Pi $$ for all t.

GTR: Generalised time reversible
GTR is the most general neutral, independent, finite-sites, time-reversible model possible. It was first described in a general form by Simon Tavaré in 1986.

The GTR parameters consist of an equilibrium base frequency vector, $$\Pi = (\pi_1, \pi_2, \pi_3, \pi_4)$$, giving the frequency at which each base occurs at each site, and the rate matrix


 * $$Q = \begin{pmatrix} {-(x_1+x_2+x_3)} & x_1 & x_2 & x_3 \\ {\pi_1 x_1 \over \pi_2} & {-({\pi_1 x_1 \over \pi_2} + x_4 + x_5)} & x_4 & x_5 \\ {\pi_1 x_2 \over \pi_3} & {\pi_2 x_4 \over \pi_3} & {-({\pi_1 x_2 \over \pi_3} + {\pi_2 x_4 \over \pi_3} + x_6)} & x_6 \\ {\pi_1 x_3 \over \pi_4} & {\pi_2 x_5 \over \pi_4} & {\pi_3 x_6 \over \pi_4} & {-({\pi_1 x_3 \over \pi_4} + {\pi_2 x_5 \over \pi_4} + {\pi_3 x_6 \over \pi_4})} \end{pmatrix} $$
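The matrix above can be constructed programmatically and its defining properties checked. A sketch (Python, for illustration; the frequencies and rates are arbitrary example values): the upper triangle holds $$x_1 \ldots x_6$$ and the lower triangle is chosen so that detailed balance, $$\pi_i Q_{ij} = \pi_j Q_{ji}$$, holds.

```python
import numpy as np

def gtr_rate_matrix(pi, x):
    """Build the 4x4 GTR rate matrix from equilibrium frequencies
    pi = (pi_1, ..., pi_4) and exchange rates x = (x_1, ..., x_6),
    following the parameterisation in the text."""
    x1, x2, x3, x4, x5, x6 = x
    p1, p2, p3, p4 = pi
    Q = np.array([
        [0.0,           x1,            x2,            x3],
        [p1 * x1 / p2,  0.0,           x4,            x5],
        [p1 * x2 / p3,  p2 * x4 / p3,  0.0,           x6],
        [p1 * x3 / p4,  p2 * x5 / p4,  p3 * x6 / p4,  0.0],
    ])
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows sum to zero
    return Q

pi = np.array([0.1, 0.2, 0.3, 0.4])  # illustrative frequencies
Q = gtr_rate_matrix(pi, (1.0, 2.0, 3.0, 4.0, 5.0, 6.0))
# diag(pi) @ Q is symmetric (detailed balance), and pi @ Q == 0,
# i.e. the equilibrium frequencies are stationary under Q.
```

The stationarity check $$\Pi Q = 0$$ follows from detailed balance combined with the zero row sums, which is why time-reversible models automatically preserve the equilibrium frequencies.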

Therefore, GTR (for four characters, as is often the case in phylogenetics) requires 6 substitution rate parameters, as well as 4 equilibrium base frequency parameters. However, this is usually reduced to 9 parameters plus $$\mu$$, the overall number of substitutions per unit time. When measuring time in substitutions ($$\mu = 1$$), only 9 free parameters remain.

In general, to compute the number of parameters, count the number of entries above the diagonal of the matrix, i.e. for n trait values per site $${{n^2-n} \over 2} $$, then add n for the equilibrium base frequencies, and subtract 1 because $$\mu$$ is fixed. This gives


 * $${{n^2-n} \over 2} + n - 1 = {1 \over 2}n^2 + {1 \over 2}n - 1.$$

For example, for an amino acid sequence (there are 20 "standard" amino acids that make up proteins), you would find there are 209 parameters. However, when studying coding regions of the genome, it is more common to work with a codon substitution model (a codon is three bases and codes for one amino acid in a protein). There are $$4^3 = 64$$ codons, resulting in 2079 free parameters, but when the rates for transitions between codons which differ by more than one base are assumed to be zero, then there are only $${{20 \times 19 \times 3} \over 2} + 64 - 1 = 633$$ parameters.
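The parameter counts quoted above follow directly from the formula; a quick check (Python, for illustration):

```python
def gtr_free_parameters(n):
    """Free parameters of a GTR-style model on an n-letter alphabet
    with mu fixed: n(n-1)/2 exchange rates above the diagonal, plus
    n equilibrium frequencies, minus 1."""
    return n * (n - 1) // 2 + n - 1

# The counts given in the text:
assert gtr_free_parameters(4) == 9       # nucleotides
assert gtr_free_parameters(20) == 209    # amino acids
assert gtr_free_parameters(64) == 2079   # codons
```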

Parametric vs. empirical models
A main difference between evolutionary models is how many parameters are estimated anew for each data set under consideration and how many are estimated once on a large data set.

Parametric models describe all substitutions as a function of a number of parameters which are estimated for every data set analyzed, preferably using maximum likelihood. This has the advantage that the model can be adjusted to the particularities of a specific data set (e.g. different composition biases in DNA). Problems can arise when too many parameters are used, particularly if they can compensate for each other: it is then often the case that the data set is too small to yield enough information to estimate all parameters accurately.

Empirical models are created by estimating many parameters (typically all entries of the rate matrix and the character frequencies, see the GTR model above) from a large data set. These parameters are then fixed and reused for every data set. This has the advantage that those parameters can be estimated more accurately; normally, it is not possible to estimate all entries of the substitution matrix from the current data set alone. On the downside, the estimated parameters might be too generic and not fit a particular data set well enough.

With large-scale genome sequencing still producing very large amounts of DNA and protein sequences, there is enough data available to create empirical models with any number of parameters. Because of the problems mentioned above, the two approaches are often combined by estimating most of the parameters once on large-scale data, while a few remaining parameters are then adjusted to the data set under consideration. The following sections give an overview of the different approaches taken for DNA, protein or codon-based models.

Models of DNA substitution
See main article: Models of DNA evolution for more formal descriptions of the DNA models.

Models of DNA evolution were first proposed in 1969 by Jukes and Cantor, assuming equal transition rates as well as equal equilibrium frequencies for all bases. In 1980, Kimura introduced a model with two parameters, one for the transition rate and one for the transversion rate, and in 1981, Felsenstein proposed a model in which the substitution rate corresponds to the equilibrium frequency of the target nucleotide. Hasegawa, Kishino and Yano (HKY) unified the last two models into a six-parameter model. In the 1990s, models similar to HKY were developed and refined by several researchers.

For DNA substitution models, mainly parametric models (as described above) are employed. The small number of parameters to estimate makes this feasible; moreover, DNA is often highly optimized for specific purposes (e.g. fast expression or stability) depending on the organism and the type of gene, making it necessary to adjust the model to these circumstances.

Models of amino acid substitutions
For many analyses, particularly for longer evolutionary distances, evolution is modeled at the amino acid level. Since not every DNA substitution also alters the encoded amino acid, information is lost when looking at amino acids instead of nucleotide bases. However, several advantages speak in favor of using amino acid information: DNA is much more inclined to show compositional bias than amino acids, and not all positions in the DNA evolve at the same speed (synonymous mutations are more likely to become fixed in the population than non-synonymous ones). Probably most important, because of those fast-evolving positions and the limited alphabet size (only four possible states), DNA suffers much more from back substitutions, making it difficult to accurately estimate longer distances.

Unlike the DNA models, amino acid models have traditionally been empirical models. They were pioneered in the 1970s by Dayhoff and co-workers, who estimated replacement rates from protein alignments with at least 85% identity; this minimized the chances of observing multiple substitutions at a site. From the estimated rate matrix, a series of replacement probability matrices were derived, known under names such as PAM250. The Dayhoff model was used to assess the significance of homology search results, but also for phylogenetic analyses. The Dayhoff PAM matrices were based on relatively few alignments (since no more were available at that time), but in the 1990s, new matrices were estimated using almost the same methodology, based on the large protein databases available by then (Gonnet et al., 1990 and Jones et al., 1992, the latter being known as the "JTT" matrices).