NeuroEvolution of Augmenting Topologies

NeuroEvolution of Augmenting Topologies (NEAT) is a neuroevolution technique (a genetic algorithm for evolving artificial neural networks) developed by Ken Stanley while at The University of Texas at Austin. It notably evolves both network weights and structure, seeking a balance between the fitness and the diversity of evolved solutions. It is based on three key ideas: tracking genes with historical markings to allow crossover between different topologies, protecting innovation via speciation, and building topology incrementally from an initial, minimal structure ("complexifying").
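The first of these ideas, historical markings, can be sketched as follows. This is an illustrative simplification, not the reference implementation: each connection gene carries a global innovation number assigned when that gene first appears anywhere in the population, so two genomes with different topologies can still be aligned gene-by-gene during crossover. The names (`ConnectionGene`, `align_genes`) are assumptions for this sketch.

```python
from dataclasses import dataclass

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool
    innovation: int  # historical marking: assigned when the gene first appeared

def align_genes(parent_a, parent_b):
    """Pair up genes that share an innovation number ("matching" genes).
    In NEAT, matching genes are inherited randomly from either parent,
    while the remaining (disjoint/excess) genes come from the fitter parent."""
    by_innovation = {g.innovation: g for g in parent_b}
    return [(g, by_innovation[g.innovation]) for g in parent_a
            if g.innovation in by_innovation]

# Two genomes with different structure still align on innovation number 1.
a = [ConnectionGene(0, 2, 0.5, True, 1), ConnectionGene(1, 2, -0.3, True, 2)]
b = [ConnectionGene(0, 2, 0.8, True, 1), ConnectionGene(0, 3, 0.1, True, 4)]
print(len(align_genes(a, b)))  # → 1
```

The same innovation numbers also drive speciation: the count of disjoint and excess genes between two genomes serves as a compatibility distance for grouping them into species.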

Performance
On simple control tasks, the NEAT algorithm often converges to effective networks more quickly than a variety of other contemporary neuroevolution techniques and reinforcement learning methods.

Complexification
Conventionally, neural network topology is chosen by a human experimenter, and a genetic algorithm is used to select effective connection weights. The topology of such a network stays constant throughout this weight selection process.

The NEAT approach begins with a perceptron-like feed-forward network of only input and output neurons. As evolution progresses, the topology of the network may be augmented by either adding a neuron along an existing connection, or by adding a new connection between previously unconnected neurons.
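The "add a neuron along an existing connection" mutation can be sketched as below. This is a minimal illustration with assumed names and genome layout, not NEAT's reference API: the old connection is disabled rather than deleted, the new incoming connection gets weight 1.0, and the new outgoing connection inherits the old weight, so the network's behavior is initially unchanged.

```python
import itertools
import random

def add_node(genome, innovation_counter):
    """Split a random enabled connection (in -> out) into in -> new -> out.
    genome: {"nodes": [ids], "connections": [gene dicts]} (assumed layout).
    innovation_counter: iterator yielding fresh global innovation numbers."""
    conn = random.choice([g for g in genome["connections"] if g["enabled"]])
    conn["enabled"] = False                      # keep the old gene, disabled
    new_id = max(genome["nodes"]) + 1
    genome["nodes"].append(new_id)
    genome["connections"].append({"in": conn["in"], "out": new_id,
                                  "weight": 1.0, "enabled": True,
                                  "innovation": next(innovation_counter)})
    genome["connections"].append({"in": new_id, "out": conn["out"],
                                  "weight": conn["weight"], "enabled": True,
                                  "innovation": next(innovation_counter)})

# A minimal genome: one input, one output, one connection between them.
genome = {"nodes": [0, 1],
          "connections": [{"in": 0, "out": 1, "weight": 0.7,
                           "enabled": True, "innovation": 1}]}
add_node(genome, itertools.count(2))
```

The "add connection" mutation is analogous: pick two unconnected neurons, add a gene between them with a random weight, and stamp it with the next innovation number.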

Implementation
The original implementation by Ken Stanley is published under the GPL. It integrates with Guile, the GNU Scheme interpreter. This implementation of NEAT is considered the reference for implementations of the NEAT algorithm.

rtNEAT
In 2003, Stanley devised an extension to NEAT that allows evolution to occur in real time rather than through the discrete generations used by most genetic algorithms. The basic idea is to put the population under constant evaluation, with a "lifetime" timer on each individual in the population. When a network's timer expires, its current fitness measure is examined to see whether it falls near the bottom of the population; if so, it is discarded and replaced by a new network bred from two high-fitness parents. A timer is set for the new network, and it is placed in the population to participate in the ongoing evaluations.
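The steady-state replacement step described above can be sketched as follows. This is an assumption-laden illustration, not the rtNEAT implementation: the population layout, the bottom-quartile cutoff, and the `breed` callback are all hypothetical choices for the sketch.

```python
def tick(population, lifetime, breed):
    """One evaluation step of a steady-state (rtNEAT-style) loop.
    population: list of dicts with "fitness" and "age" (assumed layout).
    breed(p1, p2): caller-supplied crossover producing a child dict."""
    for ind in population:
        ind["age"] += 1
    expired = [ind for ind in population if ind["age"] >= lifetime]
    for ind in expired:
        ranked = sorted(population, key=lambda i: i["fitness"])
        # Replace only individuals whose fitness is near the bottom
        # (bottom quartile here -- the exact cutoff is a sketch choice).
        if ranked.index(ind) < len(population) // 4:
            population.remove(ind)
            p1, p2 = sorted(population, key=lambda i: -i["fitness"])[:2]
            child = breed(p1, p2)
            child["age"] = 0           # fresh lifetime timer for the child
            population.append(child)
        else:
            ind["age"] = 0             # survivor keeps evaluating

pop = [{"fitness": 1, "age": 4}, {"fitness": 5, "age": 0},
       {"fitness": 6, "age": 0}, {"fitness": 7, "age": 0}]
# Hypothetical breed: child fitness is the parents' mean.
tick(pop, 5, lambda a, b: {"fitness": (a["fitness"] + b["fitness"]) / 2})
```

Because only one low-fitness individual is swapped out at a time, the population changes gradually, which is what makes the approach usable while a game or simulation is running.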

The first application of rtNEAT is a video game called Neuro-Evolving Robotic Operatives, or NERO. In the first phase of the game, individual players deploy robots in a 'sandbox' and train them in a desired tactical doctrine. Once a collection of robots has been trained, a second phase of play allows players to pit their robots in a battle against robots trained by another player, to see how well their training regimens prepared their robots for battle.

Phased Pruning
An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addresses the concern that unbounded automated growth would generate unnecessary structure.
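One way to organize such periodic pruning is to alternate between a complexifying phase and a pruning phase based on how large genomes have grown. The sketch below is an illustrative simplification with assumed names and thresholds, not Colin Green's implementation.

```python
def choose_phase(mean_complexity, floor, ceiling, current_phase):
    """Decide the current search phase (hypothetical policy for this sketch).
    Switch to pruning when mean genome complexity exceeds the ceiling;
    switch back to complexifying once pruning brings it below the floor.
    During "complexify", structural additions are allowed; during "prune",
    only removals of redundant structure are applied."""
    if current_phase == "complexify" and mean_complexity > ceiling:
        return "prune"
    if current_phase == "prune" and mean_complexity < floor:
        return "complexify"
    return current_phase

print(choose_phase(12, 5, 10, "complexify"))  # → prune
print(choose_phase(4, 5, 10, "prune"))        # → complexify
```

Using two separate thresholds (a floor and a ceiling) gives the phase switch hysteresis, preventing the search from oscillating between phases every generation.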