Concept learning

Concept learning refers to a learning task in which a human or machine learner is trained to classify objects by being shown a set of example objects along with their class labels. In the machine learning literature, this task is more typically called supervised learning or supervised classification, in contrast to unsupervised learning or unsupervised classification, in which the learner is not provided with class labels. Colloquially, this task is known as learning from examples.

The Theoretical Issues
The theoretical issues underlying concept learning are those underlying induction in general. These issues are addressed in many diverse literatures, including Version Spaces, Statistical Learning Theory, PAC Learning, Information Theory, and Algorithmic Information Theory. Some of the broad theoretical ideas are also discussed by Watanabe (1969, 1985), Solomonoff (1964a, 1964b), and Rendell (1986).

Modern Psychological Theories of Concept Learning
It is difficult to make any general statements about human (or animal) concept learning without already assuming a particular psychological theory of concept learning. Although the classical views of concepts and concept learning in philosophy speak of a process of abstraction, data compression, simplification, and summarization, currently popular psychological theories of concept learning diverge on all these basic points.

Exemplar Theories of Concept Learning
Exemplar theories of concept learning are those in which the learner is hypothesized to store the provided training examples verbatim, without creating any abstraction or reduced representation (e.g., rules). In machine learning, algorithms of this type are also known as instance-based learners or lazy learners. The best known exemplar theory of concept learning is the Generalized Context Model, developed by Nosofsky. Some variations are listed below.
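As a concrete illustration, a minimal sketch of an exemplar classifier in the spirit of the Generalized Context Model: the probe is compared to every stored training example, similarity decays exponentially with distance (Shepard's law), and choice probabilities follow Luce's choice rule. The function name, parameter `c`, and toy data here are invented for illustration, not taken from Nosofsky's papers.

```python
import math

def gcm_classify(probe, exemplars, c=1.0):
    """Classify `probe` by summed similarity to stored exemplars.

    `exemplars` maps a category label to a list of stored feature
    vectors; no abstraction is formed -- every example is kept.
    """
    def similarity(x, y):
        # Exponential decay of similarity with Euclidean distance
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        return math.exp(-c * dist)

    scores = {
        label: sum(similarity(probe, ex) for ex in members)
        for label, members in exemplars.items()
    }
    total = sum(scores.values())
    # Luce's choice rule: probability proportional to summed similarity
    return {label: s / total for label, s in scores.items()}

# Toy usage: two categories in a two-dimensional feature space
train = {
    "A": [(0.0, 0.0), (0.1, 0.2)],
    "B": [(1.0, 1.0), (0.9, 1.1)],
}
probs = gcm_classify((0.2, 0.1), train)
```

Note that nothing is learned ahead of time; all computation is deferred until a probe arrives, which is why such models are called "lazy."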

Bayesian Theories of Concept Learning
Bayesian theories are those which directly apply normative probability theory to achieve optimal learning. They generally base their categorization of data on the posterior probability for each category, where for category $$i$$, this posterior is given by Bayes rule,

$$P(C_{i}|D) = \frac{P(D|C_{i})P(C_{i})}{P(D)},$$

where $$P(D|C_{i})$$ is the probability of observing the given data under the assumption that it was generated from category $$C_{i}$$, $$P(C_{i})$$ is the prior probability of category $$C_{i}$$, and $$P(D)$$ is the marginal probability of observing the data, which, being the same for every category, usually drops out of the comparison. In general, the category possessing the maximum posterior $$P(C_{i}|D)$$ is the one selected for the given data. The details of these methods can be fairly complex, as they require assumptions about the prior probabilities of the categories and about how data are generated from each category.
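The rule above can be sketched in a few lines. Given likelihoods $$P(D|C_{i})$$ and priors $$P(C_{i})$$, the unnormalized products already determine the winning category, since the marginal $$P(D)$$ is constant across categories; the helper name and the toy numbers below are invented for illustration.

```python
def bayes_classify(likelihoods, priors):
    """Pick the category with maximum posterior P(C_i | D).

    `likelihoods[c]` is P(D | C_c) and `priors[c]` is P(C_c).
    Dividing by P(D) does not change the argmax, but we normalize
    anyway so the returned posteriors sum to one.
    """
    scores = {c: likelihoods[c] * priors[c] for c in likelihoods}
    marginal = sum(scores.values())  # P(D), same for every category
    posteriors = {c: s / marginal for c, s in scores.items()}
    best = max(posteriors, key=posteriors.get)
    return best, posteriors

# Toy example: equal priors, data twice as likely under C1 as under C2
best, post = bayes_classify({"C1": 0.6, "C2": 0.3}, {"C1": 0.5, "C2": 0.5})
```

With equal priors the decision reduces to maximum likelihood; unequal priors shift the decision toward the more common category.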

The best known Bayesian theory of concept learning is ACT-R, developed by John R. Anderson. Other approaches have been offered by Tenenbaum.

Rule-Based Theories of Concept Learning
To be added...

Compression-Based Theories of Concept Learning

 * Prototype Theory
 * Minimum description length theories
 * Mixture model theories
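In contrast to exemplar models, prototype theory compresses each category to a single summary representation, classically the mean of its training examples, and classifies a probe by its distance to each prototype. A minimal sketch under that standard assumption follows; the function name and toy data are invented for illustration.

```python
import math

def prototype_classify(probe, examples):
    """Classify `probe` by nearest category prototype.

    Each prototype is the mean of that category's training
    examples, so the stored representation is a single vector
    per category rather than every example.
    """
    def mean(vectors):
        n = len(vectors)
        dims = len(vectors[0])
        return tuple(sum(v[i] for v in vectors) / n for i in range(dims))

    def dist(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    prototypes = {label: mean(vs) for label, vs in examples.items()}
    return min(prototypes, key=lambda label: dist(probe, prototypes[label]))

# Toy usage: prototypes land at (0, 1) for "A" and (2, 1) for "B"
label = prototype_classify(
    (0.5, 1.0),
    {"A": [(0.0, 0.0), (0.0, 2.0)], "B": [(2.0, 2.0), (2.0, 0.0)]},
)
```

The compression is visible in storage cost: one vector per category here, versus one vector per training example in the exemplar sketch above.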

Explanation-Based Theories of Concept Learning
To be added...

Network Theories of Concept Learning
To be added...

Machine Learning Approaches to Concept Learning
Unlike the situation in psychology, the problem of concept learning within machine learning is not one of finding the "right" theory of concept learning, but one of finding the most effective method for a given task. As such, there has been a huge proliferation of concept learning methods. Here we simply list a sampling:


 * To be added