Moravec's paradox

"Computers are at their worst trying to do the things most natural to humans." - Hans Moravec, Mind Children

Moravec's Paradox is a principle in artificial intelligence and robotics. It implies that normal intuitions about which problems are "easy" or "hard" do not apply to machines: what is easiest for people is hardest for machines, and what is hardest for people may be easy for machines. The principle was first articulated by Hans Moravec and others in the early 1980s to help explain why "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

The biological basis of human skills
The explanation of the paradox lies in the theory of evolution. All human skills are implemented biologically, using machinery designed by the process of natural selection. Over the course of evolution, natural selection has preserved every design improvement and optimization, so the older a skill is, the more time natural selection has had to improve its design. Abstract thought developed only very recently, and consequently, we should not expect its implementation to be particularly efficient.

As Moravec writes: “Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

A compact way to express this argument would be:
 * We should expect the difficulty of reverse engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
 * The oldest human skills are largely unconscious and so appear to us to be effortless.
 * Therefore, we should expect skills that appear effortless to be difficult to reverse engineer, but skills that require effort may not necessarily be difficult to engineer at all.

Some examples of skills that have been evolving for millions of years: recognizing a face, moving around in space, judging people’s motivations, catching a ball, recognizing a voice, setting appropriate goals, paying attention to things that are interesting; anything to do with perception, attention, visualization, motor skills, social skills and so on.

Some examples of skills that have appeared more recently: mathematics, engineering, human games, logic and much of what we call science. These are hard for us because they are not what our bodies and brains were primarily designed to do. These are skills and techniques that were designed recently, in historical time, and have had at most a few thousand years to be refined, mostly by cultural evolution.

Historical significance
In the early days of artificial intelligence research, leading researchers often predicted that they would be able to create thinking machines in just a few decades (see history of artificial intelligence). Their optimism stemmed in part from the fact that they had been successful at writing programs that used logic, solved algebra and geometry problems, and played games like checkers and chess. Logic and algebra are difficult for people and are considered a sign of intelligence. The researchers assumed that, having (almost) solved the "hard" problems, they would soon solve the "easy" problems of vision and commonsense reasoning. They were wrong, of course, and one reason is that these problems are not easy at all, but incredibly difficult. The fact that they had solved problems like logic and algebra was irrelevant, because those problems are extremely easy for machines to solve.