Philosophy of artificial intelligence

The philosophy of artificial intelligence concerns such questions as:
 * What is intelligence? How can one recognize its presence and applications?
 * Is it possible for machines to exhibit intelligence?
 * Is the human brain essentially a computer?
 * Can a machine have a mind, mental states and consciousness in the same sense that we do?
 * Is creating human-like artificial intelligence moral? What ethical stances should such machines take? What ethical stances should humans take toward them?

The first question defines the terms of the debate. The next three questions reflect the divergent interests of AI researchers, cognitive scientists and philosophers, respectively. The last question is discussed in a sister article on the ethics of artificial intelligence.

Important propositions in the philosophy of AI include the following.

 * Turing's "polite convention": If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
 * The founding premise of AI: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.
 * The physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means of general intelligent action.
 * Hobbes' mechanism: Reason is nothing but reckoning.
 * Searle's weak AI hypothesis: A physical symbol system can act intelligently.
 * Searle's strong AI hypothesis: A physical symbol system can have a mind and mental states.

These positions are concerned with the relationship between five concepts: intelligence, minds (and mental states), brains, machines and physical symbol systems. Defining each of these terms is part of understanding their relationships.

Intelligence
Consider these questions:
 * "Can machines fly?" This is true, since airplanes fly.
 * "Can machines swim?" This is false, because submarines don't swim.
 * "Can machines think?" This is the question we need to answer. Is it like the first or like the second?

The difference is in how we understand the word: is "thinking" like "swimming" -- something that human beings do by definition? Or is it possible to define "thinking" without reference to human beings, so that we can determine whether our machines are doing it? Unfortunately, there is no standard definition of intelligence.

Turing Test
Alan Turing attempted to answer the question "Can machines think?" in his famous and seminal 1950 paper "Computing Machinery and Intelligence". The paper reduced the problem of defining intelligence to a simple question about conversation. He suggested that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.

Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks." Turing's test extends this polite convention to machines:


 * If a machine acts as intelligently as a human being, then it is as intelligent as a human being.

The power of the Turing test derives from the fact that it is possible to talk about anything. Turing wrote that "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include." John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well." To pass a well-designed Turing test, the machine would have to use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed; this would force the machine to demonstrate skills of vision and robotics as well. Together these represent almost all of the major problems of artificial intelligence.

Russell and Norvig note that "AI researchers have devoted little attention to passing the Turing Test", since there are easier ways to test their programs: by giving them a task directly, rather than through the roundabout method of first posing a question in a chat room populated with machines and people. Turing never intended his test to be used as a real, day-to-day measure of the intelligence of AI programs; he wanted to provide a clear and understandable example to aid discussion of the philosophy of artificial intelligence. Real Turing tests, such as the Loebner Prize, don't usually force programs to demonstrate the full range of intelligence and are reserved for testing chatterbot programs.

Human intelligence vs. intelligence in general
One criticism of the Turing test is that it is explicitly anthropomorphic. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people? Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'" AI founder John McCarthy has long argued against human measures of intelligence, saying in a speech that "artificial intelligence is not, by definition, simulation of human intelligence".

Recent AI research defines intelligence in terms of rational agents or intelligent agents. An "agent" is something that perceives and acts in an environment; a "performance measure" defines what counts as success for the agent. This definition has the advantage that it does not distinguish between humans and machines:
 * If an agent acts so as to maximize the expected value of a performance measure, based on past experience and knowledge, then it is intelligent.
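
The agent definition above lends itself to a small, toy illustration. The sketch below is purely illustrative and assumes nothing beyond the text: `expected_performance` is a made-up performance measure, and the agent simply picks the action that maximizes it given its percept history.

```python
# A minimal sketch of the "rational agent" definition: an agent perceives an
# environment and chooses the action that maximizes the expected value of a
# performance measure, based on past experience. All names are hypothetical.

def expected_performance(action, percept_history):
    """Toy performance measure: reward the action that matches
    the most recently perceived state of the environment."""
    if not percept_history:
        return 0.0
    return 1.0 if action == percept_history[-1] else 0.0

def rational_agent(actions, percept_history):
    """Choose the action with the highest expected performance."""
    return max(actions, key=lambda a: expected_performance(a, percept_history))

print(rational_agent(["left", "right"], ["right", "left"]))  # → "left"
```

A real agent would estimate expected values from an uncertain model of its environment; the point of the sketch is only that the definition makes no reference to whether the agent is human or mechanical.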

The basic premise of AI

 * Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.

Working AI researchers are primarily interested in this question: is it possible to create a machine that can solve all the problems we solve using our intelligence? This question defines the scope of what machines will be able to do in the future and guides the direction of AI research. AI researchers are far less concerned with the issues raised by computationalism or Searle's strong AI.

Arguments against the basic premise must show that building a working AI system is impossible, either because there is some practical limit to the abilities of computers, or because there is some special quality of the human mind that is necessary for thinking and yet cannot be duplicated by a machine (or by the methods of current AI research).

Arguments in favor of the basic premise must show that such a system is possible. The most convincing demonstration would be to build one. In this way, while attacking the basic premise is a philosophical problem, defending it is an engineering problem.

Symbol systems vs. machines
An important issue is the distinction between symbol systems and machines. A physical symbol system (also called a formal system) takes physical objects (symbols), combines them into structures (expressions) and manipulates them (using processes) to produce new expressions.
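
A tiny example may make this definition concrete. In the sketch below (invented for illustration; the rule is a classical syllogism, not taken from any particular AI system), symbols are combined into tuple "expressions", and a single "process" derives new expressions from old ones.

```python
# Illustrative physical symbol system: symbols, expressions built from them,
# and a process that produces new expressions. All names are made up.

# Expressions are structures (tuples) of symbols.
knowledge = [
    ("is-a", "Socrates", "man"),   # Socrates is a man
    ("all", "man", "mortal"),      # all men are mortal
]

def derive(expressions):
    """A process: apply a syllogism rule to produce new expressions."""
    new = []
    for (tag1, x, kind) in expressions:
        for (tag2, k, prop) in expressions:
            if tag1 == "is-a" and tag2 == "all" and kind == k:
                new.append(("is-a", x, prop))
    return new

print(derive(knowledge))  # → [('is-a', 'Socrates', 'mortal')]
```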

The basic premise of AI was restated in terms of physical symbol systems by Allen Newell and Herbert Simon in a 1963 paper:
 * A physical symbol system has the necessary and sufficient means of general intelligent action.

In the simplest possible sense, all computer programs are symbol systems, since they manipulate the binary symbols one and zero. In fact, the Church-Turing thesis implies:
 * A (sufficiently complex) physical symbol system can accurately duplicate the behavior of any other physical symbol system.
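
This universality can itself be demonstrated in miniature: the sketch below has one symbol system (a Python script) duplicate the behavior of another (a made-up three-instruction register machine, represented purely as data).

```python
# One symbol system simulating another: a toy register-machine program,
# given as data, is interpreted by this script. The instruction set
# ("inc", "add") is invented for illustration.

program = [
    ("inc", "a"),        # a += 1
    ("inc", "a"),        # a += 1
    ("add", "b", "a"),   # b += a
]

def run(program):
    """Interpret the toy instruction list and return the final registers."""
    regs = {"a": 0, "b": 0}
    for instr in program:
        if instr[0] == "inc":
            regs[instr[1]] += 1
        elif instr[0] == "add":
            regs[instr[1]] += regs[instr[2]]
    return regs

print(run(program))  # → {'a': 2, 'b': 2}
```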

However, a distinction is usually made between the kind of high-level symbols that directly correspond with objects in the world and the more complex "symbols" that are present in a machine like a neural network. This distinction between high-level symbol-manipulating programs and general "machines" like neural networks will become very important, since many of the critiques of AI apply only to this kind of high-level symbol manipulation.

Lucas, Penrose and Gödel
Gödel's incompleteness theorems imply that some propositions are forever beyond the reach of any system that follows formal rules. A human being, however, can (with some thought) see the truth of these "Gödel statements". In 1961 John Lucas argued that this showed that human reason would always be superior to machines. He wrote "Gödel's theorem seems to me to prove that mechanism is false, that is, that minds cannot be explained as machines."
 * There are statements that no physical symbol system can prove.

Roger Penrose expanded on this argument in his 1989 book The Emperor's New Mind, where he speculated that quantum mechanical processes inside individual neurons gave humans this special advantage over machines.

Responses to Lucas and Penrose
Russell and Norvig note that Gödel's argument only applies to idealized machines, such as Turing machines, that have an infinite amount of memory. Real machines are always finite, and so Gödel's argument does not apply to them. In fact, a machine with a finite amount of memory can be modeled in propositional logic, which (unlike first-order predicate logic) is decidable.

Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach, explains that these "Gödel statements" always refer to the system itself, much as the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether machine or human, even Lucas himself. Consider:
 * Lucas can't assert the truth of this statement.
This statement is true, but cannot be asserted by Lucas. It shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is moot.

Dreyfus and Heidegger: The primacy of unconscious skills
Hubert Dreyfus argued that human intelligence and expertise depended primarily on unconscious instincts rather than conscious symbolic manipulation.

Dreyfus identified two different kinds of skills, which he called "knowing-that" and "knowing-how" (based on Heidegger's distinction between present-at-hand and ready-to-hand). Knowing-that uses logic, language and symbols, and Dreyfus agreed that a physical symbol system may be able to imitate it. Knowing-how is a form of contextually circumscribed guessing that allows us to arrive at an answer or take an action without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say to a troubled friend. (Malcolm Gladwell would later name this "fast" process of thinking a "blink" in a bestseller of the same name.)

"Knowing-how" requires that we use all of our unconscious intuitions, attitudes and knowledge about the world. This context or "background" (related to Heidegger's Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively. It affects what we notice and what we don't notice, what we expect and what possibilities we don't consider. (Gladwell calls this "thin-slicing").

Dreyfus claimed that no physical symbol system, as they were implemented in the 70s and 80s, could capture this background or do the kind of fast problem solving (or "blinking") that it allows. This, he claimed, showed that some aspects of human intelligence do not depend on symbol manipulation, refuting the physical symbol system hypothesis.

Responses to Dreyfus
Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing machinery and intelligence, where he had classified this as the "argument from the informality of behavior." Turing argued in response that, just because we don't know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern Dreyfus' background, for example, active vision is addressing the problem of directing sensors towards those aspects of the environment that are most "interesting" or "useful" using a theory of "information value".

The situated movement in robotics research also attempts to capture our unconscious skills at perception and attention.

In fact, since Dreyfus first published his critiques in the 60s, AI research in general has moved away from high-level symbol manipulation, or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier wrote: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."

Computationalism: brains are computers

 * Reason is nothing but reckoning

Computationalism asserts that, at some level, human brains are computers. This issue is of primary importance to cognitive scientists, who study the nature of human thinking and problem solving.

For AI researchers, if computationalism can be shown to be true, then it provides strong evidence that the basic premise of AI is true and suggests that research should focus on duplicating human brain functions.

Strong AI vs. weak AI

 * See also: Strong AI, where the term "strong AI" is used to describe a system with artificial general intelligence.


 * A physical symbol system can have a mind and mental states.

The "strong AI hypothesis" and "weak AI hypothesis" are the names of two contrasting philosophical interpretations of what a successful artificial intelligence program really represents. Weak AI claims only that it is possible (and useful) to build a system with intelligence. Strong AI agrees, but goes on to claim that such a system would actually have a mind, mental states or consciousness in the same way people do.

The terms were introduced by philosopher John Searle in his 1980 paper "Minds, Brains, and Programs", where he wrote: "I find it useful to distinguish what I will call 'strong' AI from 'weak' or 'cautious' AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."

Searle's weak AI hypothesis is a version of the basic premise of AI, the only significant difference being that Searle's weak AI only claims that machines can perform some intelligent behaviors, whereas the basic premise of AI claims that machines can perform any intelligent behavior. The weak AI claim is almost trivially true: machines have been demonstrating some intelligent behavior since at least 1956, when Newell and Simon wrote Logic Theorist.

Searle introduced the terms to isolate strong AI from weak AI, so he could focus on what he thought was the more interesting and debatable issue. He wanted to say that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question to answer. Strong AI is related to the hard problem of consciousness, the mind-body problem, the problem of other minds and other difficult questions in the philosophy of mind. Strong AI is primarily of concern to philosophers.

Many AI researchers dismiss strong AI as being uninteresting or perhaps even meaningless. Russell and Norvig write: "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." AI founder Marvin Minsky said that Searle "misunderstands, and should be ignored."

Searle's Chinese Room
John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates "general intelligent action." Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person. Lock the person in a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or that has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room can't be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.

Searle goes on to argue that actual mental states and consciousness require specific "causal properties of the brain". He is not a dualist; rather, he believes that there is something special about brains and neurons that gives rise to minds: in his words, "brains cause minds."