The Chinese Room
Functionalism
is the doctrine that minds exist because, and only because, some
systems, usually but not necessarily human brains, perform certain
functions. The functions in question are of course all information
processing functions. According to this theory, you have a mind
because, and only because, your brain processes certain information in
certain ways. According to this theory, you have the particular mind
you do because your particular brain processes information in its own
particular way. Your personality, your preferences, desires, hopes,
fears and dreams are all encoded in the structure of your brain. You
exist as the person you are because your brain works the way it does.
Functionalism
is actually a pretty obvious implication of the mind-brain identity theory. Mind-brain identity theory holds that minds only exist because
brains do what they do. From this it follows that whenever a brain does
the kinds of things that make a mind exist, a mind will exist. A
healthy, normally functioning human brain will produce a conscious,
thinking mind whenever it performs those normal functions. But if minds
exist whenever brains do those brain functions, minds will also exist whenever anything does those functions, even if the thing doing the functions is not itself a human brain. So, if mind-brain identity theory is true, functionalism is necessarily also true.
An
important objection to functionalism was articulated by the philosopher
John Searle. Searle
points out that there is a difference between computation and
semantics. To see this difference, think about the difference between
the speech recognition software I am using to write my materials for
this class, and a human stenographer who transcribes dictation. Both
systems take in a stream of words, but where the mechanical system could not possibly understand anything that I say, the human stenographer
will understand absolutely everything. This is a vitally important
difference. The only reason the stenographer can understand what I
dictate is because the stenographer has a mind, and understanding is
one of the functions of the mind. The mechanical system cannot possibly
understand the words it is hearing, and so it cannot possibly have a
mind.
Searle's
objection is that computers and computer-like systems will only ever be
able to do the kinds of things that speech recognition systems do,
which is to mechanically convert strings of symbols into other strings of symbols without understanding anything. They will never be able to do what human beings do, which is to convert strings of symbols into forms of
conscious understanding. There is a story about the emperor Caligula,
and how he died. Supposedly, Caligula would periodically call for the
captain of his guard and give him a list of people to be executed that
day. One day, Caligula decided that the captain of the guard should
himself be executed. Unfortunately for him, he issued the execution
order in his usual manner. The captain of the guard read the list, saw
his own name on it, and decided to kill Caligula instead of having
himself executed. (It doesn't really matter how Caligula died, as long
as it was painful.) The point here is that, if Searle is correct, a
robot captain of the guard would not have recognized that his name on the list meant that he would be executed, and so a robot captain of the
guard would have passed on the order without understanding its meaning.
Another
way to understand Searle's objection is to consider the issue of what
is called the "Turing Test." The Turing test is basically a failed
attempt to determine the appropriate criterion for deciding when a
computer is producing a mind. Alan Turing, who laid the theoretical groundwork for the device we now call a "computer" and helped found modern computational theory, was
once asked when computers would become conscious. ("Consciousness" is
not the same as "mind," but that doesn't matter here.) Turing replied
that computers would be conscious when human beings could not tell the
difference between a computer and another human being purely on the
basis of verbal output. Imagine that you are in a room with nothing but
a computer terminal. The screen suddenly displays the word "hello," and
you reply by typing in a response. A conversation follows. Now imagine
that there are two possibilities. The first is that the computer is
connected to another terminal being operated by a human being. The
second is that the computer is connected to another computer that is
running some piece of software designed to make you think that you are
interacting with a human being. Turing's answer, then, was that computers would be conscious when you couldn't tell the difference between a computer and a human being on the other end of a communication system like this.
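To make the setup concrete, here is a toy sketch in Python of the kind of exchange Turing had in mind. Everything in it is invented for illustration: the canned replies are a deliberately crude stand-in for the machine contestant, and a real test would put the human respondent at a separate terminal so the interrogator could not see who was typing.

```python
import random

def program_reply(question: str) -> str:
    # A deliberately crude stand-in for the machine contestant.
    return random.choice(["Interesting. Why do you ask?",
                          "I'd rather hear what you think about that."])

def imitation_game(rounds: int = 3) -> None:
    """You play the interrogator; the hidden respondent is either the canned
    program above or a human typing at the keyboard."""
    hidden_is_program = random.choice([True, False])
    for _ in range(rounds):
        question = input("You: ")
        answer = (program_reply(question) if hidden_is_program
                  else input("Hidden human: "))
        print("Them:", answer)
    guess = input("Program or human? ").strip().lower().startswith("p")
    print("Correct." if guess == hidden_is_program else "You were fooled.")

imitation_game()
```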
The
problem with the Turing test is that it is possible to write programs
that people cannot distinguish from human beings without those programs
being conscious. A program called PC Therapist II regularly fools
people into thinking that it is a real human being. It does so by
taking its input and repackaging it in the form of "active listening"
questions. Although it sounds like it understands, all it is doing is
taking keywords from its input and reshuffling them into thoughtful-looking sentences. Although some people, especially those who know how this program works, can distinguish between the software and a human being, it seems clear that the software could be expanded to avoid the giveaways, and to supply more and more outputs of the type that characterize real human thinking. And so I, at least, think it is possible to write a sophisticated program that takes input that it does not understand, reshuffles and processes it in various ways to create mindless output that is indistinguishable from the kind of output produced by real thinking human beings. I think that if the programmers are bound and determined to create a program that absolutely no one will be able to distinguish from a real human being, they will eventually be able to do it, and they will be able to do it without making the program actually able to understand any of its inputs. In
other words, I think the Turing test is too easy.
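To make the keyword-reshuffling idea concrete, here is a minimal sketch in Python of the general technique described above. The pattern rules and responses are invented for illustration; this is not the actual PC Therapist II code, just the ELIZA-style trick of turning the user's own words back into questions.

```python
import random
import re

# Invented keyword rules in the style of ELIZA-like "active listening" chatbots.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bi think (.+)", ["What makes you think {0}?", "Do you really think {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason, or is it just {0}?"]),
]
FALLBACKS = ["Please tell me more.", "How does that make you feel?"]

def reply(user_input: str) -> str:
    """Reshuffle keywords from the input into a question.

    Nothing here involves understanding: the program only matches patterns
    and splices the captured words into a canned template.
    """
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(reply("I feel that nobody ever listens to me."))
# -> e.g. "Why do you feel that nobody ever listens to me?"
```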
John
Searle believes that no computer program will ever be able to produce a
mind because, he thinks, computers cannot ever understand the symbols
they manipulate. He bases his argument on the following thought
experiment. Imagine there is a man who does not understand written
Chinese, but who is very good at recognizing and remembering symbols,
and at looking things up based on symbols that are incomprehensible to
him. This man is placed in a room with several thousand numbered books.
This room is separated from the outside world by a door with a small
slit in it. Messages are passed in through this slit. These messages
are written in Chinese. The man does not read Chinese, but he is able
to look up symbols. When he gets a message he looks up the first symbol
in book number one. When he finds that symbol, he also finds next to it
the number of another book in which to look up the second symbol. This
next book directs him to another book, and so on until finally some
book or combination of books directs him to write down a new sequence
of Chinese characters, none of which he understands. He then passes
this response out through the slit.
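One way to picture the purely mechanical bookkeeping the man performs is as a chained table lookup, something like the Python sketch below. The rule "books" here are invented placeholders with only a few entries; the library Searle imagines, able to answer arbitrary general knowledge questions, would be astronomically larger, but the man's procedure would be the same.

```python
# Toy model of the man's procedure: follow each incoming symbol from numbered
# "book" to "book" until some entry directs him to write down a reply.
# The entries are invented placeholders; the man understands none of them.
BOOKS = {
    1: {"你": (2, None), "他": (3, None)},   # symbol -> (next book, reply to write)
    2: {"好": (None, "我很好，谢谢。")},
    3: {"好": (None, "他很好。")},
}

def chinese_room(message: str) -> str:
    book = 1
    for symbol in message:
        entry = BOOKS.get(book, {}).get(symbol)
        if entry is None:
            return ""                 # no rule found: nothing gets passed out
        next_book, reply = entry
        if reply is not None:
            return reply              # a book directs him to write this down
        book = next_book
    return ""

print(chinese_room("你好吗"))          # -> "我很好，谢谢。" ("I'm fine, thanks.")
```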
Outside
of the room there is a woman who does understand Chinese. The messages
she writes and passes into the room are general knowledge questions,
written in Chinese. Because of the way information is encoded in the
enormous library of books in the room, the messages written but not
understood by the man in the room constitute answers to those general
knowledge questions. Searle's argument rests on the claim that it is
obvious that the Chinese room does absolutely everything that a
computer ever could do in the way of processing symbols according to
rules, and that it is obvious that the room does not understand Chinese.
And, it may even be true that the room passes the Turing test, because
the woman outside might come to believe that the room contains a human
being who actually understands Chinese. But there is no understanding in the room, because the
man in the room would not know the difference between a set of
translation rules that gave meaningful answers to serious questions,
and a set of translation rules that returned absolute gibberish. We can
imagine someone sneaking in and replacing a couple of the books so that
the room starts producing nonsense. The woman outside would notice the
difference because she understands Chinese, but the man in the room
would have absolutely no idea that anything was amiss. From this,
Searle draws the conclusion that mere computations can never produce a
mind because mere computations can never constitute understanding in
the sense that the woman outside understands Chinese, and the man
inside doesn't.
If you'd like more information:
In Defence of Strong AI: Semantics from Syntax
Essay Topic Question:
Does the Chinese Room thought experiment refute
functionalism? Think this through from all sides before
you start writing. Come up with reasons for and against the idea that
this argument refutes functionalism, and figure out for yourself
whether you think it does or not. When you've thunk it through, write
an odyssey paper
explaining all your thinking, especially what you think, why you think
it, and why you don't agree with the opposing arguments.
Hints:
- Does Searle ever define meaning?
- Does Searle ever explain how human brains establish meaning?
- What is meaning, anyway?
- Does Searle ever show that the Chinese Room represents the only way a computer can work?
- Does Searle prove that the Chinese Room represents the only way a computer can interact with its environment?
- Does Searle prove that computer programs can never do the same things as neurons?
- Does Searle prove that computer logic structures can never do the same things as arrangements of neurons?
- Does Searle prove that computers can never do the exact things that human brains do to create meaning?
Copyright © 2018 by Martin C. Young