Let me reply to both Frank's and Pierre-Normand's postings.
"F. Frank LeFever" wrote:
> >The whole issue seems to depend on what is meant with "understanding".
> >In a functional sense the Chinese room does understand. In the sense
> >in which the term is commonly used, it doesn't, because understanding
> >a question in this sense requires conscious thinking about it, i.e.
> >grasp of its semantic content.
> This is a good example of "defining" a term by reference to another
> undefined term ("passing the buck").
Yes, that's exactly what definitions are all about. If you require
further definitions of all terms used in a definition, you not only
make defining a really difficult task but also end up either in an
infinite regress or in circularity.
> "Grasp" means what? What is it
> that "grasp" means that transcends what the Chinese room has done?
I won't define "grasp", but I'll explain what I mean by "grasp of
semantic content" to make my point clear. (This is my own argument; I
don't know whether Searle ever argued along these lines.)
If we forget about the man in the Chinese room for now - since he can
easily be replaced by a simple robot - what the room contains is:
- a set of syntactic rules
- a set of basic propositions (relations, I would suppose)
- a mechanism to process input according to the rules
Thus, if I ask whether snow is green, the "room" might analyze my
question as asking for the truth value of the proposition 'snow is
green'. Then it might find the propositions 'snow is white' and 'If
something is white it is not green' among its basic propositions, as
well as Modus Ponens among its rules and yield the correct answer:
"no".
The room's concept of 'snow' is exhausted by the place this concept
occupies in the set of propositions and rules. Therefore, the concept
can be described as an existentially quantified conjunction of
propositions:
There exists an X such that (X is called "snow" & X is an element of
the class of objects which are called "white" & which reflect light in
such-and-such a way & [...] & [...])
This transformation of a sentence is called "Ramsification" [1][2].
Similarly, the room's entire system of rules and propositions can be
ramsified into one huge existentially generalized conjunction.
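Spelled out in Lewis's style (my own paraphrase of the recipe in [1],
not a quotation): if T(t_1, ..., t_n) is the conjunction of all the
room's propositions and rules, with t_1, ..., t_n its concept terms
("snow", "white", ...), then the ramsified version is

  \exists x_1 \ldots \exists x_n \; T(x_1, \ldots, x_n)

i.e. the bare claim that *some* n objects stand in exactly the
relational pattern the system describes, whatever those objects are.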
Now let's see what is missing.
The trouble is that, given the room's finite capacity, there are
infinitely many different possible worlds that match the ramsified
system. This is a variant of Quine's familiar argument that every
theory is underdetermined by observation: likewise, the world is
underdetermined by any finite set of existentially quantified
propositions. After all, all that is required is a certain complex
pattern of relations among the objects of the world - no matter what
these objects are.
Put simply, in one of the worlds that instantiate the room's "belief
system", "snow" might refer to something like what we call "grass"
and "green" to something like our "red". The room can't tell the
difference between "snow is green" - meaning that snow is green in
our world - and "snow is green" - meaning that grass is red in the
other world.
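To make this concrete with the toy sketch from above (again my own
illustration, not anything from Searle): relabel every symbol
uniformly and the room behaves exactly the same, so nothing inside it
can fix what the symbols refer to.

  # Relabel every symbol of the toy room uniformly (illustrative only).
  facts = {("snow", "white")}
  rules = [(("?x", "white"), ("?x", "not-green"))]

  # One of infinitely many relabelings that leave the structure intact:
  perm = {"snow": "grass", "white": "yellow",
          "green": "red", "not-green": "not-red"}

  permuted_facts = {(perm.get(a, a), perm.get(b, b)) for (a, b) in facts}
  permuted_rules = [((x, perm.get(p, p)), (y, perm.get(q, q)))
                    for ((x, p), (y, q)) in rules]

  print(permuted_facts)   # {('grass', 'yellow')}
  print(permuted_rules)   # [(('?x', 'yellow'), ('?x', 'not-red'))]
  # The relabeled room answers "is grass red?" -> "no" by exactly the
  # same derivation the original uses for "is snow green?" -> "no".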
In an important sense the room thus cannot tell snow from grass or
green from red. But being able to do so, I would say, is crucial for
"understanding" the sentence "snow is green".
So I agree with your description of semantics:
> "semantics" was that branch of semiotics which
> dealt with relating symbols to objects. In its neuropsychological
> usage, "semantic networks" are the interrelations among semantic
> elements, e.g. associations among different sensory aspects of our
> experience with a given object, among these and different aspects of
> our activities involving these objects, etc., etc.
Indeed, what "grounds" human belief systems are relations between
concepts and non-syntactic properties: e.g. we know that snow looks
like "this" and feels like "that", where "this" and "that" refer to
sensory experiences ("phenomenal qualities"). This is exactly why
humans do grasp the semantic content of sentences while the room
doesn't.
I know an elderly woman who has been blind from birth and who likes
to knit. When she was educated in the 1950s she learned a lot about
colours, most remarkably an impressive set of "match" sentences like
"green doesn't match yellow", "every colour matches black", etc.
She simply learned these sentences by heart, and when she chooses
wool for knitting she always relies on this knowledge - although she
is very confused by current fashion, which has abandoned most of her
rules... :-)
I'm inclined to say that this woman doesn't understand the sentence
"green doesn't match yellow" in much the same way that the room
doesn't understand any sentence.
Pierre-Normand Houle wrote:
> Our ordinary concept of "understanding" is not being applied to rooms
> because rooms do not usually answer questions asked to them. Why
> wouldn't the concept be applicable to rooms had they such an ability? Is
> the phenomenon of a room answering questions intelligently really more
> remarkable than the phenomenon of a human being doing so with just
> a skullful of wet circuitry?
As I said in my last posting, there are two concepts of
"understanding": one is a functional sense, in which it means
something like being able to answer questions or execute commands.
Nobody doubts that the room can do that.
The other is the sense described above. And as long as the room's
belief system isn't "grounded" by some sort of sensory system but
instead remains entirely at the level of syntactic symbol
manipulation, it doesn't understand anything in this sense.
> Granted that Searle may not refer to a homunculus in the sense of a
> little "thinker" within the thinker (I don't know; I have made no
> effort to be familiar with his entire corpus; indeed, not even with
> more than a small sample of it--I've heard him speak and read one or
> two articles), but clearly he seems to think there is SOMETHING "in
> there" (even if it is 90% of the brain circuits or even 100% of them)
> which in principle (i.e., not empirically, but a priori) CANNOT be "in"
> a computer.
No, no. For him consciousness is a higher-order property of the
(whole) brain - "in the same harmless sense of 'higher-order' or
'emergent' in which liquidity is a higher-order, emergent property of
H2O molecules in certain conditions" [3]. As for computer systems, he
holds that the information they process is in itself meaningless; it
is only because we interpret the symbols that they get their content.
I am not sure whether he denies that connectionist systems can in
principle "understand"; at least, I think he shouldn't deny it.
Best wishes,
Wolfgang.
[1] David Lewis: "How to Define Theoretical Terms", in his
    Philosophical Papers, vol. 1, New York: OUP, 1983
[2] David Lewis: "Psychophysical and Theoretical Identifications",
    in Readings in Philosophy of Psychology, ed. by N. Block,
    vol. 1, Cambridge: Harvard UP, 1980
[3] translated back into English from p. 29 of: John Searle: "Die
    Wiederentdeckung des Geistes" (the German edition of The
    Rediscovery of the Mind), Munich: Artemis & Winkler, 1993
--
homepage: http://www.wald.org/wolfgang
"Wo kaemen wir hin, wenn jeder sagte: 'wo kaemen wir hin?' und keiner
ginge, um zu sehen, wohin wir kaemen, wenn wir gingen?"