I thank Wolfgang for his thoughtful development of these issues, and
generally agree with him.
I would, however, not make a dichotomy between the hypothetical room
and humans simply on the basis of its poverty of semantic associations
(lack of tactile information about snow, etc.); some of us humans have
richer semantic networks than others. His own example of the blind
woman calls to mind Helen Keller, with her greatly restricted range of
the possible sensory components of her semantic networks.
He seems, however, to be making a dichotomy on a more troubling basis:
ability to "feel" (and/or have sensory "experiences", appreciate
"phenomenal qualities", etc.), by which he clearly means something more
than ability to "use tactile information" (or any other information via
any other sensory input). ((Historical note: Hughlings Jackson said he
used "sensory" to refer to impulses coming to the higher neurla
centers, and "motor" to refer to those going out of it, but did NOT
mean to assert that one was associated with "sensation" and the other
not.))
This is the great MYSTERY, which may be too fundamental to be resolved:
our private "experience". We can tell other people that we have it,
and other people can tell us that they have it, but we can do so only
by saying something like "it's the way you feel when you put your hand
in cold water" (etc., etc.)--i.e. by reference to some external
mutually observable event.
We cannot go beyond this to give them the "information" which would
GIVE them the same experience, the same "sensation" that we have. It
is a matter of faith that what I "feel" when my hand is in cold water
is what YOU feel when YOUR hand is in cold water.
Knowing SOMETHING about what neural structures must be intact for
people to "feel" (or SAY that they "feel"), we can doubt that any
computer so far constructed "feels"; but for Searle and others to
assert that nothing but a living brain can EVER "feel" or "be
conscious" is in itself a leap of faith, which unfortunately is rarely
made explicit. Searle did ALLUDE to this problem of the definition of
"consciousness" early in his presentation to the ARNMD meeting last
December, but I waited in vain for him to address this problem
explicitly; he didn't that day.
F. LeFever
In <37890F18.38747016 at zedat.fu-berlin.de> Wolfgang Schwarz
<wschwarz at zedat.fu-berlin.de> writes:
>Let me reply to both Frank's and Pierre-Normand's postings.
>
>"F. Frank LeFever" wrote:
>
>> >The whole issue seems to depend on what is meant by "understanding".
>> >In a functional sense the Chinese room does understand. In the sense
>> >in which the term is commonly used, it doesn't, because understanding
>> >a question in this sense requires conscious thinking about it, i.e.
>> >grasp of its semantic content.
>>
>> This is a good example of "defining" a term by reference to another
>> undefined term ("passing the buck").
>
>Yes, that's exactly what definitions are all about. If you require
>further definitions of all the terms used in a definition, you will not
>only make defining a really difficult task but also be caught either in
>an infinite regress or in circularity.
>
>> "Grasp" means what? What is it
>> that "grasp" means that transcends what the Chinese room has done?
>
>I won't define "grasp", but I'll explain what I mean by "grasp of
>semantic content" to make my point clear. (This is my argument; I don't
>know whether Searle ever argued along these lines.)
>
>If we forget about the man in the Chinese room for now - since he can
>easily be replaced by a simple robot - what the room contains is:
>- a set of syntactic rules
>- a set of basic propositions (relations, I would suppose)
>- a mechanism to process input according to the rules
>
>Thus, if I ask whether snow is green, the "room" might analyze my
>question as asking for the truth value of the proposition 'snow is
>green'. Then it might find the propositions 'snow is white' and 'If
>something is white it is not green' among its basic propositions, as
>well as Modus Ponens among its rules and yield the correct answer:
>"no".
>
>The room's concept of 'snow' is exhausted by the place this concept
>occupies in the set of propositions and rules. Therefore, the concept
>can be described as an existentially quantified conjunction of
>propositions:
>There exists an X such that (X is called "snow" & X is an element of
>the class of objects which are called "white" & which reflect light in
>such-and-such a way & [...] & [...])
>This transformation of a sentence is called "Ramseification" [1][2].
>
>Similarly, the room's entire system of rules and propositions is a
>huge Ramseified conjunction of existentially generalized expressions.
>
>Now let's see what is missing.
>
>The trouble is that, given the finite capacity of the room, there are
>infinitely many different possible worlds that match the Ramseified
>system. This is a variant of Quine's familiar proof that every theory
>is underdetermined by observation: likewise, the world is
>underdetermined by any finite set of existentially quantified
>propositions. After all, all that is required is a certain complex
>relationship among the objects of the world - no matter what these
>objects are.
>
>Put simply, in one of the worlds that instantiate the room's "belief
>system", "snow" might refer to something which is like what we call
>"grass" and "white" to something like our "red". The room can't tell
>the difference between "snow is green" - meaning that snow is green in
>our world - and "snow is green" - meaning that grass is red in the
>other world.
>In an important sense the room thus cannot tell snow from grass and
>green from red. But this, I would say, is crucial for "understanding"
>the sentence "snow is green".
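
The underdetermination point can also be made with a deliberately tiny
sketch. Below, a purely structural (Ramseified) one-clause "theory" -
"some x has colour y" - is satisfied just as well by the intended world
as by a world in which the roles of snow and white are played by grass
and green; the clause, the worlds, and all names are illustrative
assumptions only:

from itertools import permutations

# One clause of a tiny Ramseified "theory": some x has colour y.
# Only the structure is fixed; which objects fill the roles is not.
CLAUSES = [lambda world, x, y: (x, y) in world["has_colour"]]

def satisfiable(world):
    """Is there ANY assignment of objects to x, y making every clause true?"""
    return any(all(clause(world, x, y) for clause in CLAUSES)
               for x, y in permutations(world["objects"], 2))

intended = {"objects": {"snow", "white", "grass", "green"},
            "has_colour": {("snow", "white")}}
permuted = {"objects": {"snow", "white", "grass", "green"},
            "has_colour": {("grass", "green")}}

print(satisfiable(intended), satisfiable(permuted))   # True True

Nothing inside the theory itself distinguishes the two worlds, which is
the sense in which the room cannot tell snow from grass.
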
>
>So I agree with your description of semantics:
>
>> "semantics" was that branch of semiotics which
>> dealt with relating symbols to objects. In its neuropsychological
>> usage, "semantic networks" are the interrelations among semantic
>> elements, e.g. associations among different sensory aspects of our
>> experience with a given object, among these and different aspects of
>> our activities involving these objects, etc., etc.
>
>Indeed, what "grounds" human belief systems are relations between
>concepts and non-syntactic properties: e.g. we know that snow looks
>like "this" and feels like "that", where "this" and "that" refer to
>sensory experiences ("phenomenal qualities"). This is exactly why humans
>do grasp the semantic content of sentences while the room doesn't.
>
>I know an elderly woman who has been blind from birth and who likes to
>knit. When she was educated in the 1950s she learned a lot about
>colours, most remarkably an impressive set of "match" sentences like
>"green doesn't match yellow", "every colour matches black", etc.
>She simply learned these sentences by heart, and when she chooses wool
>to knit she always draws on this knowledge - although she is
>very confused by the current fashion, which has abandoned most of her
>rules... :-)
>I'm inclined to say that this woman doesn't understand the sentence
>"green doesn't match yellow" in much the same way in which the room
>doesn't understand any sentence.
>
>Pierre-Normand Houle wrote:
>
>> Our ordinary concept of "understanding" is not being applied to rooms
>> because rooms do not usually answer questions asked of them. Why
>> wouldn't the concept be applicable to rooms had they such an ability? Is
>> the phenomenon of a room answering questions intelligently really more
>> remarkable than the phenomenon of a human being doing so just
>> using a skullful of wet circuitry?
>
>As I said in my last posting, there are two concepts of
>"understanding": one is a functional sense, in which it means something
>like being able to answer questions or execute commands. Nobody doubts
>that the room can do that.
>The other is the sense described above. And as long as the room's
>belief system isn't "grounded" by some sort of sensory system but
>instead remains entirely at the level of syntactic manipulation of
>symbols, it doesn't understand anything in this sense.
>
>> Granted that Searle may not refer to a homunculus in the sense of a
>> little "thinker" within the thinker (I don't know; I have made no
>> effort to be familiar with his entire corpus; indeed, not even with
>> more than a small sample of it--I've heard him speak and read one or
>> two articles), but clearly he seems to think there is SOMETHING "in
>> there" (even if it is 90% of the brain circuits or even 100% of them)
>> which in principle (i.e., not empirically, but a priori) CANNOT be "in"
>> a computer.
>
>No, no. For him consciousness is a higher-order property of the (whole)
>brain - "in the same harmless sense of 'higher-order' or 'emergent' in
>which liquidity is a higher-order, emergent property of H2O molecules
>in certain conditions" [3]. As for computer systems, he holds that the
>information they are processing is in itself meaningless. It's only
>because we interpret the symbols that they get their content. I am not
>sure whether he denies that connectionist systems can in principle be
>able to "understand"; at least I think he shouldn't deny this.
>
>Best wishes,
>
>Wolfgang.
>
>[1] David Lewis: "How to Define Theoretical Terms", in his
>    Philosophical Papers, vol. 1, New York: OUP, 1983
>[2] David Lewis: "Psychophysical and Theoretical Identifications",
>    in Readings in Philosophy of Psychology, ed. by N. Block,
>    vol. 1, Cambridge: Harvard UP, 1980
>[3] translated back into English from p. 29 of: John Searle, "Die
>    Wiederentdeckung des Geistes" (The Rediscovery of the Mind),
>    Munich: Artemis & Winkler, 1993
>
>--
>homepage: http://www.wald.org/wolfgang
>
>"Where would we get to if everyone said, 'Where would we get to?', and
>nobody went to see where we would get to if we went?"