Jim Balter wrote:
> M.C.Harrison wrote:
> >
> > Brian J Flanagan wrote:
> > >
> > > > >To the extent that that is true, computation is irrelevant to
> > > > >cognition.
> > > BJ: And you have determined this ... how?
> >
> > Um, not an expert on this myself, but...
> >
> > Isn't there a mathematical proof by someone that a syntactically derived
> > system is capable only of certain things, and is inevitably stumped by a
> > particular set of problems when confronted with them?
> >
> > The Chinese Room thing.
> >
> > You know, where an Englishman replies to Chinese questions according
> > to a set of books that define how to answer: as each question is put
> > under the door, the Englishman refers to a book which tells him the
> > answer to give back. Can this Englishman be said to understand Chinese?
> Would it matter, in terms of how you remember or write about this
> in the future, if I were to point out that "someone" (Godel)
> proved something quite different from your description, and that
> this is a quite different matter from Searle's Chinese Room?
> I.e., your lack of expertise is in fact manifested by being wrong
> on every significant point?
So you don't like either argument? You could say so with
counter-arguments, rather than by being offensive.
I'd have considered that quoting two notions correctly didn't constitute
a series of significant points which could in any sense be called "my
errors"; perhaps the two notions themselves are wrong, in which case my
sources would be wrong. They are, as you say, notions from others.
I don't see how they can be wrong, as the first isn't stated explicitly,
and the second is a self-referential description which asks a question
rather than stating a proposition. Explain how this lack of expertise in
AI computing can cause a lack of information which renders the Chinese
Room notion invalid - I'd like to see that.
> Note that the claim that the English speaker (Searle is not an
> "Englishman") does not understand Chinese is not something that
> is supposed to *follow* from some proof, but rather is a claim Searle
> makes that he expects his readers to agree with, and most do, even if
It's part of the definition of the terms - the Englishman must not
understand Chinese, or the situation becomes less interesting.
I suggest the situation has an underlying assumption: that
"consciousness" is the difference between manipulating symbols
syntactically and comprehending the symbols as meaningful abstractions
of life experience. I don't know if this is entirely justified.
If it is, then a computer program running in a computer will always
remain a manipulator of symbols and never comprehend them, so long as it
is built in the traditional way as a combination of hardware, software,
and error correction.
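Just to make plain what I mean by a manipulator of symbols, here is a
rough sketch (Python, with an entirely made-up rule book and made-up
replies) of the room reduced to a lookup table; nothing in it represents
what any symbol means:

  # A deliberately trivial, hypothetical sketch: the "rule book" maps
  # incoming symbol strings to outgoing ones. Nothing here models
  # meaning; it is pure syntactic matching.
  RULE_BOOK = {
      "ni hao ma?": "wo hen hao, xiexie.",        # made-up entries
      "ni jiao shenme mingzi?": "wo jiao John.",
  }

  def room(question):
      # Copy out whatever the book says; no understanding anywhere.
      return RULE_BOOK.get(question, "wo bu dong.")   # default reply

  print(room("ni hao ma?"))

Whether that counts as understanding is exactly the point at issue.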
If "consciousness" is considered to be an emergent property of
sufficiently complex syntactic processes, then naturally this can be
modelled inside a traditional computer.
> only for the sake of argument. What Searle holds *follows* from the
> claim that the English speaker does not understand Chinese, is that
> a computer cannot understand Chinese merely by virtue of following
> instructions from a book. (The standard "systems response" to this
> is that the Chinese Room, as a system, might "understand" Chinese,
> whatever that unformalized notion might mean, even if the English
> speaker does not.) Searle does not make any claim that the computer
> or Chinese Room is "capable only of certain things"; his position is
If they're not in the book, the symbols are meaningless to the
Englishman. That part is inherent in the story. Whether this is what
Godel proved mathematically, I don't know. I dislike maths.
> the computer nonetheless lacks understanding. This is quite different
> from Roger Penrose's position -- he does think that something is lacking,
> and that Godel's theorems can be used to show this, but his
> argument is anything but a "mathematical proof", and his work has
> been strongly and ably refuted by numerous respected theorists, including
> Penrose's own mathematical tutor, Solomon Feferman (although
> these refutations seem to have little influence on what people *believe*
> about what has or has not been shown).
Penrose does produce interesting notions. The last time the conference
was on, he was studying bacteria and microtubules. I don't know him as a
mathematician, though.
> > I'll take a stab at this. In principle, my elderly computer was
> > perfectly capable of producing the same answer to a given question as my
> > spanking new PII, except that it takes quite a lot longer to get to the
> > answer.
> There are at least two problems with this: one is that many problems
> are defined in terms of real-world time constraints; a machine that is
> too slow to respond before the next clock tick will follow a different
> execution path (like printing "interrupt timeout" and stopping).
Not if it's been properly programmed and designed, it won't. Any
computer which displayed different behaviour would be said to have a
bug. The programmers would write software for specific functions and
would not generally write software to incorporate unintentional side
effects. While this could lead to AI, it would be entirely random and
undefined, and since evolution tends not to favour bug-ridden computers,
this is hardly more likely to evolve AI than the average cat. A lot less
likely, I'd suggest.
Maybe quantum computers or something can get round this aspect of
computers, but I wouldn't know about that at the moment.
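To illustrate what I mean above by "properly programmed", here is a
rough sketch (Python, all names and the deadline figure hypothetical) in
which the too-slow case is an explicit, specified path rather than an
accident:

  import time

  DEADLINE_SECONDS = 0.01  # made-up real-time budget

  def slow_computation():
      # Stand-in for work that may or may not finish within the budget.
      total = 0
      for i in range(100_000):
          total += i
      return total

  def respond():
      start = time.monotonic()
      result = slow_computation()
      elapsed = time.monotonic() - start
      if elapsed > DEADLINE_SECONDS:
          # The slow path is part of the design, not a bug:
          # the behaviour is defined either way.
          return "interrupt timeout"
      return str(result)

  print(respond())

Either branch is behaviour the programmer intended, which is all I mean
by the absence of a bug.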
> > This is not a completely satisfactory answer, because win95 checks to
> > see what cpu I've got and won't run on an 8088,
> That's a bit like saying that my Chinese book checks my nationality
> and won't let an American read it -- that's not quite how it works.
That would be precisely right: it's the computer equivalent of a
nationality check in the Chinese Room. The American would be able to
replace the Englishman, as you suggest, except that the American would
produce results more slowly.
> > And if a rock can be sentient, it would be better to let the computer
> > decide for itself what to think, rather than putting in a program which
> > permits no thoughts except those written in stone and coerced using
> > error correction. Else, what you see is what the programmer told it to
> > say, not what it is saying. This changes but little if the program can
> > evolve; it's still a program rather than AI.
> It might be well to factor your own lack of expertise into your faith
> in your own analysis.
And it might be as well for you to do precisely the same thing. Well,
unless you are a deity or something, which I have no reason to think is
the case.
Do you think that computers have intentionality? Microbes seem to have
intentionality, and they're far easier to construct.