
It's primitive; it's dumb (PLAUSIBLE definitions?)

Gary Forbis forbis at accessone.com
Tue Jul 13 17:28:24 EST 1999


Jim Balter <jqb at sandpiper.net> wrote in message
news:378B4CD6.2DE84249 at sandpiper.net...
> "F. Frank LeFever" wrote:
> > Knowing SOMETHING about what neural structures must be intact for
> > people to "feel" (or SAY that they "feel"), we can doubt that any
> > computer so far constructed "feels"; but for Searle and others to
> > assert that nothing but a living brain can EVER "feel" or "be
> > conscious" is in itself a leap of faith, which unfortunately is rarely
> > made explicit.
>
> Searle makes clear that he doesn't assert this.  But he does repeatedly
> state that he has proven that a machine that "feels" or "understands"
> cannot do so by virtue of computation alone.  The intellectual
> poverty of the philosophical community is illustrated by the
> fact that Searle isn't hounded out of the room whenever he makes
> such a proof claim; just imagine if Andrew Wiles had gone around
> claiming that he had proved Fermat's Last Theorem while that
> proof was still unsettled.  And yet philosophers seem to feel
> no embarrassment about such overwhelming methodological ineptitude.

I don't understand why you care about Searle's assertion.  I can understand
why you'd want to correct errors in what he asserts, but I don't see how
Searle's assertion has anything to say about anything important to AI.

It seems intuitively obvious to me that a machine does not feel or understand
by virtue of computation alone, just as it is obvious to me that a ball
cannot be made to fly by virtue of computation alone.  What's the big deal?
Should I care if a machine feels or understands while doing things I want it
to do?
