Well, I do recognize that by-now-classic insight, right at the
beginning of this diatribe: "There is no gravity; the earth sucks".
However, I think it goes downhill from there.
Let me pluck this one thing from the romantic rhapsody: I think I see
something like the scandalously dishonest use of an implicit (never
explicitly defined) concept of "consciousness" (maybe not even a
concept, maybe just a sentimental bias) pervading that new growth
industry, symposia on "brain and consciousness" (Searle, et al.).
Well, sorry for the convoluted sentence or whatever.
He seems to be saying, "you can't analyse or duplicate intelligence,
because you just can't. I don't care how intelligent it seems to be,
if it's not natural intelligence, it really isn't!" And then he goes
on about how wonderful it all is.
F. LeFever
In <7lrknh$jqv$1 at its.hooked.net> Bloxy's at hotmail.com (Bloxy's) writes:
>
>In article <3781235b at news3.us.ibm.net>, "Sergio Navega" <snavega at ibm.net> wrote:
>>F. Frank LeFever wrote in message <7lr2jg$cdo at dfw-ixnews8.ix.netcom.com>...
>>>[commenting on Mentifex diagrams]
>>>I wouldn't bother looking beyond the diagrams he posts from time to
>>>time: they are "armchair" or "common sense" notions of how the brain
>>>"should" be organized that show absolutely no sign of being influenced
>>>by what we know of actual brain organization on the basis of
>>>"experiments of nature" (e.g. lesions due to stroke), formal laboratory
>>>studies, experimental cognitive psychology, etc., etc.
>>>
>>>Seems to me I saw somebody's comment to the effect that AI systems do
>>>not HAVE to mimic natural systems and can stand on their own; but
>>>unless more novel/elegant/interesting as pure creations than these
>>>simplistic diagrams seem to imply, I see no point in pursuing such
>>>schemes even as a hobby or game...
>>>
>>>F. Frank LeFever, Ph.D.
>>>New York Neuropsychology Group
>>
>>I largely agree with this opinion. Some years ago, I used to think
>>that AI systems should pursue their own destinies, creating algorithms
>>without regard to their "biological" counterparts (read: us), just
>>by knowing what is required to make a system intelligent. This is,
>>in fact, the predominant working model of the majority of the AI
>>researchers that came from Computer Science departments.
>>
>>However, it became clear to me that the problem of "intelligence" is
>>much, much more complex than our naive illusions led us to believe.
>>
>>Today I see no point in doing anything related to AI without a
>>strong biological plausibility. One line of argument is that we must
>>follow the path of the natural intelligences until we grasp what
>>the core points of intelligence are, because we *still* don't know
>>what they are. Only after that will we be able to "propose" new
>>methods and algorithms to enhance biological intelligence with
>>functionally equivalent
>>"Functionally equivalent" is a definition, inapplicable to
intelligence.
>It is the same absurd view of the world, based on a model
>of giant sucking machine.
>
>Unless you can show that playfulness, art, beauty,
>love, and plenty of other aspects are functional,
>there is no way to reduce intelligence to a function of ANY kind,
>as the most exciting aspects of intelligence seem to be quite
>"useless" from the standpoint of maximization of the rate of sucking.
>>
>> (but better) processes.
>
>And that is just a result of this continuous obsession with
>"improvements" of ALL THERE IS,
>never quite even beginning to conceive
>that it is forever the best it can possibly be,
>never quite comprehending the grandeur of it
>AS IT IS.
>
>Before you can even begin to conceive the notions of
>"better" processes or whatever you can invent,
>first you need to learn to appreciate the glory,
>tremendous vastness and multidimensionality of
>ALL THERE IS.
>
>You have not even begun to appreciate what you already possess
>and what is already available,
>and you don't even have a criterion
>upon which you can assert that one alternative is "better"
>than the other,
>because, first of all,
>your systems of logic and reasoning
>are incomplete and could be quite misleading
>as to the very purpose of it all.
>
>The rules of enclosure, or reduction of an open system
>to a closed system, do apply to your reasoning,
>of which you don't even have any at the moment on this point.
>>"Better" is not universal criteria.
>It is custom tailored to each individual
>from the standpoint of improving the clarity of vision
>and getting in touch with the inner validity and grandior
>of each entity.
>
>You are still using the outdated models,
>limiting the multi-dimensional ALL THERE IS
>to a single-dimensional system of evaluation,
>which cannot be proven to universally hold.
>>
>>This does not preclude experimentation: often we'll have to create
>>"strange" things, based on unlikely methods.
>
>And often you are simply pulled into something.
>The "reasonable" observer may say you are craving something
>crazy, endangering your own survival, "asking for trouble",
>or all sorts of things,
>but you persist.
>You cannot resist that pull.
>>"In fact", creativity, as manifesting aspect of intelligence,
>is forever interested and looking at things "out of normal"
>scope.
>
>They jump essentially into the nothingness of the "future".
>There is absolutely no certainty that they will ever
>achieve what they intuitively crave.
>Quite often they even starve their entire life,
>pursuing that which they cannot avoid,
>as they still possess the most necessary element within:
>honesty, innocence and love [of life].
>
>They cannot prostitute that which they FEEL as significant
>for any amount of comfort the others may possess.
>>
>> What is important is
>>not to lose the goal of obtaining plausible results,
>
>Even the notion of "plausible results" is vague.
>>
>> comparing
>>our results with children's ways of learning and adults' methods
>>of tackling new problems.
>
>Yes, children are a very good reference.
>They are still uncorrupted, pure.
>They still have that necessary innocence,
>curiosity, joy of inquiry, regardless of a "reward",
>that ugly trick of manipulation,
>programmed into these children's minds
>from a very early stage.
>>
>>In summary, the road to intelligent systems is too elusive for us
>>to waste time with implausible and risky methods: we ought to follow
>>the steps of our brain, the best example of intelligence on Earth.
>
>And you need to expand that notion of a "brain".
>Right now it is just some kind of logic machine.
>The most interesting aspects are not even addressed.
>>
>>Sergio Navega.
>