In article <3658d07b.0 at ns2.wsg.net>, "Ray Scanlon" <rscanlon at wsg.net> wrote:
>A thinking machine will not be built for reasons of cost.
Yes. Plus size, mobility, multi-dimensionality of the input sensors,
reliability, flexibility, and a long list of other things.
Our mechanical models are just a joke compared to the real thing.
All we "see" through these artificial "intelligent" gadgets of ours
is a needle-sized opening onto life.
We are not even close to comprehending what we already have
inside us.
> We can examine the
>human neural net. An explanation of how the neural net works will serve as a
>design for a thinking machine, a design not to be implemented.
You see, once you start following the biological implementation
and attempt to duplicate it in a mechanical system, you will see
in no time how stupid that mechanical system is.
Either we attempt to incorporate the biological components
into the system, or we are bound to see how limited, bulky
and inflexible our mechanical models are.
And once you start incorporating the biological components,
the question will arise, why do you need to make a fake copy
of the real thing?
>We see no place for the predicate calculus or Turing machines. The various
>approaches of artificial intelligence have stalled, connectionism is lost in
>pattern recognition. It is the wiring of the neural net that holds promise.
The problem with current models is that they are all single-aspect,
most primitive models, gross oversimplifications of the real
thing.
Our current models cannot deal with complex multi-dimensional input
and are unable to incorporate diverse aspects into the same system.
They are all limited to one simple task of being able to "recognize"
a particular pattern in one specific application.
Furthermore, new input effectively overrides the existing knowledge
instead of expanding it. So we have no learning
abilities WHATSOEVER.
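A toy sketch of that overriding effect (my own illustration, not
anything from Ray's post: a single perceptron trained sequentially
on two conflicting labelings, where the second task wipes out the
first):

```python
# A single perceptron is trained on task A, then on a conflicting
# task B. After the second training run its accuracy on task A
# collapses: the new input overrode the old knowledge.

def train(w, b, data, epochs=20, lr=0.1):
    """Plain perceptron update rule on (x, label) pairs."""
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w * x + b > 0 else 0
            err = y - pred
            w += lr * err * x
            b += lr * err
    return w, b

def accuracy(w, b, data):
    return sum((1 if w * x + b > 0 else 0) == y for x, y in data) / len(data)

# Task A: positive inputs are class 1.
task_a = [(x, 1 if x > 0 else 0) for x in (-2, -1, 1, 2)]
# Task B: the opposite labeling -- directly conflicts with task A.
task_b = [(x, 0 if x > 0 else 1) for x in (-2, -1, 1, 2)]

w, b = 0.0, 0.0
w, b = train(w, b, task_a)
acc_a_before = accuracy(w, b, task_a)   # task A has been learned

w, b = train(w, b, task_b)              # now train only on task B
acc_a_after = accuracy(w, b, task_a)    # task A knowledge is gone

print(acc_a_before, acc_a_after)        # 1.0 before, 0.0 after
```

Of course a two-weight perceptron is exactly the kind of
oversimplified gadget being complained about here; the point is
only that even this degenerate case shows the overwrite.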
We can create a stupid gadget to recognize patterns in a specific
input stream, but even slightly new input often falls outside the
scope of recognizable patterns.
We cannot expand our current models to incorporate different
aspects of the input stream. We don't have models that can operate
in a multi-discipline environment.
Actually, we hardly have ANYTHING of ANY value.
>A natural fallout from neural net explanation and design will be a
>demonstration that consciousness is not needed in a material universe. This
>makes cognitive science moot.
Yes, and this is the very root problem.
We don't even have concepts of consciousness, purpose or intent
in our existing models. It is utterly unclear what we are trying
to achieve or address with all these stupid mechanical gadgets, as
their very purpose seems to be utterly absent beyond simple
mechanical manipulations of matter for the purpose of exploitation.
And we don't have much left to exploit.
>We cannot envision the universe or anything in it except as an observer. We
>can go round in circles as long as we please, we cannot envision soul (mind,
>self).
Yes. And we do not even accept the very validity of such concepts
as the very essence of the being.
The whole thing is just a fatalistic materialism,
the way we have it at the moment.
A bunch of monkey dudes, trying to program god into a stupid
silicon chip.
What an obscenity. What a blindness. What stupidity.
What futility. What insensitivity.
>Ray