In article <379a839b.2025652 at news.demon.co.uk>, malcolm at pigsty.demon.co.uk (Malcolm McMahon) wrote:
>On Sat, 24 Jul 1999 00:22:59 GMT, Bloxy's at hotmail.com (Bloxy's) wrote:
>>In article <379909bd.863792 at news.demon.co.uk>, malcolm at pigsty.demon.co.uk (Malcolm McMahon) wrote:
>>>On Fri, 23 Jul 1999 01:15:01 GMT, Bloxy's at hotmail.com (Bloxy's) wrote:
>>>>In article <37ac6f14.44643483 at news.demon.co.uk>, malcolm at pigsty.demon.co.uk (Malcolm McMahon) wrote:
>>>>>On Thu, 22 Jul 1999 13:30:24 -0400, "Ken Collins" <KPaulC at email.msn.com>
>>>>>wrote:
>>>>>>it's not 'AI' if it cannot direct its own learning in creative ways... as
>>>>>>soon as the machines are imbued with such, they're on their own.
>>>>>Learning, like all intelligent activity, is goal directed. And we set the
>>>>>goals.
>>>>Pure illusion.
>>>Expand a little. You don't think learning is goal directed. So what
>>>actually motivates it?
>>"Goal directed" interpretation is ONLY applicable
>>to the most rudimentary and mechanical level of KNOWN.
>Not at all. You're taking far too narrow a view of what can constitute a
>goal.
What is a goal?
> "Serve mankind" can be a goal.
ANYTHING CAN be a goal.
But "serve a mankind" is one of the biggest lies
you can invent.
You can not serve "mankind".
You can only serve a particular class, a particular interest,
and even there, depending on the system where you come from,
you will be serving a different class as your beliefs are
different.
Those pathological liars, claiming to "serve mankind",
first need to learn to serve themselves.
There is no need for this utopia scam,
which is used as a cover for something entirely different.
You do what you can to be a human, and to be sensitive and
considerate to the needs and opinions of at least those
around you, instead of throwing these nicely packaged
lies of a general nature into the air.
> "Maximise the replication of genes
>like those you have in your body" can be a goal.
And that is a purely mechanical operation.
You have to show the roots of this maximization in anything
beyond the purely mechanical level.
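To see just how mechanical that is, here is a toy sketch in
Python (every name and number in it is hypothetical, invented
only for illustration): a "population" of bit strings, a fitness
that counts matches against a target "gene", and a loop that
selects and mutates. Nothing in it but arithmetic.

    # Toy sketch: "maximise gene replication" as a mechanical loop.
    # All parameters (population size, mutation rate, target gene)
    # are hypothetical, chosen only to make the point.
    import random

    TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # the "gene" being replicated

    def fitness(genome):
        # purely mechanical: count positions matching the target
        return sum(1 for a, b in zip(genome, TARGET) if a == b)

    def mutate(genome, rate=0.05):
        return [bit if random.random() > rate else 1 - bit
                for bit in genome]

    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(20)]

    for generation in range(50):
        # select the fitter half, copy it, mutate the copies
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(g) for g in survivors]

    print(max(fitness(g) for g in population))  # approaches len(TARGET)

No goal "lives" anywhere in there. It is selection arithmetic,
and nothing more.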
> The problem is that
>most of the goals we have are so widely shared amongst humans (and to a
>lesser extent with other animals) most of us simply can't imagine how it
>could be any other way. And, to my mind, that ignorance is a risk factor
>when it comes to AI.
>When we first came up with the idea of robots (The basic notion goes
>back _way_ before the industrial age) we assumed that they would be
>mechanical (earlier, magically activated) men. They'd be essentially
>copies of human beings, able to do human-type tasks.
>Real robots aren't like that. If you want a robot to put gizmos
>together on a production line, most of the human worker is more than
>unnecessary, it's a pain to have around. All you want is an arm and
>maybe some simple senses. You could make a humanoid robot to do the job
>but it would do it less well than that arm.
>It's the same as the way that the early pioneers of aviation assumed
>that what they had to build was a mechanical bird.
>We're thinking the same way, at the moment, about AI. We think that a
>"full" AI has to be a replica of the human brain, complete with all the
>emotional responses that our genes put there for their own benefit.
>(These are the "goals" that drive human beings).
Nope. First of all, we completely ignored the issue of emotion,
and thus failed to prove that the mechanical "intelligence"
we created is a valid approach:
we took the principles guiding the real biological intelligence,
cut out those we cannot "explain" at the moment,
and then made the claim that we have intelligence.
A false assumption, based on an extremely limited set, the
equivalent of monkey logic.
>Yet real AI applications aren't at all like that. We adapt theories
>about ways this or that function of the animal nervous system is
>performed and try to make them work in silicon. Just like with real
>robots we abstract and replicate just the functions we need for the task
>at hand.
But you took the whole thing, cut things out, and you don't even
know which things are more significant: those that you kept,
or those that you cut out.
The real biological intelligence cannot be proven to even
begin to make sense if you remove the emotional aspects:
love, joy, playfulness, intuition and purpose.
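And look at what your "abstraction" actually keeps. The standard
artificial "neuron" is a weighted sum and a threshold, nothing
else. A sketch in Python (the weights, inputs and threshold are
hypothetical, picked for illustration):

    # A sketch of the standard artificial "neuron": weighted sum
    # plus threshold. Everything else the biological cell does is
    # cut out. Weights, inputs and threshold here are hypothetical.
    def neuron(inputs, weights, threshold=0.5):
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation > threshold else 0

    print(neuron([1, 0], [0.7, 0.3]))   # fires: 0.7 > 0.5
    print(neuron([0, 1], [0.7, 0.3]))   # silent: 0.3 <= 0.5

That is the whole of what was kept. Whether what was cut out
matters more is exactly the question you never answered.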
>Say you want to build a smart car. The last thing you want is for it to
>be, say, pondering on the problems of physics when it should be watching
>for pedestrians. Only an idiot would put in the facilities to function
>on that level of abstraction. They aren't just unnecessary, they are
>actually counter-productive.
Well, you have NOTHING on your hands
but delusions so far.
The same old, same old.
Fatalistic materialism is BOUND to result.
>So you want to produce an AI to do abstract research, for example. You
>give it a goal like "look for new scientific truths and tell us about
>them.",
Well, you'll get screwed up so badly in this exercise that
you will not be able to guarantee that your machine will
not deadlock within the first few moments of operation.
First of all, you need to pin down the notion of "scientific truth".
Then, you'd have to find a principle according to which
the "new" "truths" can be "found".
Then you need to find at least a logical impetus for your
"artificial intelligence" gadget to "tell you about it".
> or "Find new technoligies that will benefit mankind".
Pure bluff.