
First letter of Oz to the NG

Malcolm McMahon malcolm at pigsty.demon.co.uk
Sat Jul 24 05:22:29 EST 1999


On Sat, 24 Jul 1999 00:22:59 GMT, Bloxy's at hotmail.com (Bloxy's) wrote:

>In article <379909bd.863792 at news.demon.co.uk>, malcolm at pigsty.demon.co.uk (Malcolm McMahon) wrote:
>>On Fri, 23 Jul 1999 01:15:01 GMT, Bloxy's at hotmail.com (Bloxy's) wrote:
>
>>>In article <37ac6f14.44643483 at news.demon.co.uk>, malcolm at pigsty.demon.co.uk (Malcolm McMahon) wrote:
>>>>On Thu, 22 Jul 1999 13:30:24 -0400, "Ken Collins" <KPaulC at email.msn.com>
>>>>wrote:
>
>>>>>it's not 'AI' if it cannot direct its own learning in creative ways... as
>>>>>soon as the machines are imbued with such, they're on their own.
>
>>>>Learning, like all intelligent activity, is goal directed. And we set the
>>>>goals.
>
>>>Pure illusion.
>
>>Expand a little. You don't think learning is goal directed. So what
>>actually motivates it?
>
>"Goal directed" interpretation is ONLY applicable
>to the most rudimentary and mechanical level of KNOWN.

Not at all. You're taking far too narrow a view of what can constitute a
goal. "Serve mankind" can be a goal. "Maximise the replication of genes
like those you have in your body" can be a goal. The problem is that
most of the goals we have are so widely shared amongst humans (and to a
lesser extent with other animals) that most of us simply can't imagine
how it could be any other way. And, to my mind, that ignorance is a risk
factor when it comes to AI.

When we first came up with the idea of robots (the basic notion goes
back _way_ before the industrial age) we assumed that they would be
mechanical (earlier, magically activated) men. They'd be essentially
copies of human beings, able to do human-type tasks.

Real robots aren't like that. If you want a robot to put gizmos
together on a production line, most of the human worker is more than
unnecessary; it's a pain to have around. All you want is an arm and
maybe some simple senses. You could make a humanoid robot to do the job,
but it would do it less well than that arm.

It's the same mistake the early pioneers of aviation made in assuming
that what they had to build was a mechanical bird.

We're thinking the same way, at the moment, about AI. We think that a
"full" AI has to be a replica of the human brain, complete with all the
emotional responses that our genes put there for their own benefit.
(These are the "goals" that drive human beings).

Yet real AI applications aren't at all like that. We adapt theories
about how this or that function of the animal nervous system is
performed and try to make them work in silicon. Just as with real
robots, we abstract and replicate just the functions we need for the
task at hand.

Say you want to build a smart car. The last thing you want is for it to
be pondering the problems of physics when it should be watching for
pedestrians. Only an idiot would put in the facilities to function on
that level of abstraction. They aren't just unnecessary, they are
actually counter-productive.

So suppose, for example, you want to produce an AI to do abstract
research. You give it a goal like "look for new scientific truths and
tell us about them", or "find new technologies that will benefit
mankind" (or, less altruistically, "find ways that company X can
improve its medium-term share value"). You don't, indeed you can't,
tell it "do what you like", because until actual goals are supplied
there are no criteria for liking or not liking.
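
To make the point concrete, here's a toy sketch (all the names are
made up; it's an illustration of the argument, not a design): a
"learner" can only prefer one candidate over another once somebody
hands it an objective.

    import random

    def learn(candidates, objective=None):
        """Pick the candidate that scores best under the supplied goal.
        With no goal there's no criterion for preferring anything."""
        if objective is None:
            # "Do what you like" -- without a goal, any choice is as
            # good as any other, so the pick is arbitrary.
            return random.choice(candidates)
        return max(candidates, key=objective)

    designs = [0.1, 0.5, 0.9]
    print(learn(designs, objective=lambda x: x))   # maximise -> 0.9
    print(learn(designs, objective=lambda x: -x))  # minimise -> 0.1
    print(learn(designs))                          # no goal -> arbitrary

Swap in a different objective and the same machinery "likes" different
things; supply none and it has no basis for choice at all.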



