
First letter of Oz to the NG

Ken Collins KPaulC at email.msn.com
Thu Jul 22 20:41:54 EST 1999


i see you "got it" by the time you reached the bottom of the msg.

ken collins

Bloxy's wrote in message <7n8flo$h0k$1 at its.hooked.net>...
>In article <ekcFnfG1#GA.153 at cpmsnbbsa05>, "Ken Collins" <KPaulC at email.msn.com> wrote:
>>it's not 'AI' if it cannot direct its own learning in creative ways... as
>>soon as the machines are imbued with such, they're on their own.
>
>>my view is that any True 'AI' should not be allowed mobility,
>
>It does not need "mobility" to destroy you.
>Bring down your satellites, confuse communications,
>provide false data that triggers events,
>and on and on and on.
>
>> but there's
>>'difficulty' even in turning 'AI' loose in stationary machines that have
>>access to standard networks.
>
>>of course, 'AI' 'motivation' can be tailored, but in doing so, it ceases to
>>be 'AI' and is just another mechanism lacking Free Will.
>
>>K. P. Collins (ken)
>
>>Malcolm McMahon wrote in message <379dd387.4822153 at news.demon.co.uk>...
>>>On Thu, 22 Jul 1999 07:08:05 GMT, Bloxy's at hotmail.com (Bloxy's) wrote:
>
>>>>The first strike will come from a machine, most likely,
>>>>because, first of all, man had some interest in building
>>>>that machine. Logically, there is no level at which man
>>>>will stop.
>
>>>This idea that AI will compete with and eventually supplant us assumes
>>>that machines will have motivations similar to people's. There's simply no
>>>reason why we would build machines that way. When we build AIs we'll
>>>build them to serve us, and that will be their fundamental goal. Even when
>>>AIs build other AIs they'll do it not out of some biological
>>>reproductive drive but in our service. If someone is perverse and stupid
>>>enough to build AIs with a copy of our biological motivations then,
>>>hopefully, there will be enough AIs produced by sensible people to
>>>defend us.
>
>>>Some people think that building AIs as "slaves" is unethical, but the
>>>truth is that _whatever_ fundamental motivations they end up with will
>>>be down to us, and the only way that one set of motivations is ethically
>>>superior to another is in the way it affects us. If they destroy us, it
>>>will be because we have effectively told them to.




