
First letter of Oz to the NG

Malcolm McMahon malcolm at pigsty.demon.co.uk
Sun Jul 25 06:40:36 EST 1999


On Sat, 24 Jul 1999 21:55:42 GMT, Bloxy's at hotmail.com (Bloxy's) wrote:

>
>What is goal?

Take anything you do. Ask yourself why you did it. The answer will
usually be that it advances something you want. Now ask yourself why
you want that thing; if there's a (truthful) answer, then that thing is a
sub-goal. In that case, ask yourself why the thing that that sub-goal
works towards is something you want. Repeat this loop until you have no
answer. What you have left is a primary goal.
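The loop above can be sketched as code. This is a minimal illustration, not anything from the original post: the `serves_goal` mapping (each answer to "why do I want this?") and the example entries are invented for the sketch.

```python
def find_primary_goal(action, serves_goal):
    """Follow the chain of 'why do I want this?' answers until none remains."""
    current = action
    # While there is still a truthful answer, the current thing is a sub-goal.
    while serves_goal.get(current) is not None:
        current = serves_goal[current]
    # No further answer: what is left is a primary goal.
    return current

# Hypothetical example chain of answers:
serves_goal = {
    "study for exam": "pass the course",
    "pass the course": "earn a degree",
    "earn a degree": "get a good job",
    "get a good job": None,  # no truthful answer to "why?"
}

print(find_primary_goal("study for exam", serves_goal))  # -> get a good job
```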

>
>> "Serve mankind" can be a goal.
>
>ANYTHING CAN be a goal.
>But "serve a mankind" is one of the biggest lies
>you can invent.
>You can not serve "mankind".
>You can only serve a particular class, a particular interest,
>and even there, depending on the system where you come from,
>you will be serving a different class as your beliefs are
>different.
>

It can certainly be a lie, but I think there have been many people who
have had "serve mankind" as a goal. They may or may not succeed, but
that doesn't remove it as a goal.

>
>You do what you can to be a human and be sensitive and
>considerate to the needs and opinions of those, at least
>around you, instead of throwing these nicely packaged
>lies of general nature into the air.
>

That is serving mankind, albeit on a small scale.

>> "Maximise the replication of genes
>>like those you have in your body" can be a goal.
>
>And that is purely mechanical operation.
>You have to show the roots of this maximization in anything
>beyond purely mechanical level.
>

It's an immensely complex goal, creating an enormous variety of
behaviours, both selfish and altruistic. I suggest you read some
neo-Darwinist literature. Everyone around us shares the majority of
our genes.

>Nope. First of all, we completely ignored the issue of emotion
>and thus failed to prove that the mechanical "intelligence",
>we created, can be proven to be a valid approach,

Don't you get it? Emotional values are just a way of describing our
external, irrational goals.

No, we haven't ignored emotion. When an artificial neural net in
training gets a reward update that's emotion. It's a communication from
the setter of goals.
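The reward update described above can be shown in miniature. This is a toy sketch, not the author's system: the bandit task, learning rate, and exploration rate are all illustrative assumptions. The point is only that the reward signal is the single channel through which the goal-setter shapes behaviour.

```python
import random

def train_bandit(payouts, steps=5000, lr=0.1, epsilon=0.1, seed=0):
    """Learn action values from reward alone (epsilon-greedy bandit)."""
    rng = random.Random(seed)
    values = [0.0] * len(payouts)  # estimated value of each action
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            action = rng.randrange(len(payouts))
        else:
            action = max(range(len(payouts)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < payouts[action] else 0.0
        # The reward update: the only "communication from the setter of goals".
        values[action] += lr * (reward - values[action])
    return values

values = train_bandit([0.2, 0.8, 0.5])
print(max(range(3), key=lambda a: values[a]))  # learns to prefer action 1
```

The learner is never told what the goal is; it only receives the scalar reward, and its behaviour comes to serve the goal anyway.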

Emotion is the easy bit. The hardest part, actually, is the enormous
shared database of experience we call "common sense".


>as we took the principles, guiding the real biological intelligence,
>cut those, we can not "explain" at the moment, out,
>and then make a claim that you have intelligence.
>

It's not that hard to explain; it's just that a lot of people would
rather not listen to the explanations.

>
>But you took the whole thing, cut things out, and you don't even
>know which things are more significant, those that you kept,
>or those that you cut out.
>

No, you don't start with the whole thing. You start with the bits that
do things which resemble the task at hand.

>The real biological intelligence can not be proven to even
>begin to make sense if you remove the emotional aspects,
>love, joy, playfulness, intuition and purpose.
>

No, indeed, which is why I'm taking the origins of these things, our
irrational goals, as a basis.

Without these things, these drives or goals, intelligence would do
nothing.

But there's nothing ineffable about them.


>
>Well, you got NOTHING on your hands,
>but delusions so far.
>The same old, the same old.
>Fatalistic materialism is BOUND to result.
>

So tell me, how do you build a machine _without_ taking a materialistic
approach?

>
>First of all, you need to reconcile a notion of "scientific truth".
>Then, you'd have to find a principle, according to which,
>the "new" "truths" can be "found".
>Then you need to find at least logical impetus for your
>"artificial intelligence" gadget to "tell you about it".
>

Obviously I'm not advocating just telling it the words without giving a
context of definitions to go with them.

>
>You can not even agree on environmental issues.
>Nor can you agree on all other most important issues.
>

So you advocate a magnificent course of philosophical inaction?

>> (Or, less
>>altruistically, "Find ways that company X can improve its medium-term
>>share value").
>
>Yes, now you are talking the real talk.
>All the other jazz you got is just utterly uninteresting.
>
>And an example you provided is EXACTLY the one,
>that will destroy your entire structure, you hoped to build
>here with your pseudo arguments.
>

Why? Any more so than the existing competitive structure? (Remember,
company Y will have one too.)

>because the only ultimate "law" you got is this:
>
>Money = god,
>and god = money.
>
>And the only thing you will be able to create at the end
>is a giant sucking machine.

Obviously you're closer to understanding than you pretend if what I
write gets you so angry.




More information about the Neur-sci mailing list

Send comments to us at biosci-help [At] net.bio.net