On 10/27/02 8:16 AM, darth_versive wrote:
> Marcus Metzler <marcus.metzler at schunter.etc.tu-bs.de> wrote in message news:<3DBA7630.98502C11 at schunter.etc.tu-bs.de>...
>>>"Arthur T. Murray" wrote:
>>>http://www.scn.org/~mentifex/progman.html is the chapters of a new
>>>AI textbook based on modeling the brain in accordance with
>>>http://www.scn.org/~mentifex/theory5.html -- a Theory of Mind.
>>Sorry for intruding unasked (and perhaps not always being scientific
>>or using proper English), but after reading through those two pages I
>>just want to add some ideas about them.
>>But I think those models need some more input to enhance them.
>><cut from first http site>
>>For example, Sensorium is called before Think,
>>so that the AI Mind may first receive sensory input and
>>then think about its sensory input from the outside world.
>>The stub of the Emotion module is placed after
>>Sensorium but before Think so that the AI Mind
>>may experience an emotional reaction to its sensory input
>>and then think thoughts that include an emotional component.
>>The stub of Volition (free will) comes after Think
>>because thinking informs and creates the free will.
>><cut end>
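>>To make that ordering concrete, here is a minimal sketch in Python
>>(purely illustrative - only the module names Sensorium, Emotion, Think
>>and Volition come from the page, all stub behaviour is assumed):
>>
>>  def sensorium():
>>      # receive sensory input from the outside world
>>      return "light"
>>
>>  def emotion(percept):
>>      # stub: an emotional reaction to the sensory input
>>      return "curious" if percept else "neutral"
>>
>>  def think(percept, feeling):
>>      # think a thought that includes an emotional component
>>      return "I sense %s and feel %s" % (percept, feeling)
>>
>>  def volition(thought):
>>      # stub: free will, informed and created by thinking
>>      print("acting on: " + thought)
>>
>>  def main_loop(cycles=3):
>>      for _ in range(cycles):
>>          percept = sensorium()              # Sensorium called before Think
>>          feeling = emotion(percept)         # Emotion after Sensorium...
>>          thought = think(percept, feeling)  # ...but before Think
>>          volition(thought)                  # Volition comes after Think
>>
>>  main_loop()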
>>Even though this hierarchy looks logical, it might not be, and the
>>counterargument is probably just philosophical.
>>Two examples (they may look far-fetched - there are endless more
>>examples of the same idea, I just chose these two to show the scale of
>>the underlying problem):
>>1. Take a simple, single H-atom: with a high probability it will
>>link up with a second H-atom to form an H2-molecule.
>>The question would be - does the 'H' get 'sensory input' and then
>>'think/decide' a 'will' to form an H2-molecule - or does the 'will' to
>>do so exist before it gets the 'sensory input'? Even though it's hard
>>to think of 'H'-atoms as having a 'will', they at least have a 'goal'
>>or something that 'drives' them to link up with other atoms.
>>Describing the 'H'-atom's 'will' would be easy (probably not
>>completely, but at least for 99% of possible cases) - its 'will' is to
>>achieve the most stable condition possible, and linking up with the
>>other 'H'-atom achieves just that.
>>2. Take a human - his 'sensory input' would be 'seeing food'. Does he
>>consume the food because he 'sees' it, or does the prior 'will to
>>survive' act on the 'sensory input'? One shouldn't easily put that
>>question aside, because the 'will' to survive (and the ability to do
>>so) is the oldest (if not the only) concept within any lifeform - any
>>lifeform that doesn't comply with it will be discontinued, not only
>>single lifeforms but whole species. So trying to argue this away with
>>evolution and experience won't work either - go back to the first
>>'lifeforms' and that argument falls apart.
>>So based upon these examples it might be necessary to adjust the
>>hierarchy as follows (a small code sketch of this loop follows the
>>list):
>>1. Will to do 'something' (where 'something' might be near impossible
>>to describe 100%, but 99% might probably suffice)
>>2. Sensory Input
>>3. Thought/Decision
>>4. Motorium
>>5. Goto 1.
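>>As a rough Python sketch (all names and the 'stability' test here are
>>just made up to illustrate the five steps; only the step order comes
>>from the list above), the 'H'-atom case could look like this:
>>
>>  def h_atom_loop(environment, steps=10):
>>      # 1. Will to do 'something': achieve the most stable condition
>>      for _ in range(steps):
>>          neighbour = environment.get("nearby_atom")    # 2. Sensory input
>>          decision = "bond" if neighbour == "H" else "wait"  # 3. Decision
>>          if decision == "bond":                        # 4. Motorium
>>              environment["molecule"] = "H2"
>>              return environment
>>          # 5. Goto 1 (the will never changes in this simple routine)
>>      return environment
>>
>>  print(h_atom_loop({"nearby_atom": "H"}))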
>>So this small routine could be used to 'simulate' the 'H'-atom - but
>>it is missing one big part that is needed; it could be called
>>'evolution'.
>>So to enhance the model more steps are needed (because we need a
>>'workaround' for that missing 1% in the original 'will' - assuming
>>that the whole existence of just such simple 'units' as atom-'wills'
>>resulted in something as complex as the 'human mind' - or even
>>so-called 'consciousness'). A code sketch follows the listing below:
>>0.1. Will to do 'something' (where 'something' might be near
>>impossible to describe 100%, but 99% might probably suffice)
>>0.2. Sensory Input
>>0.3. Thought/Decision
>>0.4. Motorium
>>--1.1 Sensory Input
>>--1.2 Thought/Decision (Does the Sensory Input 1.1 resulting from
>>Motorium 0.4 comply with Will 0.1?)
>>--1.3 Set Will 1.3
>>--1.4 Sensory Input
>>--1.5 Thought/Decision (based on Will 1.3)
>>--1.6 Motorium
>>----2.1 Sensory Input
>>----2.2 Thought/Decision (Does the Sensory Input 2.1 resulting from
>>Motorium 1.6 comply with Will 0.1?)
>>----2.3 Set Will 2.3
>>----2.4 Sensory Input
>>----2.5 Thought/Decision (based on Will 2.3)
>>----2.6 Motorium
>>and so on...
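>>A rough Python sketch of this nested scheme (the compliance test, the
>>sub-will derivation and every name are assumed for illustration; only
>>the step order comes from the listing above):
>>
>>  def nested_loop(decide, complies, derive_subwill, sense, act,
>>                  max_depth=5):
>>      # 0.1-0.4: sense, decide on the original will, act
>>      act(decide(sense()))
>>      for depth in range(1, max_depth + 1):
>>          result = sense()                 # n.1 Sensory input
>>          if complies(result):             # n.2 result of the previous
>>              break                        #     Motorium meets Will 0.1?
>>          decide = derive_subwill(result)  # n.3 Set Will n.3 (new rule)
>>          act(decide(sense()))             # n.4-n.6 sense, decide, act
>>
>>  # tiny usage example: drive a value toward a target of 5
>>  world = {"value": 0}
>>  sense = lambda: world["value"]
>>  act = lambda delta: world.update(value=world["value"] + delta)
>>  complies = lambda v: v == 5                   # Will 0.1: reach 5
>>  derive_subwill = lambda v: (lambda _: 5 - v)  # sub-will: close the gap
>>  nested_loop(lambda v: 1, complies, derive_subwill, sense, act)
>>  print(world)    # {'value': 5}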
>>The second page, 'Theory of Mind', describes a kind of mechanism that
>>would be needed to make an AI, I would say - but it is not usable for
>>describing a 'human' mind in most ways. To start with - even though a
>>'new' brain isn't blank (tabula rasa) - it just contains information
>>that could best be described as 'random' (and even that is not
>>exact!).
>>Also the purely 'logical' nodes the theory uses don't work like the
>>'real' ones - so even if that theory proposes only linear information
>>flow, orthogonal link-up between different node types and even
>>permanent memory from the past to the now - this might be useful to
>>'design' an AI, but it can't be transferred to a BI (biological
>>intelligence).
>>So again - even if this 'Theory of Mind' covers something like 99% of
>>the whole 'thinking' process, I would say it is missing that one last
>>process needed to gain 'consciousness' (or at least the idea of
>>thinking of oneself as 'conscious').
>>Please don't get me wrong - I'm in no way against creating a
>>'conscious' AI - in fact I think solving that 'puzzle' might be a
>>needed step for evolution to take its next step. It is just that I
>>have a hard time formulating exactly where (from my point of view) the
>>'hardest' problems lie in finding a solution to the problem(s) (be it
>>creating an AI or defining what 'consciousness' might be).
>>So bye for now...
>>murx (Marcus Metzler)
> I can't imagine how AI could possibly model the human capacity for
> higher-level thought (conceptual frameworks, cognitive schema, etc.).
The limits of your imagination do not an argument make.
> Psychologists today can't even do it for *human* cognition, where they
> have all the "working models" available to study that they would need.
The availability of the same "working models" did not
enable physicians to produce viable cures for many
centuries. "Availability" isn't sufficient.
> I think AI that models human consciousness is science fiction.
Some science fiction has come true.
> The
> most we can hope for
Bald assertion.
> is AI that is structured to do specific limited
> tasks, and for which the skill level for the performance of these
> tasks can possibly be influenced somewhat by environmental input, but
> all this would have to be designed in by those humans with "true"
> consciousness.
At least until artificial designers are available.
--
<J Q B>