
Model of the Brain?

Marcus Metzler marcus.metzler at schunter.etc.tu-bs.de
Sat Oct 26 06:02:08 EST 2002


"Arthur T. Murray" wrote:

> http://www.scn.org/~mentifex/progman.html is the chapters of a new
> AI textbook based on modeling the brain in accordance with
> http://www.scn.org/~mentifex/theory5.html -- a Theory of Mind.
>

Sorry for intruding uninvited (and maybe not always using scientific or
proper English), but after reading through those two pages I just want
to add some ideas about them. I think those models need some more input
to enhance them.

<cut from first http site>
For example, Sensorium is called before Think,
so that the AI Mind may first receive sensory input and
then think about its sensory input from the outside world.
The stub of the Emotion module is placed after
Sensorium but before Think so that the AI Mind
may experience an emotional reaction to its sensory input
and then think thoughts that include an emotional component.
The stub of Volition (free will) comes after Think
because thinking informs and creates the free will.
<cut end>
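
Just to make the described call order concrete, it could be sketched
roughly like the following Python snippet (the module names follow the
quoted text, but the bodies are only my placeholders, not the actual
code from that page):

# Rough sketch of the calling order described above; the module names
# follow the quoted text, the bodies are only placeholders.

def sensorium():
    # receive sensory input from the outside world
    return "sensory input"

def emotion(percept):
    # attach an emotional reaction to the sensory input
    return {"percept": percept, "feeling": "neutral"}

def think(state):
    # think thoughts that include the emotional component
    return "thought about " + state["percept"] + " (" + state["feeling"] + ")"

def volition(thought):
    # free will, informed and created by thinking
    return "decision based on: " + thought

def mind_cycle():
    percept = sensorium()     # Sensorium is called before Think
    state = emotion(percept)  # Emotion after Sensorium, before Think
    thought = think(state)    # Think
    return volition(thought)  # Volition comes after Think

print(mind_cycle())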

Even though this hierarchy looks logical, it might not be, and the
counterargument is admittedly mostly philosophical.
Two examples (they may look far-fetched, and there are endless other
examples expressing the same idea; I chose these two to show the scale
of the underlying problem):

1. Take a simple, single H atom: with high probability it will link up
with a second H atom to form an H2 molecule.
The question would be: does the 'H' get 'sensory input' and then
'think/decide' a 'will' to form an H2 molecule, or does the 'will' to do
so exist before it gets the 'sensory input'? Even though it is hard to
imagine that H atoms have a 'will', they at least have a 'goal', or
something that 'drives' them to link up with other atoms.
Describing the H atom's 'will' would be easy (probably not completely,
but at least for 99% of possible cases): its 'will' is to reach the most
stable state possible, and linking up with the other H atom achieves
that.

2. Take a human: his 'sensory input' would be 'seeing food'. Does he
consume the food because he 'sees' it, or does the prior 'will to
survive' act on the 'sensory input'? One should not put that question
aside too easily, because the 'will' to survive (and the ability to do
so) is the oldest (if not the only) concept within any lifeform: any
lifeform that does not comply with it will discontinue, not only as
single lifeforms but species-wide. So even trying to argue this away
with evolution and experience won't work once you go back to the first
'lifeforms'.

So based upon these examples it might be necessary to adjust the
hierarchy as follows (a small code sketch follows the list):

1. Will to do 'something' (whereas 'something' might be near impossible
to describe 100%, but 99% will probably suffice)
2. Sensory Input
3. Thought/Decision
4. Motorium
5. Goto 1.
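
Translated into a small, purely illustrative Python sketch (all the
names and the 'H'-atom reading of the 'will' are just my assumptions,
following example 1 above), the adjusted loop might look like this:

# Purely illustrative sketch of the adjusted hierarchy:
# the 'will' exists before any sensory input is processed.

def run_agent(will, sense, think, act, steps=1):
    for _ in range(steps):
        percept = sense()                # 2. Sensory Input
        decision = think(will, percept)  # 3. Thought/Decision
        act(decision)                    # 4. Motorium
        # 5. Goto 1: the will itself persists unchanged across cycles

# The 'H'-atom reading from example 1 (all strings are placeholders):
will = "reach the most stable state possible"
sense = lambda: "a second H atom nearby"
think = lambda w, p: "given the will to " + w + ", bond with " + p
act = print

run_agent(will, sense, think, act)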

So this small routine could be used to 'simulate' the H atom, but it is
missing one big part that is needed; it could be called 'evolution'.

So to enhance the model more steps are needed (because we need a
'workaround' for that missing 1% in the original 'will', assuming that
the existence of such simple 'units' as atom-'wills' eventually resulted
in something as complex as the 'human mind', or even so-called
'consciousness'). A rough sketch of this nested loop follows the
listing.

0.1. Will to do 'something' (whereas 'something' might be near impossible
to describe 100%, but 99% will probably suffice)
0.2. Sensory Input
0.3. Thought/Decision
0.4. Motorium
--1.1 Sensory Input
--1.2 Thought/Decision (Does the Sensory Input 1.1 resulting from
Motorium 0.4 comply with Will 0.1?)
--1.3 Set Will 1.3
--1.4 Sensory Input
--1.5 Thought/Decision (based on Will 1.3)
--1.6 Motorium
----2.1 Sensory Input
----2.2 Thought/Decision (Does the Sensory Input 2.1 resulting from
Motorium 1.6 comply with Will 0.1?)
----2.3 Set Will 2.3
----2.4 Sensory Input
----2.5 Thought/Decision (based on Will 2.3)
----2.6 Motorium
and so on...
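
A rough and again purely illustrative way to wire those nested steps
together is a loop that, after every Motorium step, checks the new
sensory input against the original Will 0.1 and derives a new sub-will
from the mismatch; the helper names here are hypothetical:

# Illustrative sketch of the nested scheme above: after each Motorium
# step the next level checks the resulting sensory input against the
# original Will 0.1 and, if it is not satisfied, sets a new sub-will.

def complies(percept, original_will):
    # x.2: does the new sensory input satisfy the original will?
    return original_will in percept

def derive_subwill(current_will, percept):
    # x.3: set a new, more specific will for this level
    return current_will + ", given that " + percept

def nested_cycle(will_0, sense, think, act, depth=3):
    will = will_0
    for _ in range(depth):
        percept = sense()                     # x.1 / x.4 Sensory Input
        if complies(percept, will_0):         # x.2 check against Will 0.1
            break                             # original will is satisfied
        will = derive_subwill(will, percept)  # x.3 Set Will
        act(think(will, percept))             # x.5 Thought, then x.6 Motorium
        # ...and so on at the next level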


The second page, 'Theory of Mind', describes the kind of mechanism that
would be needed to make an AI, I would say, but in most ways it is not
usable for describing a 'human' mind. To start with, even though a 'new'
brain isn't blank (tabula rasa), it contains information that could best
be described as 'random' (and even that is not exact!).
Also, the purely 'logical' nodes the theory uses don't work like the
'real' ones. So even if that theory proposes only linear information
flow, orthogonal link-up between different node types and even permanent
memory from the past to the present, and this might be useful to
'design' an AI, it can't be transferred to a BI (biological
intelligence).
So again: even if this 'Theory of Mind' covers something like 99% of the
whole 'thinking' process, I would say it is missing that one last
process needed to gain 'consciousness' (or at least the notion of
thinking of oneself as 'conscious').

Please don't get me wrong: I'm in no way against creating a 'conscious'
AI. In fact I think solving that 'puzzle' might be a necessary step for
evolution to take its next step. It is just that I have a hard time
formulating exactly where (from my point of view) the 'hardest' problems
lie in finding a solution to the problem(s), be it creating an AI or
defining what 'consciousness' might be.

So bye for now...

murx (Marcus Metzler)



