
Model of the Brain?

Arthur T. Murray uj797 at victoria.tc.ca
Sat Oct 26 09:10:32 EST 2002


Marcus Metzler <marcus.metzler at schunter.etc.tu-bs.de> wrote on 26 Oct 2002:
>
>"Arthur T. Murray" wrote:
>
>> http://www.scn.org/~mentifex/progman.html is the chapters of a new
>> AI textbook based on modeling the brain in accordance with
>> http://www.scn.org/~mentifex/theory5.html -- a Theory of Mind.
>>
>
> Sorry for intruding unasked

Mentifex/ATM:
On Usenet there is no "intruding unasked" and everyone is welcome.

>                              (and maybe not always using scientific
> and proper English), but after reading through
> those two pages I just want to add some ideas around them.
> But I think those models need some more input to enhance them.
>
> <cut from first http site>
> For example, Sensorium is called before Think,
> so that the AI Mind may first receive sensory input and
> then think about its sensory input from the outside world.
> The stub of the Emotion module is placed after
> Sensorium but before Think so that the AI Mind
> may experience an emotional reaction to its sensory input
> and then think thoughts that include an emotional component.
> The stub of Volition (free will) comes after Think
> because thinking informs and creates the free will.
> <cut end>
>
> Even though this hierarchy looks logical, it might not be.

ATM:
It is more of a sequence (German "Reihenfolge") than a hierarchy,
because, until we have massively parallel ("maspar") programming
and maspar hardware, we must simulate maspar with sequential
programming in the John von Neumann tradition.
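
Just to make the "Reihenfolge" concrete, here is a throwaway Python
sketch -- stub code invented for this discussion, not the actual Mind
code of progman.html -- of the sequential calling order Sensorium,
Emotion, Think, Volition, looped so as to simulate maspar operation
on serial hardware:

  # Stub mind-modules; each is only a placeholder for this discussion.
  def sensorium(state, percept):
      # receive sensory input from the outside world
      state["percept"] = percept

  def emotion(state):
      # react emotionally to the latest sensory input
      state["emotion"] = "neutral"

  def think(state):
      # think a thought that includes the emotional component
      state["thought"] = "%s (%s)" % (state["percept"], state["emotion"])

  def volition(state):
      # thinking informs and creates the free will
      state["decision"] = "attend to " + state["thought"]
      print("willed:", state["decision"])

  # Sequential loop standing in for massively parallel operation.
  state = {}
  for percept in ["light", "sound", "touch"]:
      sensorium(state, percept)
      emotion(state)
      think(state)
      volition(state)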

> And the counterargument is probably just philosophical.
> Two examples (though they may look far-fetched - there are
> endless more examples resembling the same idea; I just
> chose them this way to show the scale of the underlying problem):
>
> 1. Taking a simple and single H-atom, it will - with a high
> probability - link up with a second H-atom to form an H2-molecule.
> The question would be - does the 'H' get 'sensory input' and
> then 'think/decide' a 'will' to form an H2-molecule - or does
> the 'will' to do so exist before it gets the 'sensory input'?
ATM:
We may perhaps think of an AI as abstracting information from its
environment and thus forming a will "orthogonal" to whatever
fundamental events are occurring in the life of the AI organism.
An atom, I think, does not "enjoy the luxury" of doing
anything but follow the fundamental laws of Nature.  But
IANAP -- I am not a physicist (like "I am not a lawyer").

> Even though it's hard to think 'H'-atoms might have a 'will',
> they at least have a 'goal' or something that 'drives' them
> to link up with other atoms.  Describing the 'H'-atom's 'will'
> would be easy (probably not complete, but enough for 99%
> of possible cases) - its 'will' is to achieve the most stable
> condition possible. So linking up with the other 'H'-atom
> will achieve it.
ATM:
So the condition of the two "H" atoms changes over time
as the universe runs down towards its "Waermetod" ("heat death" --
a word originally from the German, no?), but the linkage or
fusion occurs willy-nilly, not with an act of will,
except perhaps the "divine will" of a Creator.
>
> 2. Taking a human - his 'sensory input' would be 'seeing food';
> does he consume the food because he 'sees' it - or does the prior
> 'will to survive' act on the 'sensory input'?
ATM:
If the 'will to survive' is a fundamental drive, it may perhaps
be a fundamental component in the computational algorithm for
linking (queueing) up motor options with rational, semi-rational,
and downright irrational goals.

http://mind.sourceforge.net/volition.html is the Mentifex URL for will.

> One shouldn't easily put that question aside, because the 'will'
> to survive (and the ability to do this) is the oldest (if not
> the only) concept within any lifeform - any lifeform that doesn't
> comply with this will discontinue - not only single lifeforms
> but species-wide. So even trying to talk one out of that argument
> with evolution and experience - going back to the first 'lifeforms' -
> this argument won't work.
ATM:
Nevertheless a rational mind may subconsciously assign "weights"
or levels of value to all its drives and all its plans, thus
forming part of the computational algorithm for will.
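
As a toy illustration of such weighting -- the drive names, weights,
and scores below are invented for this example and do not come from
volition.html -- a short Python sketch:

  # Invented drive weights; one part of a possible algorithm for will.
  drive_weights = {"survive": 0.9, "curiosity": 0.4, "comfort": 0.2}

  # How well each candidate motor option is judged to satisfy each drive.
  options = {
      "eat the food":    {"survive": 0.8, "comfort": 0.5},
      "ignore the food": {"curiosity": 0.3},
  }

  def value(scores):
      # weighted sum over all drives
      return sum(drive_weights.get(d, 0.0) * s for d, s in scores.items())

  willed_choice = max(options, key=lambda o: value(options[o]))
  print(willed_choice)   # -> "eat the food" under these invented numbers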

>
> So based upon these examples it might be needed to adjust the
> hierarchy as follows:
>
> 1. Will to do 'something' (whereas 'something' might be near
> impossible to describe 100%, but probably 99% might suffice)
> 2. Sensory Input
> 3. Thought/Decision
> 4. Motorium
> 5. Goto 1.
ATM:
If Edsger Dijkstra had not died a few months ago, he might
object to that final "goto" statement :)
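
In his honor, the same five steps can be written as a structured loop
with no "goto" at all; here is a Python sketch whose functions are
mere placeholders for Marcus's steps 1 through 4:

  def has_will(agent):        # 1. will to do 'something'
      return agent.setdefault("alive", True)

  def sense(agent):           # 2. sensory input
      agent["percept"] = "food"

  def decide(agent):          # 3. thought/decision
      agent["decision"] = "approach the " + agent["percept"]

  def act(agent):             # 4. motorium
      print(agent["decision"])

  agent = {}
  cycles = 0
  while has_will(agent) and cycles < 3:   # structured loop, no "goto 1."
      sense(agent)
      decide(agent)
      act(agent)
      cycles += 1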
>
> So this small routine could be used to 'simulate' the 'H'-atom -
> but this routine is missing one big part that is needed;
> it could be called 'evolution'.
ATM:
Likewise these models of the brain-mind are still evolving,
and you, kind Sir, or any other discussant here might well
write up your own ideas on a theory of mind and post them
on a Web page for reference during these discussions.

>
> So to enhance the model there are more steps needed (because
> we need to have a 'workaround' for that missing 1% in the original
> 'will' - assuming that the whole existence of such simple
> 'units' as atom-'wills' resulted in something complex like
> the 'human mind' - or even so-called 'consciousness').
ATM:
http://mind.sourceforge.net/conscius.html -- "Consciousness."
>
> 0.1. Will to do 'something' (whereas 'something' might be near
> impossible to describe 100%, but probably 99% might suffice)
> 0.2. Sensory Input
> 0.3. Thought/Decision
> 0.4. Motorium
> --1.1 Sensory Input
> --1.2 Thought/Decision (Does the Sensory Input 1.1 resulting
> from Motorium 0.4 comply with Will 0.1?)

ATM:
If the "Will 0.1" is an instinctive drive, then it is not a
question of compliance but rather of causation, i.e., the
drive and the sensory-input-perception of a way to fulfill
the drive, constitute together some main ingredients into
the computational algorithm of arriving at a willed decision.

> --1.3 Set Will 1.3
> --1.4 Sensory Input
> --1.5 Thought/Decision (based on Will 1.3)
> --1.6 Motorium
> ----2.1 Sensory Input
> ----2.2 Thought/Decision (Does the Sensory Input 2.1 resulting
> from Motorium 1.6 comply with Will 0.1?)
> ----2.3 Set Will 2.3
> ----2.4 Sensory Input
> ----2.5 Thought/Decision (based on Will 2.3)
> ----2.6 Motorium
> and so on...
ATM:
Oh, so it's a cycle, using the anti-Dijkstra "goto" loop.
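
That cycle can be sketched in Python as a loop in which each pass
checks the newest sensory input against the root Will 0.1 and then
sets a fresh sub-will; the 'complies' test and every name below are
placeholders of my own invention, not taken from Marcus's outline
or from the Mentifex code:

  root_will = "stable condition"               # Will 0.1

  def sense(level):                            # n.1 / n.4 Sensory Input
      return "feedback from motorium at level %d" % level

  def complies(percept, will):                 # n.2 Thought/Decision
      return will in percept                   # placeholder test

  def set_subwill(percept, will):              # n.3 Set Will n.3
      return "close the gap between '%s' and '%s'" % (percept, will)

  def motorium(subwill):                       # n.6 Motorium
      print("acting on:", subwill)

  current_will = root_will
  for level in range(3):                       # levels 0, 1, 2, "and so on..."
      percept = sense(level)
      if not complies(percept, root_will):     # compare with Will 0.1
          current_will = set_subwill(percept, root_will)
      motorium(current_will)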
>
>
> The second page 'Theory of Mind' describes a kind of mechanism
> that would be needed to make an AI, I would say - but it is not
> usable for describing a 'human' mind in most ways.
ATM:
Perhaps so, because the theory is so abstract, but we have to
start somewhere.  Meanwhile, almost every brain-mind diagram of
http://mind.sourceforge.net/progman.html is uniquely descriptive
of how the Mentifex AI theory is being turned into software.
The "hybrid" brain-mind diagrams mix software modules and
massive cortical flows across the mindgrid -- all in a way
calculated to show how the AI Mind works, as clearly as possible.

> To start with - even though a 'new' brain isn't blank
> (tabula rasa) - it just contains information that could
> best be described as 'random' (but even that is not exactly right!).
ATM:
However much Nature imparts to the human brain before birth,
we may probably safely assume that it includes no experiences.

> Also the pure 'logical' nodes the theory uses don't work
> like the 'real' ones - so even if that theory proposes only
> linear information flow, orthogonal linkup between different
> node types and even permanent memory from the past to the now -
> this might be useful to 'design' an AI - it can't be transferred
> to a BI (biological intelligence).
ATM:
However, if we tried to use any other than "orthogonal"
information flow in a mindgrid, the situation would become
hopelessly snarled and messy.

> So again - even if this 'Theory of Mind' covers like 99%
> of the whole 'thinking' process, I would say it is missing
> that one last process to gain 'consciousness' (or at least
> the idea of thinking of oneself as 'conscious').
ATM:
We have no argument (i.e., dispute) there :)
>
> Please don't get me wrong - I'm in no way against creating
> a 'conscious' AI - in fact I think solving that 'puzzle'
> might be a needed step for evolution to take its next step,
> it is just - I have a hard time formulating exactly
> where (from my point of view) the 'hardest' problems lie to
> find a solution to the problem(s) (be it creating an AI or
> defining what 'consciousness' might be).
ATM:
http://www.seedai.e-mind.org lists and tracks AI projects.

http://mentifex.futureai.com may update the Mentifex AI textbook.
>
> So bye for now...
>
> murx (Marcus Metzler)

Thank you very much for the discussion, now open for other views.

Sincerely,

Arthur T. Murray  http://www.scn.org/~mentifex/ 


