A short while ago I made a request for information on how neurones might
organise themselves during learning. Amongst the interesting responses to the
original post (see below) were some references to long-term potentiation and
related phenomena. It was recommended that I look up the terms LTP and LTD
(long-term potentiation and depression) in a computer-based medical index.
However, the indexes I could find on the Net were no use: the software either
would not work for web searches or was restricted to local or licensed users.
As I work in AI, it is unlikely and uneconomic for my department to get access
to Medlink (I think that was its name). Could someone who knows about these
concepts, or who can do a search for references, please help me? I include the
original post for information:
> Hi! As a Cognitive Scientist/AI/Alife/Neural Network sort of person
> (well, that's me labelled), I'd like to avoid reinventing Mother Nature's
> wheel. I'd like your suggestions for sources, or direct explanations, of
> the following concepts in brain structure and development:
>> What are the processes involved in the early stages of brain development,
> which are essentially the hard-wiring of nerve connections between
> different specialised areas of the brain (e.g. eye nerves to vision
> centres)?
>> How is this achieved? More importantly, how did this pre-determined
> structure evolve? Is it genetic, or something else?
>> What is the mechanism involved in determining nerve connections (in
> learning and memory, if I'm not being too naive about our current
> knowledge of how the brain works)? What chemical or electrical signal
> encourages a particular neurone to want to send feelers along to
> connect up with another? What is it that makes one nerve more
> 'interesting' than another? Is there such a process, or is it all part
> of the hard-wiring phase?
>> I'm interested because my gut tells me that dynamic neural networks
> (DNNs) are potentially more powerful than the static, forward-flow kind
> currently in favour. A hunch tells me that the 'learning' and
> modification of such a network should be self-modifying, rather than
> having modifications imposed upon it by some outside means, as happens
> in neural nets at present. I'm looking for how the structures evolve in
> the first place and how they modify themselves (chemical feedback?)
> during maintenance. My ultimate aim is to build a bloody good model of
> learning and symbol fixing using a fairly low-level representation
> mechanism (an ambitious objective, I know). Thank you for your time.
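
To make concrete what I mean by "self-modifying" above, here is a rough toy
sketch of my own (illustrative only; nothing in it comes from the literature
I'm after). The weights change purely through a local, Hebbian-style rule
(Oja's variant), so each synapse strengthens or weakens from the activity of
the two neurones it joins, rather than from an error signal imposed from
outside as in back-propagation. All names and constants are made up.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))    # synaptic strengths

def step(x, W, lr=0.05):
    """One self-modifying update: activity comes in, new weights come out."""
    y = np.tanh(W @ x)                           # post-synaptic activity
    # Local rule: strengthen when pre- and post-synaptic units are active
    # together (LTP-like), with a decay term that weakens synapses and keeps
    # the weights bounded (loosely LTD-like). This is Oja's rule.
    dW = lr * (np.outer(y, x) - (y[:, None] ** 2) * W)
    return y, W + dW

for _ in range(100):
    x = rng.normal(size=n_in)                    # stand-in for sensory input
    y, W = step(x, W)

The point of the sketch is only the contrast: there is no global teacher or
outside procedure rewriting W; every change is computed from information
available at the synapse itself.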
Yours, Mike
--
Email: mreddy at comp.glamorgan.ac.uk CU-Seeme: 193.63.130.40 (On request)
Web: http://www.comp.glam.ac.uk/pages/staff/mreddy/mreddy.html
Snail: J228, Dept. of Computer Studies, University of Glamorgan, Pontypridd,
Mid Glamorgan. CF37 1DL Wales, UK. +44 1443 482 240 Fax: +44 1443 482 715