mats_trash at hotmail.com (mat) wrote in message news:<43525ce3.0202040145.410c6c40 at posting.google.com>...
> "yan king yin" <y.k.y@(dont spam)lycos.com> wrote in message news:<Te578.7202$Lv.819466 at news.xtra.co.nz>...
> >
> I haven't read much on NNs (or SOMs)
> but as far as I can tell 'supervised' and 'unsupervised' do not really
> hold for SOMs. You start off with initial values for all nodes and
> then as training data is input the nodes 'compete' for the data and
> are modified by it. There is no specific map that is to be achieved;
> the success of the SOM is judged by whether it can correctly classify
> a set of test data.
I think this is right. From my reading, SOMs do not require
"supervision" per se. However, they do require that specific "learning
rule" or algorithm for weight adjustment be provided before training.
Finding the best rule would be a form of supervision. However, if
biological NNs work like SOMs, then this supervision would have been
provided by billions of years of natural selection for the best set of
rules.
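Just to pin down what "providing the rule in advance" means, here is a
bare-bones SOM training loop in Python (a toy of my own, with standard
Kohonen-style updates, not anyone's published model). Note that no
target map or teacher signal appears anywhere in it:

import numpy as np

def train_som(data, n_units, lr=0.1, sigma=2.0, epochs=20):
    """Unsupervised SOM training: the update rule is fixed before
    any data arrive, and there are no labels or target outputs."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(n_units, data.shape[1]))
    positions = np.arange(n_units)       # a 1-D map, for simplicity
    for _ in range(epochs):
        for x in data:
            # units "compete" for the input: closest weight vector wins
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # the winner and its map-neighbors move toward the input
            h = np.exp(-(positions - winner) ** 2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

The only "supervision" anywhere is in the choice of lr, sigma, and the
rule itself, which is exactly the point above: for biological networks,
natural selection could be what made those choices.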
> > As to iteration, there are many feedback mechanisms evident in the
> > nervous system
I agree with this too. For example, one part of the brain thought to
be critical for consolidation of short-term memory is the hippocampal
formation, together with the associated entorhinal cortex. The
iterative structure looks like this:
1) Entorhinal cortex (EC) gets pre-processed "snapshots" of the
sensory world from other parts of cortex.
2) EC sends an output of its own processing to the hippocampus.
3) Hippocampus does some processing, and sends its output back to EC.
4) What happens next is not entirely clear, but one good possibility
is that EC "compares" the before and after versions of the snapshot,
then sends some sort of error signal to the hippocampus. This error
signal could be used to adjust connection weights prior to the next
round of processing (a toy version of the loop is sketched below).
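To make the loop concrete (only step 4 is really speculative), here is
a cartoon of it in Python. Everything about it is my own
simplification: the hippocampus is collapsed into a single linear map,
and the "comparison" is just a vector difference:

import numpy as np

def consolidation_loop(snapshot, hc_weights, lr=0.05, rounds=10,
                       tol=1e-3):
    """Cartoon of steps 1-4: EC holds the 'before' snapshot, the
    hippocampus transforms it, and the mismatch is fed back as an
    error signal that adjusts hippocampal weights each round."""
    before = snapshot                               # steps 1-2: EC output
    for _ in range(rounds):
        after = hc_weights @ before                 # step 3: HC processing
        error = before - after                      # step 4: EC "compares"
        if np.linalg.norm(error) < tol:
            break                                   # snapshot consolidated
        hc_weights += lr * np.outer(error, before)  # adjust for next round
    return hc_weights

For small lr this amounts to gradient descent on the mismatch, so the
weights settle into a state that reproduces the snapshot.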
Interestingly, the hippocampus has -two- areas with the necessary
recurrent synaptic circuitry to perform an autoassociative memory
function: dentate gyrus -and- CA3. These two regions both have a
(relatively) high degree of recurrent excitation, and these excitatory
synapses are plastic (their weights are adjustable depending on how
they are being used). Further, these two regions are -reciprocally-
coupled to each other by excitatory synapses. This layout sounds a lot
like the "Two NN" system Mat described in an earlier post.
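For anyone who hasn't run into the term, here is the bare-bones version
of an autoassociative memory (a Hopfield-flavored toy of my own; real
dentate/CA3 circuitry is of course far messier). The point is just that
plastic recurrent synapses let a partial cue complete a stored pattern:

import numpy as np

def store(patterns):
    """Hebbian storage: recurrent weights are the summed outer
    products of the stored +/-1 patterns, self-connections removed."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(w, 0)
    return w

def recall(w, cue, steps=10):
    """Pattern completion: run the recurrent dynamics from a noisy
    or partial cue until the network state settles."""
    state = cue.copy()
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1          # break ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

The reciprocal dentate-CA3 coupling would be a second such network fed
back onto the first, which this toy ignores.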
Also interestingly, SOMs were developed by the same guy who developed
the theory of autoassociative networks, Teuvo Kohonen. Autoassociative
network theory is routinely used to model hippocampal memory
processing. However, a few years ago, a guy named Dante Chialvo,
working in the McNaughton lab at the time, modeled hippocampal
place-field development (a form of learning) using SOMs. Last time I
checked, this work had not been published, but I saw his Neuroscience
poster.
The core of SOMs is that weight adjustment depends on the layout of
the map: units that are "near" each other have their weights adjusted
in the same direction, often via a sort of winner-take-all mechanism.
Chialvo's idea was that GABAergic interneurons in the hippocampus
mediate this neighborhood-style weight adjustment, since each
interneuron influences only the pyramidal cells within a restricted
spatial range around it. In fact, he was able to train an SOM on spike
data from interneurons in behaving rats, and the SOM was then able to
predict the position of the rat to within a few centimeters.
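I can only guess at how the position read-out worked, so take this as
nothing but my guess at the flavor of it (all names hypothetical):
during training you record where the rat was whenever a given unit won,
and at test time the winning unit's learned place is the prediction.

import numpy as np

def decode_position(weights, unit_places, spike_vector):
    """Hypothetical read-out: find the unit whose weight vector best
    matches the current spike pattern and report the place associated
    with it (e.g. the rat's mean position while that unit was winning
    during training)."""
    winner = np.argmin(np.linalg.norm(weights - spike_vector, axis=1))
    return unit_places[winner]     # an (x, y) position in centimeters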
Like I said, to my knowledge, this has not yet been published. I asked
for a preprint, and got a whole bunch of papers on other NN work he
did, but nothing on this particular study.
Cheers,
Matt
> > ------------------------- Original Message -----------------------------
> > mat wrote:
> >
> > The memory functions of the brain are certainly impressive, but of
> > course not infallible. An interesting point is that we often seem to
> > know when we have recollected a name or a fact (for example)
> > incorrectly. By implication, this must mean either that we have the
> > correct memory somewhere with which to compare it (in which case,
> > why don't we just recall that?) or that there is some other
> > error-checking system. Just wondering if anyone had any ideas.
> >
> > One possible mechanism through which you get some of these properties
> > would be to use two or more neural network-type systems (particularly
> > the self-organizing map (SOM) paradigm). When a memory was to be
> > accessed, the 'request' would be sent through the first SOM. Some
> > output from this first SOM would then be input to a second SOM. If
> > the output from the second SOM matched that of the first, the memory
> > would be considered 'correct'. However, if not, the output of the
> > second SOM would again be input to the first SOM. Eventually the
> > output and input of each SOM would be the same, and this stable
> > pattern would then be considered correct (though of course it might
> > still actually be incorrect). If the outputs and inputs never
> > converged, you would have 'forgotten'. The process is similar to the
> > iterative formulas for finding equation roots, where you use the
> > output as the subsequent input until there is no change (to a
> > defined degree of accuracy).
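P.S. For concreteness, here is mat's two-network scheme above written
out as fixed-point iteration (again my own cartoon in Python; the two
"SOMs" are stand-ins for any pair of trained maps f and g):

import numpy as np

def two_net_recall(f, g, request, max_iters=50, tol=1e-6):
    """Send the request through net f, feed its output to net g, and
    keep cycling until input and output agree (memory accepted as
    correct) or we give up (the memory is 'forgotten')."""
    x = request
    for _ in range(max_iters):
        x_next = g(f(x))
        if np.linalg.norm(x_next - x) < tol:
            return x_next, True    # converged: accepted as correct
        x = x_next
    return x, False                # never converged: 'forgotten'

# Example with two random maps, scaled down so the loop contracts and
# therefore converges -- exactly like the root-finding analogy:
rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
f = lambda x: 0.1 * np.tanh(A @ x)
g = lambda y: 0.1 * np.tanh(B @ y)
recalled, converged = two_net_recall(f, g, rng.normal(size=8))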