>> I haven't read much on NNs (or SOMs)
>> but as far as I can tell 'supervised' and 'unsupervised' do not really
>> hold for SOMs. You start off with initial values for all nodes and
>> then as training data is input the nodes 'compete' for the data and
>> are modified by it. There is no specific map that is to be achieved,
>> the success of the SOM is judged by whether it can correctly classify
>> a set of test data.
> I think this is right. From my reading, SOMs do not require
> "supervision" per se. However, they do require that a specific "learning
> rule" or algorithm for weight adjustment be provided before training.
> Finding the best rule would be a form of supervision. However, if
> biological NNs work like SOMs, then this supervision would have been
> provided by billions of years of natural selection for the best set of
> rules.
My books have just arrived and I have a better understanding of SOMs
now =) From what I read, SOMs extract the "principal components"
of a set of data that is presented to the network repeatedly over a
period of time (the training). The essence of this process is one of
*abstraction*. Upon presenting 10 cows, an SOM will respond with
the quintessential "cow", but is that what memory does? Memory is
more like this: presenting a cow evokes the memory of a farm scene
from the past, which suggests an associative network that is
excitatory in nature (versus inhibitory, as in winner-takes-all).
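Just to make the abstraction point concrete, here is a toy sketch in
Python (numpy) of the basic competitive update. The four-component
"cow" feature vectors are invented, and a real SOM would shrink the
learning rate and neighbourhood over time, which I've skipped:

    import numpy as np

    rng = np.random.default_rng(0)

    # Ten "cows": noisy variations on one underlying prototype vector.
    prototype = np.array([1.0, 0.2, 0.8, 0.5])
    cows = prototype + 0.1 * rng.standard_normal((10, 4))

    # A tiny 1-D map of 5 nodes with random initial weights.
    weights = rng.random((5, 4))

    for _ in range(50):                  # repeated presentation = training
        for x in cows:
            # Competition: the node closest to the input wins.
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Cooperation: the winner and its neighbours move toward the input.
            for i in range(len(weights)):
                influence = np.exp(-((i - winner) ** 2) / 2.0)
                weights[i] += 0.3 * influence * (x - weights[i])

    # The winning node's weight vector ends up near the "quintessential
    # cow", i.e. roughly the mean of the examples that were presented.
    winner = np.argmin(np.linalg.norm(weights - cows.mean(axis=0), axis=1))
    print(weights[winner])
    print(cows.mean(axis=0))

The point is just that no single presented cow is stored; what the map
keeps is the average of what it was shown.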
>> As to iteration, there are many feedback mechanisms evident in the
>> nervous system
> I agree with this too. For example, one area in the brain thought to
> be critical for consolidation of short-term memory is the hippocampal
> formation and associated entorhinal cortex. The iterative structure
> looks like this:
You should be careful here: consolidation is different from recall, and
the hippocampus is required for consolidation but not for recall. There
is also the additional complication that hippocampal lesions usually
result in a few years of retrograde amnesia as well.
> 1) Entorhinal cortex (EC) gets pre-processed "snapshots" of the
> sensory world from other parts of cortex.
> 2) EC sends an output of its own processing to the hippocampus.
> 3) Hippocampus does some processing, and sends its output back to EC.
> 4) What happens next is not entirely clear, but one good possibility
> is that EC "compares" the before and after versions of the snapshot,
> then sends some sort of error signal to the hippocampus. This error
> signal could be used to adjust connection weights prior to the next
> round of processing.
I suppose you're talking about memory recall here, but the hippocampus
is not involved in recall of long-term memories. Though you may argue
that this is the recall mechanism for recent memories that are still
stored in the hippocampus.
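That said, the "compare the snapshots and send an error signal" idea in
point 4 above is easy to sketch. This is purely a toy in Python: a single
weight matrix standing in for hippocampal synapses and a delta-rule update
standing in for whatever the plasticity actually is, with nothing
anatomical about it:

    import numpy as np

    rng = np.random.default_rng(1)

    snapshot = rng.random(6)        # a pre-processed "snapshot" arriving in EC
    W = 0.1 * rng.random((6, 6))    # stand-in for plastic hippocampal synapses
    lr = 0.1

    for _ in range(200):
        processed = W @ snapshot             # hippocampus processes EC's output
        error = snapshot - processed         # EC compares before/after versions
        W += lr * np.outer(error, snapshot)  # error signal adjusts the weights

    print(np.abs(snapshot - W @ snapshot).max())   # the mismatch is now tiny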
Another simplified schematic pathway is as follows: cortex -> EC ->
dentate gyrus -> CA3 -> CA1 -> subiculum -> EC -> cortex, where the CA
fields are the hippocampus proper. So it looks like a loop and memory
might pass through
this loop to be stored in the cortex. It is also possible that memory is
processed in other parts of the cortex and the hippocampus only
mediates the consolidation process.
I think the central question is where and how memory is stored
in the cortex. Then we can combine that with the information about the
hippocampus and see how it mediates consolidation.
> Interestingly, the hippocampus has -two- areas with the necessary
> recurrent synaptic circuitry to perform an autoassociative memory
> function: dentate gyrus -and- CA3. These two regions both have a
> (relatively) high degree of recurrent excitation, and these excitatory
> synapses are plastic (their weights are adjustable depending on how
> they are being used). Further, these two regions are -reciprocally-
> coupled to each other by excitatory synapses. This layout sounds a lot
> like the "Two NN" system Mat described in an earlier post.
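For anyone who hasn't run into autoassociative nets before, here is a
bare-bones Hopfield-style sketch in Python of what recurrent excitation
plus plastic synapses buys you (not a claim about how dentate gyrus or
CA3 actually do it): a degraded cue gets completed back to the stored
pattern.

    import numpy as np

    # Store one binary (+1/-1) pattern with a Hebbian outer-product rule.
    pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0)                 # no self-connections

    # Cue the network with a degraded version (two bits flipped).
    cue = pattern.copy()
    cue[0] *= -1
    cue[3] *= -1

    state = cue.astype(float)
    for _ in range(5):                     # recurrent dynamics settle
        state = np.sign(W @ state)

    print(np.array_equal(state, pattern))  # True: the pattern was completed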
> Also interestingly, SOMs were developed by the same guy that developed
> the theory of autoassociative networks, Teuvo Kohonen. Autoassociative
> network theory is routinely used to model hippocampal memory
> processing. However a few years ago, a guy named Dante Chialvo working
> in the McNaughton lab at the time, modeled hippocampal place-field
> development (a form of learning) using SOMs. Last time I checked, this
> work had not been published, but I saw his Neuroscience poster.
This sounds similar to the idea that the hippocampus forms a "cognitive
map" of the environment.
> The core of SOMs is that you adjust weights by comparing neighboring
> units. Units that are "near" each other have their weights adjusted in
> the same direction, often via a sort of winner-take-all mechanism.
> Chialvo's idea was that local GABAergic interneurons in hippocampus
> mediated the local weight adjustment by virtue of having a local
> influence on neighboring pyramidal cells within a restricted spatial
> range. In fact, he was able to train an SOM on spike data from
> interneurons in behaving rats, and the SOM was then able to predict
> the position of the rat to within a few centimeters.
> Like I said, to my knowledge, this has not yet been published. I asked
> for a preprint, and got a whole bunch of papers on other NN work he
> did, but nothing on this particular study.
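I obviously don't know the details of Chialvo's setup, so the following
is just my guess at the general scheme: train an SOM on spike-count
vectors, tag each map node with the average position at which it wins,
then decode a new spike vector by reading out the winning node's tag.
All the data below are invented:

    import numpy as np

    rng = np.random.default_rng(2)

    # Invented data: spike counts from 8 cells, each vector recorded while
    # the rat sat at a known (x, y) position; each cell prefers a location.
    positions = rng.random((500, 2))
    tuning = rng.random((8, 2))
    dist = np.linalg.norm(positions[:, None, :] - tuning[None, :, :], axis=2)
    spikes = rng.poisson(10 * np.exp(-dist ** 2 / 0.05)).astype(float)

    # Train a small 1-D SOM on the spike vectors (same toy update as above).
    weights = rng.random((25, 8)) * spikes.max()
    nodes = np.arange(25)
    for lr, radius in [(0.5, 4.0), (0.2, 2.0), (0.05, 1.0)]:
        for x in spikes:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            h = np.exp(-((nodes - winner) ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)

    # Tag each map node with the average position at which it wins...
    winners = np.array([np.argmin(np.linalg.norm(weights - x, axis=1))
                        for x in spikes])
    node_pos = np.array([positions[winners == i].mean(axis=0)
                         if np.any(winners == i) else [np.nan, np.nan]
                         for i in range(25)])

    # ...then decode a spike vector by reading out the winning node's tag.
    test = 100
    decoded = node_pos[winners[test]]
    print(decoded, positions[test])        # decoded vs. true position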
> > ------------------------- Original Message -----------------------------
> > mat wrote:
> >
> > The memory functions of the brain are certainly impressive but of
> > course not infallible. An interesting point is that we often seem to
> > know when we have recollected a name or a fact (for example)
> > incorrectly. By implication this must mean we either have the correct
> > memory somewhere with which to compare it (in which case why don't we
> > just recall that?) or that there is another error-checking system of
> > some sort. Just wondering if anyone had any ideas.
> >
> > One possible mechanism through which you get some of these properties
> > would be to use two or more neural network-type systems (particularly
> > the self-organizing map (SOM) paradigm). When a memory was to be
> > accessed then the 'request' would be sent through the first SOM. Some
> > output from this first SOM would then be input to a second SOM. If
> > the output from the second and first matched then the memory would be
> > considered 'correct'. However, if not then the output of the second
> > SOM would again be input to the first SOM. Eventually you would end
> > up in a state where the output and input to each SOM were the same, and
> > this would then be considered correct (of course it still might actually be
> > incorrect). If the outputs and inputs never converged then you would
> > have 'forgotten'. The process is similar to the iterative formulas
> > for finding equation roots where you use the output as subsequent
> > input until there is no change (to a defined degree of accuracy).
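For what it's worth, the convergence test described here is essentially
a fixed-point iteration. If you strip out the SOMs entirely and replace
them with look-up tables (invented associations, obviously), the control
flow looks something like:

    # Toy version of the two-network check: net_a and net_b stand in for
    # the two SOMs (here just look-up tables), and we iterate until the
    # "memory" stops changing or we give up and call it forgotten.
    def recall(request, net_a, net_b, max_rounds=10):
        current = request
        for _ in range(max_rounds):
            out_a = net_a.get(current)
            out_b = net_b.get(out_a)
            if out_b == out_a:           # the two nets agree: accept the memory
                return out_a
            current = out_b              # otherwise feed the result back in
        return None                      # never converged: "forgotten"

    # Invented associations: "barn" round-trips consistently, "???" never settles.
    net_a = {"cow": "barn", "barn": "barn", "tractor": "???"}
    net_b = {"barn": "barn", "???": "tractor"}

    print(recall("cow", net_a, net_b))       # -> barn
    print(recall("tractor", net_a, net_b))   # -> None ("forgotten")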