> There was nothing in here so I thought I'd write something so that
> this file doesn't feel lonely.
Indeed, what a shame that there is no discussion on such an
interesting topic! How about a discussion of the computational
aspects of neural mechanisms? There are three elements of neural
computation that seem to differ from artificial computation:
distributiveness, analog computation, and feedback.
Distributiveness is the way that neurons tend to branch out so
abundantly, receiving input from and sending output to thousands of
other cells. This is very different from computer systems, but we are
beginning to understand the significance of parallel distributed
processing, and how it affords fault tolerance and robust
representations.
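To make the fault tolerance point concrete, here is a toy sketch in
Python (entirely my own illustration with made-up numbers, not a
model of any real circuit). One quantity is smeared across a
thousand noisy units, and the population average survives the death
of half of them:

    import random

    def encode(value, n_units=1000, noise=0.2):
        # Smear one quantity across many noisy units.
        return [value + random.gauss(0.0, noise) for _ in range(n_units)]

    def decode(units):
        # Read the quantity back as the population average.
        return sum(units) / len(units)

    cells = encode(0.7)
    survivors = random.sample(cells, len(cells) // 2)   # half the cells die
    print(round(decode(cells), 3), round(decode(survivors), 3))  # both near 0.7

No single unit matters; the representation degrades gracefully
rather than failing outright.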
Analog computation: the frequency of spiking is an analog quantity,
despite the fact that the individual spikes are essentially binary in
nature, and of course non-spiking neurons are completely analog.
Analog computation was largely abandoned by man after a brief
spate of popularity in the 50's and 60's, so why does the brain use
it? The reason we abandoned it was its complexity and chaotic
tendencies. The reason the brain uses it is its complexity and
chaotic tendencies. I use, of course, the mathematical meaning of
the word chaos, i.e. any signal that is neither periodic nor
completely random.
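Here is a minimal sketch of a rate code in Python (a toy, not a
biophysical model; the 40 Hz figure and the time step are
arbitrary). Each spike is binary, yet the spike frequency carries
an analog quantity:

    import random

    def spike_train(rate_hz, n_steps=100000, dt=0.001):
        # Binary spikes: at each time step a spike fires with
        # probability rate_hz * dt.
        return [1 if random.random() < rate_hz * dt else 0
                for _ in range(n_steps)]

    def firing_rate(train, dt=0.001):
        # Recover the analog quantity as spikes per second.
        return sum(train) / (len(train) * dt)

    train = spike_train(40.0)     # encode "40" as a 40 Hz spike train
    print(firing_rate(train))     # decodes to roughly 40.0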
Feedback: For years it was a mystery why the pathway from the lateral
geniculate nucleus (first stage in the visual pathway) to the primary
visual cortex (second stage) is actually SMALLER than the reciprocal
pathway from the visual cortex BACK to the lateral geniculate! Indeed,
throughout the brain we see multiple feedback pathways. What is the
significance of these backwards connections? Grossberg proposes that
the feedback allows for a resonant matching between lower level and
higher level representations. At each level of representation (within
each neural layer) there are certain computational constraints that
are expressed within that layer by excitatory or inhibitory
interactions. For simple cells in visual cortex, for instance, an
edge found at one location at, say, 30 degrees, is inconsistent with
an edge found at that SAME location but a different orientation, say
60 degrees. The cells that encode these conflicting interpretations
inhibit each other, so that only one can remain active even when
both receive some stimulation.
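A minimal sketch of that mutual inhibition in Python (the weights,
inputs, and time constants are all made up for illustration):

    def compete(in30, in60, w=1.5, dt=0.1, steps=300):
        # Activities of two cells tuned to 30 and 60 degrees at the
        # same location, each inhibiting the other.
        x = y = 0.0
        for _ in range(steps):
            dx = -x + max(0.0, in30 - w * y)
            dy = -y + max(0.0, in60 - w * x)
            x, y = x + dt * dx, y + dt * dy
        return round(x, 2), round(y, 2)

    print(compete(1.0, 0.8))   # -> (1.0, 0.0): the better-driven cell
                               #    silences its rival

Even though both cells receive stimulation, the competition drives
the pair to a state where only one interpretation survives.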
At a higher level of computation, complex cells at adjacent locations
boost each other if they find themselves along the same line, i.e.
they detect two simple edges that are both parallel and aligned with
one another. The constraints felt individually at each level
(competition at the lower level, cooperation at the higher level)
must interact through a large feedback loop (Grossberg's
cooperative / competitive loop) so that the constraints at all
levels are optimally satisfied by the whole system.
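Here is a toy version of such a loop in Python (my own sketch, not
Grossberg's actual equations; all of the weights are invented). A
lower layer of edge cells at five locations competes between two
orientations at each location, while a higher layer pools
same-orientation activity from neighboring locations and feeds it
back down:

    import numpy as np

    inp = np.array([[1.0, 0.9],   # columns: [aligned, crossing]
                    [1.0, 0.9],
                    [0.9, 1.0],   # location 2: local evidence favors "crossing"
                    [1.0, 0.9],
                    [1.0, 0.9]])

    act = np.zeros_like(inp)
    pad = np.zeros((1, 2))
    for _ in range(600):
        # cooperation: top-down support from same-orientation neighbors
        coop = 0.3 * (np.vstack([pad, act[:-1]]) + np.vstack([act[1:], pad]))
        # competition: inhibition from the rival orientation at each location
        comp = 2.0 * act[:, ::-1]
        act += 0.1 * (-act + np.clip(inp + coop - comp, 0.0, None))

    print(np.round(act, 2))

Location 2 gets pulled into line: the aligned edge wins there even
though the local input slightly favored the crossing orientation,
because the cooperative feedback from above outvotes the bottom-up
evidence.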
It is kind of like a system of balls connected by springs, where each
spring represents a spatial constraint linking two balls. Short
springs between nearby balls enforce local constraints, while long
springs between whole groups of balls enforce more global constraints.
Given certain inputs (some balls clamped into specific positions) the
rest of the network will wiggle and jiggle until it finally relaxes
into a globally stable configuration where the total energy of the
system (the sum of the tensions on the springs) has reached a minimum.
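The spring picture translates almost line for line into gradient
descent on the spring energy. A sketch (again my own toy: ten balls
on a line, two clamped as the "inputs"):

    import numpy as np

    n = 10
    pos = np.random.default_rng(1).uniform(0.0, 9.0, n)
    pos[0], pos[-1] = 0.0, 9.0        # clamped balls: the inputs

    springs = [(i, i + 1, 1.0) for i in range(n - 1)]   # short local springs
    springs.append((0, n - 1, 9.0))                     # one long global spring

    def energy(p):
        # Total tension: squared deviation of each spring from rest length.
        return sum((p[j] - p[i] - rest) ** 2 for i, j, rest in springs)

    for _ in range(2000):
        grad = np.zeros(n)
        for i, j, rest in springs:
            stretch = pos[j] - pos[i] - rest
            grad[j] += 2 * stretch    # each spring tugs on both of its balls
            grad[i] -= 2 * stretch
        grad[0] = grad[-1] = 0.0      # clamped balls do not move
        pos -= 0.05 * grad            # wiggle downhill in energy

    print(np.round(pos, 2), round(energy(pos), 4))  # evenly spaced, energy ~ 0

The free balls settle at evenly spaced positions, the one state that
satisfies every spring at once, just as the neural network relaxes
into the state that best satisfies all of its constraints.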
--
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)
(O)((O))((( slehar at park.bu.edu )))((O))(O)
(O)((O))((( Steve Lehar Boston University Boston MA )))((O))(O)
(O)((O))((( (617) 424-7035 (H) (617) 353-6741 (W) )))((O))(O)
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)