the nervous system is stateless... for a crude analogy, imagine a
leaky system that's structurally dependent upon being inflated by
bicycle pumps attached to valves at various points in the
system... depending on the momentary inflation, components of
the system interface differently with each other, and the
system's outputs vary accordingly.
in the CNS, the ionic conductances, upon which the ability to
generate action potentials depends, do "exactly" this (only
there are no "valves"... the ionic conductances are generated as a
function of ongoing neural and glial activations)... and because
"charge" varies continuously, the result is a stateless,
infinitely-configurable system.
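
a crude sketch of the "continuously re-configured" point, in
python... everything below is made up for illustration (the names,
the constants, the update rule)... it is not a model of any actual
conductance:

# crude illustrative sketch only... a "unit" whose effective gain
# (think: momentary inflation / ionic conductance) drifts continuously
# with ongoing activity, so the same input maps to different outputs
# at different moments -- there is no fixed input->output table.

import math

class DriftingUnit:
    def __init__(self):
        self.conductance = 1.0   # continuously varying, never reset to a stored "state"

    def step(self, drive, dt=0.001):
        # the "conductance" leaks toward rest and is pushed by ongoing activity
        leak = (1.0 - self.conductance) * 0.1
        self.conductance += dt * (leak + 0.5 * drive)
        # momentary output depends on the momentary conductance
        return math.tanh(self.conductance * drive)

unit = DriftingUnit()
for t in range(5):
    print(unit.step(drive=1.0))   # same input, slightly different output each call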
K. P. Collins
Sergio Navega wrote:
> Jiri Donat wrote in message <7hoift$1ie$1 at nnrp1.deja.com>...
> >
> >To me, the biggest difference between a natural NN and an ANN is that
> >every digital simulation of an ANN has a discrete set of states (however
> >large that set is). This "limitation" (if we understand this feature of
> >digital representations of ANNs on today's computers as a limitation -
> >and some theories do) is inherited from our existing tools for ANN
> >simulation - digital computers.
> >
>
> I'm not sure I understand you here. In fact, biological neurons
> seen from "outside" are just things that fire or don't fire, in a
> purely discrete manner. There doesn't seem to be any other meaningful
> characteristic (such as waveshape or voltage) in the output of a
> biological neuron, just the presence or absence of the pulse. But our
> models fail to account for what happens *inside* the neuron, as the
> decision to fire or not fire seems to be the result of a *much*
> more complicated process in biological neurons than the simple
> weight summing/thresholding of our ANNs.
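
a rough sketch of the contrast being drawn here... the "ANN unit" is
a one-shot weighted sum and threshold, while even a drastically
simplified spiking unit (leaky integrate-and-fire, with arbitrary
made-up parameters) has dynamics unfolding in time before the
all-or-none output:

# rough contrast, illustrative only: the ANN unit is a one-shot
# weighted sum + threshold; the leaky integrate-and-fire unit (itself
# still a huge simplification of a real neuron) has membrane dynamics
# unfolding in continuous time before the all-or-none spike.

def ann_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def lif_spikes(current, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    v, spikes = v_rest, []
    for t, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_t)   # membrane integrates and leaks
        if v >= v_thresh:                       # the spike itself is all-or-none...
            spikes.append(t * dt)
            v = v_reset                         # ...but the process behind it is dynamical
    return spikes

print(ann_unit([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))   # -> 1
print(lif_spikes([20.0] * 300))                              # spike times in ms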
>> >Every "artificial neurone" is an exactly defined unit which is
> >originally defined using general mathematical functions, but in its
> >computer realisation (if we don't use an analogue computer for its
> >simulation) has a discrete set of states - however complex the original
> >description is. So it can be generally described as a multidimensional
> >table (=combinations of inputs and the output, or outputs, depending on
> >the ANN model). Generally speaking, this table evolves over the time
> >(as the network "learns" and "lives"), but in the majority of ANN
> >models this table could be just extended by some additional columns
> >(weights, threshold) and then it is *static* over the whole life of
> >ANN. So most of today's ANNs are reducible to Cellular Automata.
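
a small sketch of the "table" point... with fixed weights and a
fixed threshold (the values below are arbitrary), a discrete unit
over binary inputs collapses to a finite lookup table that never
changes:

# sketch of the point above: with fixed weights and threshold, a
# discrete ANN unit over binary inputs collapses to a finite lookup
# table -- enumerate every input combination once and the unit is
# fully specified for its whole "life".

from itertools import product

def unit(x, weights=(0.6, -0.2, 0.8), threshold=0.5):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > threshold else 0

table = {x: unit(x) for x in product((0, 1), repeat=3)}
print(table)   # 8 rows: the unit's entire behavior if the weights never change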
>
> Which turns ANNs into things like Turing machines, and thus susceptible
> to being processed by a "symbolic machine". I think that all these
> facts put ANNs into the class of useful things that humans have
> created without much relevance to the "real neuron". What worries
> me is that some research in cognitive science is being conducted
> using connectionist systems as models, and maybe this is worse than a
> rough approximation of the real thing. In particular, the behavior
> of populations of neurons (with temporal synchrony, ensemble "coding",
> etc.) may be a more relevant thing to model than ANNs.
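
a toy illustration of why population timing can matter... two
made-up populations with identical total spike counts, where only a
coincidence count within a small window tells them apart (all spike
times below are invented for the example):

# very rough sketch: two populations with the same per-neuron spike
# counts, but one fires in near-synchrony -- a coincidence count within
# a small window separates them, while a rate readout does not.

def coincidences(spike_trains, window=2.0):
    times = sorted(t for train in spike_trains for t in train)
    return sum(1 for a, b in zip(times, times[1:]) if b - a <= window)

synchronous = [[10.0, 50.0], [10.5, 50.5], [11.0, 51.0]]
asynchronous = [[10.0, 50.0], [25.0, 70.0], [40.0, 90.0]]

for label, pop in [("synchronous", synchronous), ("asynchronous", asynchronous)]:
    total = sum(len(train) for train in pop)
    print(label, "total spikes:", total, "coincidences:", coincidences(pop))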
>
> Regards,
> Sergio Navega.