
NN formats

Jiri Donat jiri_donat at hotmail.com
Mon May 17 03:03:10 EST 1999


In article <373EC872.80D8C02D at tig.com.au>,
  Anton Funk Tism Trees <gatx at tig.com.au> wrote:
> "S. Lorber" wrote:
>
> > Is a NN supposed to mimic the human brain and how sure are we of
> > that?
>
> The fundamental building block of an NN, usually called a node, was
> modelled on animalian neurons (human, worm, insect, etc.). They get
> inputs, combine them somehow, and produce an output, which may feed
> to other nodes. Way, way back, it was hoped/feared that a large
> enough network made of zillions of identical nodes connected together
> would equal or even surpass human consciousness. Arnold
> Schwarzenegger first came into my living room as the agent for such
> an NN (and I'm still trying to get rid of him).
>
> Nowadays, NNs are not so scary. Their use is to approximate
> complicated functions by being "trained" on a set of input-output
> combinations. This process is usually much more efficient than trying
> to explicitly analyse the function under consideration. Some types of
> functions turn out to be very amenable to approximation by an NN,
> while others fail. I know of several well-funded research projects
> trying to use NNs to recognise patterns of "insider trading" on the
> stock market, and to predict stock market trends for advising
> investors. (Will the first type of NN ever help bust the second type?
> T101 vs. T1000?)
>
> NNs are made of collections of nodes, but the higher up the scale of
> complexity you go, the further you get from the biological model.
> Even at the node level, real neurons operate by producing pulses at
> varying rates; most NNs I'm aware of output a floating-point number.
> The significance is that artificial NNs are able to produce results
> in a single pass from input to output; real NNs are very slow, and
> rely more heavily on massive parallelism. Another area of divergence
> between NNs and brains is the dimension of time. This is an area of
> active research in NN theory.

To me, the biggest difference between natural NNs and ANNs is that
every digital simulation of an ANN has a discrete, finite set of
states (however large that set is). This "limitation" (if we
understand this feature of digital representations of ANNs on today's
computers as a limitation - and some theories do) is inherent in our
existing tools for ANN simulation: digital computers.

Every "artificial neuron" is an exactly defined unit. It is originally
defined using general mathematical functions, but in its computer
realisation (if we don't use an analogue computer for the simulation)
it has a discrete set of states - however complex the original
description is. So it can, in general, be described as a
multidimensional table (combinations of inputs and the output, or
outputs, depending on the ANN model). Generally speaking, this table
evolves over time (as the network "learns" and "lives"), but in the
majority of ANN models the table can simply be extended by a few
additional columns (weights, threshold) and is then *static* over the
whole life of the ANN. So most of today's ANNs are reducible to
Cellular Automata.
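As a small illustration of this point (my own sketch, not from the
thread - the weights and threshold are arbitrary): a binary threshold
unit has only finitely many input combinations, so the whole "neuron"
can be replaced by an exhaustively enumerated lookup table - the
"multidimensional table" above.

```python
# Sketch: a McCulloch-Pitts-style threshold unit with fixed weights.
# With 3 binary inputs it has only 2^3 = 8 distinct states, so the
# function and its lookup table are interchangeable.
from itertools import product

def neuron(inputs, weights=(0.5, 0.5, -0.3), threshold=0.4):
    """Weighted sum of inputs compared against a threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Enumerate every possible input: the table fully describes the unit.
table = {inp: neuron(inp) for inp in product((0, 1), repeat=3)}

# From here on, the table can stand in for the function entirely.
assert all(table[inp] == neuron(inp) for inp in table)
```

Adding "learning" to such a unit just means letting the weight column
of the table change; at any given moment the unit is still a finite
table.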

So artificial "neurons" are, from this (pragmatic) point of view, a
crude simplification of natural neurons.

This difference leads to some very strange theories and speculations
(e.g. that the complexity of a neural network is hidden in the
complexity of its neurons). Some well-known theories even speculate
that sub-neuronal processes are responsible for our consciousness...

> One basic aspect is that the chemically-based excitation of a neuron
> takes time to decay, so the behavior of the system is influenced by
> its recent history. Such phenomena would be extremely undesirable in
> typical NN applications. Then there are the non-electrical factors in
> brains: hormones, etc., carried in the blood. And so on.
>
> Most NNs I've come across, including the 2 I just mentioned, are
> entirely unlike flesh-and-blood, because they are trained by a
> technique called back-propagation. Training sets consist of pairs of
> inputs and "correct" outputs. Weights on the internal connections are
> adjusted to bring the NN closer to producing the given output for the
> given input. Learning for us animals is done by an entirely different
> paradigm.

Well, who knows that?...
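For concreteness, the weight adjustment described in the quote can be
sketched minimally (my own example; for a single linear unit,
back-propagation reduces to the delta rule, and the data, learning
rate and target function below are made up):

```python
# Minimal sketch of training on input/output pairs, as quoted above:
# each weight is nudged to bring the output closer to the target.
def train(pairs, weights, rate=0.1, epochs=200):
    for _ in range(epochs):
        for inputs, target in pairs:
            out = sum(w * x for w, x in zip(weights, inputs))
            err = target - out
            # Delta rule: move each weight along the error gradient.
            weights = [w + rate * err * x
                       for w, x in zip(weights, inputs)]
    return weights

# Learn y = 2*x1 - x2 from four examples, starting from zero weights.
pairs = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3)]
w = train(pairs, [0.0, 0.0])
```

After training, `w` should be close to `[2, -1]`; a real multi-layer
network propagates the error term backwards through hidden layers,
but the per-weight update has this same shape.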

> Pleasure
> and pain, for instance, are absent from (most?) NN models.

And they certainly should be, because they are part of a different
layer. If I am building a computer, I am also not unhappy that
"quadratic equations" are not a part of my hardware...

> Additionally, human brains
> seem to come with a certain amount of structure built in thanks to
evolution. Language
> abilities are foremost in the list of such inherited structure,

I would say it is rather the ability to construct relationships
between objects (e.g. between black and white horses, but also between
a natural object - the horse - and an artificial one - the word
"horse").

Best regards from Prague

--
Jiri Donat (jiri at calresco.org)





More information about the Neur-sci mailing list

Send comments to us at biosci-help [At] net.bio.net