
Whence cybernetics?

Jacob Galley gal2 at kimbark.uchicago.edu
Sun Jul 4 13:01:20 EST 1993

I received the following reply, and figured I might as well post it.
(I've added comp.ai.neural-nets to the list, since I now know it exists.)


Date: Sun, 4 Jul 1993 00:41:16 -0700 (PDT)
From: Melvin Rader <radermel at u.washington.edu>
Subject: Re: Whence cybernetics

First off, I'm replying by mail instead of posting because I can't post
yet -- I only found this modem in my parents' computer a couple of days
ago, and I haven't yet figured out how to use my system's editor for
posting to usenet.

Anyway, in response to your question:

	By cybernetics, I take you to mean the study of neural networks
and connectionist models of artificial intelligence.  By no means is it
dead, or even all that obscure.  As an undergraduate at the Evergreen
State College in Olympia, WA this year I took four credits of
'Connectionism' and another four in neural network programming.  I
believe there's a newsgroup devoted to neural networks as well.
	Seymour Papert has written a whimsical account of the history of
network vs. symbolic approaches to artificial intelligence:

	"Once upon a time two daughter sciences were born to the new
science of cybernetics.  One sister was natural, with features inherited
from the study of the brain, from the way nature does things.  The other
was artificial, related from the beginning to the use of computers.  Each
of the sister sciences tried to build models of intelligence, but from
very different materials.  The natural sister built models (called neural
networks) out of mathematically purified neurones.  The artificial sister
built her models out of computer programs.
	"In their first bloom of youth the two were equally successful and
equally pursued by suitors from other fields of knowledge.  They got on
very well together.  Their relationship changed in the early sixties when
a new monarch appeared, one with the largest coffers ever seen in the
kingdom of the sciences:  Lord DARPA, the Defense Department's Advanced
Research Projects Agency.  The artificial sister grew jealous and was
determined to keep for herself the access to Lord DARPA's research funds. 
The natural sister would have to be slain.
	"The bloody work was attempted by two staunch followers of the
artificial sister, Marvin Minsky and Seymour Papert, cast in the role of
the huntsman sent to slay Snow White and bring back her heart as proof of
the deed. Their weapon was not the dagger but the mightier pen, from which
came a book - Perceptrons ..."

	Minsky and Papert's book did effectively kill further research
into neural networks for about two decades.  The thrust of the book
was that, with the learning algorithms that had been developed by then,
neural networks could learn only linearly separable problems, which are
always simple (this was proved mathematically; XOR is the classic example
of a problem that is not linearly separable).  Networks existed which could
solve more complicated problems, but they had to be "hard wired" - the
person setting up the network had to set it up in such a way that the network
already "knew" everything that it was going to be tested on; there was
no way for such a network to learn.  (The book also raised some other,
more philosophical concerns.)  Since learning was basically the only
advantage neural network models had over symbolic models (aside from an
aesthetic appeal due to their resemblance to natural models), research into
neural networks died out.  (Also, NN research is philosophically
associated with behaviorism -- NNs solve problems through association --
so the decline of behaviorism also helped bring down the NN field.)
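	The limitation can be sketched in a few lines of modern Python (an
illustrative example, not from Minsky and Papert's book): a single-layer
perceptron trained with the classic perceptron learning rule masters the
linearly separable AND function, but no setting of its weights can get all
four XOR cases right.  All names and parameters below are arbitrary choices.

```python
# Single-layer perceptron with the classic perceptron learning rule.
# It learns the linearly separable AND function, but XOR is not linearly
# separable, so no single-layer perceptron can compute it.

def train_perceptron(samples, epochs=100, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = t - y                     # perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_net = train_perceptron(AND)
xor_net = train_perceptron(XOR)

and_ok = all(and_net(*x) == t for x, t in AND)   # True: AND is separable
xor_ok = all(xor_net(*x) == t for x, t in XOR)   # False: XOR is not
```

The perceptron convergence theorem guarantees the AND case succeeds; the
XOR case is guaranteed to fail no matter how long you train.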
	However, in the late 70's (I think) the 'backpropagation training
algorithm' was developed.  Backpropagation allows the training of neural
networks which are powerful enough to solve non-linearly separable
problems, although it has no natural equivalent.  With the development of
backpropagation, and with the association of several big names with the
field, research into network models of artificial intelligence revived.
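	A minimal backpropagation sketch (again illustrative Python, not part
of the original discussion): a small two-layer network of sigmoid units,
trained by gradient descent on the squared error, can reduce its error on
XOR -- the problem a single-layer perceptron cannot represent.  The hidden
layer size, learning rate, and epoch count are arbitrary assumptions.

```python
# Two-layer sigmoid network trained with backpropagation on XOR.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

XOR = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
       ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

H = 4                                           # hidden units (arbitrary)
wh = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # w1, w2, bias
wo = [random.uniform(-1, 1) for _ in range(H + 1)]                  # weights + bias

def forward(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in wh]
    o = sigmoid(sum(wo[i] * h[i] for i in range(H)) + wo[H])
    return h, o

def loss():
    return sum((forward(*x)[1] - t) ** 2 for x, t in XOR)

initial = loss()
lr = 0.5
for _ in range(5000):
    for (x1, x2), t in XOR:
        h, o = forward(x1, x2)
        d_o = (o - t) * o * (1 - o)             # output-layer error signal
        for i in range(H):
            d_h = d_o * wo[i] * h[i] * (1 - h[i])   # propagated back one layer
            wo[i] -= lr * d_o * h[i]
            wh[i][0] -= lr * d_h * x1
            wh[i][1] -= lr * d_h * x2
            wh[i][2] -= lr * d_h
        wo[H] -= lr * d_o
final = loss()                                  # far below the initial error
```

The key step is the chain-rule pass that turns the output error `d_o` into
hidden-layer error signals `d_h` -- the "back propagation" of the name.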
	I understand the term 'Connectionism' to apply to a field which
draws from neural network research and research into the brain.  In
contrast to whatever book you were quoting from, I understand
connectionist thought to be at odds with the symbolic approach to
artificial intelligence.  A good book to read on the subject is
Connectionism and the Mind by Bechtel and Abrahamsen.  It is a good
introduction to connectionism and goes into the philosophy behind it all,
although some of the math is off.


* What's so interdisciplinary about studying lower levels of thought process?
				  <-- Jacob Galley * gal2 at midway.uchicago.edu

More information about the Neur-sci mailing list
