Stephen Wood <swood at papyrus.mhri.edu.au> on Thu 23 Jul 1998 wrote:
>[...]
>Well, true. Nobody likes being told things about their subject by outsiders.
>Especially when they seem to be lofty over-arching theories of the mind which
>are made up of collections of long and interesting sounding words. Management
>consultants do the same thing (although not relating to the mind, of course).
>> /^^^^^^^^^^^\  The Architecture of a Robot Brain   /^^^^^^^^^^^\
>> /visual memory\                     ________       / auditory \
>> | /--------|-------\               / syntax \   |   memory      |
>> |  | recog-|nition |               \________/---|-------------\ |
>> |   ___|___     |       |              |        |    _______  | |
>> |  /image  \    |     __V___        ___V___     |   /stored \ | |
>> | / percept \   |    /deep  \------/lexical\----|--/ phonemes\| |
>> | \ engrams /---|---/concepts\----/concepts \---|--\ of words/  |
>> |  \_______/    |   \________/    \_________/   |   \_______/   |
> I know this has lost something in the pasting into this message,
> but I don't think it really matters since it doesn't actually
> mean anything. It's all very well to connect things together with
> little lines on a piece of paper (or a usenet group) but unless you
> can define what a 'deep concept' is,
A deep concept is one or more ganged fibers holding a concept of
a phenomenon by virtue of holding all the associative tags which
define and record and remember that phenomenon. Follow links from
http://www.scn.org/~mentifex/ to read the Nolarbeit Theory Journal,
which, although it is enshrined in some kind of monument to mad
scientists at Carnegie Mellon University, is nevertheless the very
instrument by which the above-diagrammed Standard AI Mind Model arose.
> how that is translated into something a neuron can represent and why it
> needs to be connected to 'percept engrams' you might as well not bother.
Just by holding (gathering? focussing?) the associative tags over to
experiential memory engrams, a long neuronal fiber concentrates the
concept-ness of the engrams clustered by logic and by similarity.
For example, just by holding associative tags over to "dog" memories,
the neuronal fiber-gang for "dog" holds the concept of "dog."
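To make the idea more concrete than little lines on paper, here is
a rough sketch in Python (not the actual Mind.rexx code; every class
and variable name below is my own illustration for this post):

  # Sketch only: a "deep concept" modeled as a gang of fibers
  # whose whole job is to hold associative tags over to engrams.

  class Engram:
      """One experiential memory trace, e.g. one sighting of a dog."""
      def __init__(self, description):
          self.description = description

  class Fiber:
      """A long neuronal fiber holding associative tags to engrams."""
      def __init__(self):
          self.tags = []              # each tag points at one engram

      def tag(self, engram):
          self.tags.append(engram)

  class DeepConcept:
      """One or more ganged fibers; the concept IS the tags they hold."""
      def __init__(self, *fibers):
          self.fibers = list(fibers)

      def engrams(self):
          """The phenomenon as defined, recorded, and remembered."""
          return [e for f in self.fibers for e in f.tags]

  # The fiber-gang for "dog" holds the concept of "dog" just by
  # holding associative tags over to individual "dog" memories.
  fiber = Fiber()
  fiber.tag(Engram("a collie seen in the park"))
  fiber.tag(Engram("a barking terrier next door"))
  dog = DeepConcept(fiber)
  print([e.description for e in dog.engrams()])

The point of the sketch is that the concept is nothing over and above
its gang of tags: remove the tags and the concept of "dog" is gone.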
But it was only when I coded the AI program Mind.rexx in 1993 and 1994
that I perceived the above-diagrammed necessity for both "deep" concepts
and "lexical" (one for each natural language) concepts. The deep
concepts form "deep structure" in Chomskyan parlance, and the lexical
concepts are the staging ground for Chomskyan "surface structure,"
which, however, "surfaces" in the auditory memory channel, where words
and morphemes are strung together into actual sentences of thought.
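Again as a rough Python sketch (assuming a toy two-language lexicon
invented for this post, not the actual Mind.rexx tables), the division
of labor is: one language-neutral deep concept, one lexical concept
per natural language, and surface structure emerging as a string of
stored phonemes in the auditory channel:

  # Sketch only: deep concepts vs. per-language lexical concepts.

  deep_concept = "DOG"            # language-neutral deep structure

  # One lexical concept per natural language, each pointing at the
  # stored phonemic form of its word in auditory memory.
  lexical_concepts = {
      "English": {"deep": "DOG", "phonemes": "d-o-g"},
      "German":  {"deep": "DOG", "phonemes": "h-u-n-d"},
  }

  def surface(deep, language):
      """Surface structure 'surfaces' in the auditory memory channel:
      look up the lexical concept for this language and emit the
      phonemes of its word."""
      lex = lexical_concepts[language]
      assert lex["deep"] == deep   # lexical tag points back to the deep concept
      return lex["phonemes"]

  print(surface(deep_concept, "English"))   # d-o-g
  print(surface(deep_concept, "German"))    # h-u-n-d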
> This group is interesting. It seems to be made up of two major subtypes.
> Those who think that neuroscientists are small-minded, and those
> (like the incomparable Dr LeFever) who try to show that they aren't;
> they just like reasonable arguments!
> Stephen Wood
http://www.scn.org/~mentifex/webcyc.html Philosophia Divisa in Partes III