>>For two years now, I have been exploring the notion that the most
>>important product of our brain - imagination - could well be an effort to
>>*neutralize* our sensory input, preventing it from penetrating the neural
>>networks of our brain.
>This reminds me of a filter model of intelligence, where an
>intelligence takes in all available sensory input, filters out that
>which is irrelevant or unacceptable in some way. Only significant
>information is processed, or info which the intelligence can cope
>with, or process.
You seem to mix up neutralization with plain suppression (inhibition).
Neutralization requires precise knowledge of the incoming pattern,
and since the construction of this signal takes time, the system
is effectively anticipating. This anticipation is the main process,
not the processing of what does come in (although the value
of what does enter is very high, since it indicates a lack of knowledge
about the current environment).
>> The results include a computer program that can
>>effectively construct such a network and a number of (in my opinion)
>This sounds interesting in itself. Would you say the computer program
>models a specific information filter?
I tend not to call it a filter, because the only information the
system disposes of is information that is already in, and exploited
by, the system.
>>1 How can a network learn to neutralize its sensory input?
>>By linking the neutralization level to global neural growth and
>>starvation, connections that increase neutralization can be collected over
>>time: Increasing neutralization strengthens all connections and grows new
>>(weak) ones. Decreasing neutralization weakens all connections, destroying
>>the weakest (new) ones. Note that a single *quantity* is monitored in
>>order to develop a *quality*.
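The growth/starvation rule quoted above can be sketched in a few lines.
This is only my own illustrative reading of it; the function name, rates,
and pruning threshold are assumptions, not the actual program:

```python
import random

def update_connections(weights, neutralization, prev_neutralization,
                       growth_rate=0.1, prune_threshold=0.05):
    """Illustrative sketch of the global growth/starvation rule.

    A single scalar (the change in the neutralization level) modulates
    *all* connection weights at once; no individual connection is
    credited or blamed.
    """
    delta = neutralization - prev_neutralization
    if delta > 0:
        # Neutralization increased: strengthen every connection ...
        weights = [w + growth_rate * w for w in weights]
        # ... and grow one new, weak connection.
        weights.append(random.uniform(0.0, prune_threshold))
    else:
        # Neutralization decreased: weaken every connection ...
        weights = [w - growth_rate * w for w in weights]
        # ... destroying the weakest (typically the newest) ones.
        weights = [w for w in weights if w > prune_threshold]
    return weights
```

Note how the rule monitors one *quantity* (the neutralization level) yet
selects, over many iterations, for the *quality* of anticipating the input.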
>This sounds to me like a network based on inhibitors rather than
>reinforcement.
This network matches its input with its predictions; if the two
match, the input remains inactive. This match is a logical XOR
function, which can be performed by a simple combination of
two excitatory connections and one inhibitory connection.
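With binary threshold units, that combination can be built concretely:
the input and prediction each make an excitatory connection onto the
output unit, while an interneuron that fires only when both are active
inhibits it. A minimal sketch (unit names are my own):

```python
def step(x, threshold):
    # Simple threshold unit: fires (1) when drive reaches threshold.
    return 1 if x >= threshold else 0

def xor_unit(inp, pred):
    # Interneuron fires only when input AND prediction are both active.
    both = step(inp + pred, 2)
    # Output: two excitatory connections (inp, pred) plus one strong
    # inhibitory connection from the interneuron.
    return step(inp + pred - 2 * both, 1)
```

When the prediction matches the input (both on or both off), the output
stays silent; only an unpredicted input gets through.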
>>The increasing gain of the network proved to be a great obstacle for its
>>development. New connections were selected for their neutralization of
>>already sparse patterns (due to the effect of the network already in
>>place). However, these connections proved to be far too sensitive to face
>>the original, complete input. In other words, the network could not simply
>>be switched on facing a familiar pattern: it had to 'replay' its evolution
>>very quickly in order to regain the original level of performance. The
>>neural correlate of this mechanism is suggested to be the modulating
>>effect of our *diffuse systems of arousal*: more recent connections are
>>suggested to be more sensitive to this modulation, resulting in the
>>required (partial) 'replay' of the network's development with each EEG
>>wave.
>This would seem to explain why people go into shock, or have "nervous
>breakdowns".
Check out my other (very similar) recent post 'From synapse to
Soul'. In this post I mention epileptic seizures in this context.
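The 'replay' idea can be made more tangible with a sketch: if newer
connections are more sensitive to the diffuse arousal signal, then
sweeping arousal from low to high switches connections on in roughly the
order they were grown, replaying the network's development. The mapping
from age to sensitivity below is purely my own assumption:

```python
def effective_weights(weights, ages, arousal):
    """Sketch: newer connections (smaller age) are more sensitive to a
    global arousal signal in [0, 1]. At low arousal only the oldest
    connections are effective; ramping arousal upward 'replays' the
    network's development."""
    max_age = max(ages) or 1  # guard against all-new networks
    out = []
    for w, age in zip(weights, ages):
        sensitivity = 1.0 - age / max_age  # newest -> sensitivity 1.0
        # A connection becomes effective only once arousal reaches
        # its sensitivity level.
        out.append(w if arousal >= sensitivity else 0.0)
    return out
```

With each arousal wave the oldest, coarsest connections act first,
so the newer, more sensitive ones never face the raw, complete input.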
>Do you think it's necessary to create an internal model of the outside
>world? Or perhaps an intelligence only needs to filter out useless
>information?
To filter out known information (useless to process again and again),
you will need that internal model just as well!
Regards, Mervyn