William White asked
> What this amounts to is that there needs to be a way of disambiguating
> closely-related but different stimuli, while still correctly categorizing
> noisy, far-related stimuli. ART seems unable to do this.
> Is there currently any self-training model, derived either from biological
> research or from AI, which provides for this? The only thing that I could
> think of was inserting layers between F1 and F2 which would selectively
> disambiguate, but then we're back to the problem of how to train them.
Congrats! You have found the Achilles' heel of ART: the vigilance parameter.
I don't intend to knock ART too much; Grossberg has done a lot to show the
need to join EE/CS with Psych/Bio. However, this one parameter controls (in
an unpredictable way) the entire clustering history of the net.
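To make the complaint concrete, here is a minimal sketch of the ART1 match
criterion that vigilance (rho) gates, assuming binary input vectors and a
single committed prototype (the function name and the example patterns are
mine, for illustration only). The same input is lumped into an old category
or forced into a new one depending solely on where rho is set:

```python
import numpy as np

def passes_vigilance(input_vec, prototype, rho):
    # ART1 match criterion: |I AND w| / |I| >= rho.
    # If it fails, the category is reset and search continues
    # (eventually committing a new category).
    overlap = np.logical_and(input_vec, prototype).sum()
    return overlap / input_vec.sum() >= rho

proto = np.array([1, 1, 0, 0, 0, 0])  # prototype learned from an earlier input
x     = np.array([1, 1, 1, 0, 0, 0])  # a closely related new input (match = 2/3)

print(passes_vigilance(x, proto, rho=0.5))  # True:  lumped into the old category
print(passes_vigilance(x, proto, rho=0.9))  # False: reset, a new category forms
```

Since every input faces the same single threshold, there is no setting of rho
that both splits near-identical stimuli and tolerates noisy distant ones,
which is exactly the dilemma raised above.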
The most proven hierarchical network also happens to be one of the most complex.
K. Fukushima's "Neocognitron" not only distributes "vigilance" across multiple
levels, but also provides a top-down feedback mechanism that lets patterns
"search" inputs for features that "should be there". Be warned, however, that
few people (besides Fukushima) who have tried to implement this net have gotten
useful answers; far too many parameter knobs to twiddle. Check out some of the
early issues of _Neural_Networks_ for references.
Just an EE who would _really_ like to understand just ONE of his thoughts ...
at the neural level!