In the process of investigating "self-training" models, I learned about
ART networks; however, I found one potential problem that I wanted to ask
about. It occurs to me that, while the vigilance of the network can vary
depending on success or failure, it varies "globally". However, it seems
that human vigilance varies "locally"; for example, a person raised among
one race of people finds it easier to distinguish individuals of that race
than individuals of others. Similar examples can be drawn, I'm sure.
What this amounts to is that there needs to be a way of disambiguating
closely-related but different stimuli, while still correctly categorizing
noisy, far-related stimuli. ART seems unable to do this.
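For concreteness, here is a toy sketch of what I mean (my own illustration, not taken from any particular ART implementation) of the ART1 match test with its single, global vigilance parameter rho:

```python
import numpy as np

def art1_match(input_vec, prototype, rho):
    """ART1 vigilance test: accept a category only when the overlap
    between the input and the category prototype is at least a
    fraction rho of the input's active features."""
    overlap = np.logical_and(input_vec, prototype).sum()
    return overlap / input_vec.sum() >= rho

# The same rho is applied to every category in the network:
I = np.array([1, 1, 0, 1, 0])         # binary input pattern
w_fine   = np.array([1, 1, 0, 1, 1])  # closely matching prototype
w_coarse = np.array([1, 0, 0, 1, 0])  # coarser prototype

print(art1_match(I, w_fine, 0.9))     # True: all 3 active input features overlap
print(art1_match(I, w_coarse, 0.9))   # False: only 2 of 3 overlap
```

Raising rho everywhere forces fine distinctions even where noisy inputs ought to be lumped together; lowering it everywhere merges the close stimuli I want kept apart. That is the dilemma.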
Is there currently any self-training model, derived either from biological
research or from AI, which provides for this? The only thing that I could
think of was inserting layers between F1 and F2 that would selectively
disambiguate, but then we're back to the problem of how to train them.
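What I am vaguely imagining (a purely hypothetical variant, not a published model) is something like a per-category vigilance, so that crowded regions of the input space can demand finer matches:

```python
import numpy as np

def art1_match_local(input_vec, prototype, rho_j):
    """Hypothetical variant of the ART1 match test in which each
    category j carries its own vigilance rho_j: categories in densely
    populated regions would hold a high rho_j (forcing fine
    distinctions), while isolated categories would keep a low rho_j
    (absorbing noisy inputs).  How rho_j should be adapted is, of
    course, exactly the training problem I am asking about."""
    overlap = np.logical_and(input_vec, prototype).sum()
    return overlap / input_vec.sum() >= rho_j
```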
More generally, has anyone figured out how categorical learning operates
in humans? We must have some way of knowing when to generalize and when
to specialize, right?
Excuse me if my terms aren't correct; I'm an undergrad who suddenly decided
to pursue theoretical and cognitive AI in grad school.
| Bill White +1-614-594-3434 | bwhite at oucsace.cs.ohiou.edu |
| 31 Curran Dr., Athens OH 45701 | bwhite at bigbird.cs.ohiou.edu (alternate) |
| SCA: Erasmus Marwick, Dernehealde Pursuivant, Dernehealde, Middle Kingdom |