The S-V-O poles of subject-verb-object have been floating through
my mind for weeks now, representing my immediate
http://www.scn.org/~mentifex/ Mentifex AI task.
All my previous AI software in
http://www.scn.org/~mentifex/mindrexx.html REXX,
http://www.geocities.com/mentifex/mind4th.html Forth and
http://victoria.tc.ca/~uj797/jsaimind.html JavaScript has given
me these floating S-V-O poles to work with. My Mind software
creates these poles but does not yet handle them properly. I am
trying to conceptualize the S-V-O rules that I will encode in
software.
Recently my main desire has been to achieve a kind of "imperfect
interlock" in the reactivation of old associations stored in the
knowledge base (KB). If a sentence has been stored in time past,
I want there to be a strong enough "interlock" that the sentence
tries to reassert itself. On the other hand, I want the
"interlock" to be "imperfect" so that a really strong knowledge
or belief may override the "imperfect interlock" of a stored
falsehood.
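In rough JavaScript, such an imperfect interlock might be
sketched as follows. The property names and numeric values here
are illustrative assumptions, not actual Mind code:

  // Sketch: a remembered idea reasserts itself via an interlock
  // boost, but a sufficiently strong belief may still override it.
  var INTERLOCK = 20;   // boost given to the stored sentence
  var OVERRIDE  = 35;   // margin a competing belief needs to win

  function reassert(oldIdea, beliefs) {
    oldIdea.activation += INTERLOCK;       // the interlock
    var winner = oldIdea;
    for (var i = 0; i < beliefs.length; i++) {
      if (beliefs[i].activation > oldIdea.activation + OVERRIDE &&
          beliefs[i].activation > winner.activation) {
        winner = beliefs[i];               // the imperfection
      }
    }
    return winner;  // either the stored idea or the stronger belief
  }

Tuning the OVERRIDE margin decides how hard it is for present
knowledge to shout down a stored falsehood.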
It is completely possible to kludge together some software code
that yields the desired "imperfect interlock," but I would like
to find a rather elegant solution -- one that can be easily
understood by many programmers and can be translated into many
http://www.geocities.com/mentifex/webcyc.html#proglangs
programming languages.
Various approaches present themselves to the contemplative mind.
We could fasten upon a verb-node and force the software to lock in
only immediate associations ("pre" and "seq") away from the verb.
We could make the software emphasize forward (S-V-O) associations
rather than backward associations.
In forming a sentence of mental output, we could let partial
results influence the completion of results. Thus we would have
an artificial mind that could "free associate" from any given
word.
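In tentative JavaScript, with invented node objects carrying
"pre" and "seq" links and a numeric activation, the verb-node
approach might look like this:

  // Sketch: fasten upon a verb node and lock in only its
  // immediate backward ("pre") and forward ("seq") neighbors.
  function lockFromVerb(verbNode) {
    var frame = { subject: verbNode.pre,   // one hop backward only
                  verb:    verbNode,
                  object:  verbNode.seq }; // one hop forward only
    // Partial results influence completion: a found subject
    // lends half of its activation forward to the object slot.
    if (frame.subject && frame.object) {
      frame.object.activation += frame.subject.activation / 2;
    }
    return frame;
  }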
Although activation levels in the psi mindcore create the
sentence of output, these psi levels must first be "flush-
vectored" up into the English lexical "en" array where syntax may
grab hold of them.
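A possible sketch of such flush-vectoring, assuming each psi
concept carries a hypothetical "enx" tag pointing at its English
lexical counterpart in the "en" array:

  // Sketch: copy activation from the deep psi mindcore up into
  // the English lexical "en" array for the syntax module to use.
  function flushVector(psi, en) {
    for (var i = 0; i < psi.length; i++) {
      if (psi[i].activation > 0) {
        en[psi[i].enx].activation = psi[i].activation;
      }
    }
  }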
Now, perhaps we have been doing something wrong by transferring
the entire set of S-V-O activation levels from the "psi" array up
into the "en" array. After all, we have had to distort things a
little by engineering the subject-noun to have slightly more
activation than the object-noun, so that the subject-noun would
be "found" first. Such a barely discriminating process has
introduced the problem of the erroneous wandering of associations.
When we generate a sentence, what are we looking for initially, a
subject, or a verb?
This line of reasoning yields an insight about the interrogative
pronoun "what."
If we ask the current AI, "What kills birds?", we are in danger
of receiving the question itself back as the answer. However, if
we think in terms of the AI spitting out a list of multiple
answers, then the first answer, "What kills birds," is bypassed
in favor of a desired answer, such as, "Poison kills birds."
The interrogative pronoun "what" then becomes a sacrificial
placeholder.
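In sketch form, with hypothetical candidate-answer objects, the
sacrificial placeholder might be bypassed like this:

  // Sketch: generate several candidate answers and discard any
  // whose subject is still the interrogative pronoun itself.
  function answerQuery(candidates) {
    for (var i = 0; i < candidates.length; i++) {
      if (candidates[i].subject !== "what") {
        return candidates[i];   // e.g. "Poison kills birds."
      }
    }
    return null;  // no real answer found; remain silent
  }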
Luckily, the S-V-O framework is so small, with its mere three
elements, and yet so crucially important as the central function
of mind, that we may brainstorm here and amply develop various
exhaustive lines of inquiry. It took the whole AI project, over
its years and decades of development, to bring us to this point
where we may productively concentrate our educated attention on
the S-V-O framework.
One of the lemma-style helper considerations in our S-V-O design
is to consider what must happen when the KB is queried with a
single word known to the AI only as a direct object and not as a
subject of a verb. Since it is impossible for the object-noun to
start a sentence, the associative process must first spread the
activation far enough to flush out both a verb and a subject for
the S-V-O framework terminating with the given object-noun. A
single pass through the S-V-O software code might find only the
verb associated with the object-noun, and a second pass may be
necessary for finding the associated subject-noun. Therefore we
want an AI that will be iteratively patient enough to find a
subject-noun and not prematurely emit garbage.
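An illustrative two-pass sketch, again assuming hypothetical
"pre" links running backward from object to verb to subject:

  // Sketch: iterate backward from a known object-noun, flushing
  // out first a verb and then a subject, and emit nothing until
  // the whole S-V-O frame is filled.
  function completeFrame(objectNode, maxPasses) {
    var frame = { subject: null, verb: null, object: objectNode };
    for (var pass = 0; pass < maxPasses; pass++) {
      if (!frame.verb && objectNode.pre) {
        frame.verb = objectNode.pre;      // first pass: the verb
      } else if (frame.verb && frame.verb.pre) {
        frame.subject = frame.verb.pre;   // second pass: the subject
      }
      if (frame.subject && frame.verb) {
        return frame;          // a complete thought, safe to emit
      }
    }
    return null;  // be patient; emit no premature garbage
  }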
The way things are in the AI right now, the English sentence
structure simply "harvests" the S-V-O elements from the "en"
array and does not interactively elicit concepts from the "psi"
mindcore array. Since we do plan on handling longer sentences
that include prepositional phrases such as "in the house," we may
have to start letting the English syntax module poke around in
the psi mindcore so as to find and flush out whatever disparate
elements it needs. In such a light, the syntax module becomes an
engine of thought, not merely an expressor of thought.
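One speculative way to let syntax elicit concepts directly from
the psi array, assuming a hypothetical "role" tag on each
concept:

  // Sketch: instead of merely harvesting the "en" array, the
  // syntax module asks the psi mindcore for whatever constituent
  // it still needs, taking the most active concept that fits.
  function elicit(psi, role) {
    var best = null;
    for (var i = 0; i < psi.length; i++) {
      if (psi[i].role === role &&
          (best === null || psi[i].activation > best.activation)) {
        best = psi[i];
      }
    }
    return best;  // e.g. elicit(psi, "preposition") for "in the house"
  }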
We could perhaps establish among the mindcore activations a
threshold level whose attainment serves to activate the syntax
module. In that way, the AI mind would be more natural, more
spontaneous, and less code-bound or rule-bound. In software we
would implement a numeric threshold activation-level for any psi
concept to activate the English syntax module.
Such a mindcore triggering mechanism would get rid of the
longstanding problem where the AI generates a nonsense-thought
out of concepts that are only barely activated.
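A minimal sketch of such a numeric trigger, with an arbitrary
threshold value:

  // Sketch: the syntax module fires only when some psi concept
  // crosses the threshold, so that barely activated concepts can
  // no longer start a nonsense-thought.
  var THRESHOLD = 50;   // illustrative activation threshold

  function checkTrigger(psi) {
    for (var i = 0; i < psi.length; i++) {
      if (psi[i].activation >= THRESHOLD) {
        return psi[i];   // this concept triggers the syntax module
      }
    }
    return null;         // nothing active enough; no thought forms
  }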
Such a triggering mechanism might also facilitate the solidifying
of ideas as adumbrated in the more speculative final entries of
http://www.geocities.com/mentifex/theory3.html 1979 in the
Nolarbeit Theory Journal. In other words, associativity from a
highly charged concept in the mindcore might not be enough to
electrify a thought within the mindcore alone, until the syntax
module, triggered by the single concept, forces the associative
ictus to snake through the entire S-V-O framework of a complete
thought. In such a scheme, the syntax module complements the
associative grid of the mindcore.
The idea of letting the syntax module interact flushingly with
the psi mindgrid is perhaps an example of designing something
that has a built-in capacity to grow. We may start with three-
word S-V-O sentences, but we may adventitiously be able to
introduce such add-ons as prepositional phrases or subordinate
clauses.
In a different area of the AI, it may be memetically good to
release publicly in both Forth and JavaScript a patient AI that
waits for user input and does not race ahead in runaway mode.
The central idea here is to leave it to the creative users to tweak
the AI into a default independent mode. An ancillary idea is
that the simple, newborn AI does not have enough bootstrap
material to keep it busy if it tries to indulge in prolonged,
independent thinking. Since each early release is meant as a
teaching AI, it is good to keep the code as simple as possible
and as standard as possible across programming languages.
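A bare-bones sketch of such a patient main loop, with a single
flag left in plain sight for creative users to tweak:

  // Sketch: the AI idles until the user supplies input; flipping
  // one flag turns on a default independent (runaway) mode.
  var INDEPENDENT = false;   // tweak to true for independent thought

  function mainLoop(getInput, think) {
    while (true) {
      var input = getInput();    // blocks, waiting on the user
      if (input !== null) {
        think(input);            // respond to the user
      } else if (INDEPENDENT) {
        think(null);             // free-run only if tweaked
      }
    }
  }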
If we let the http://www.geocities.com/mentifex/english.html
English syntax module interact with the psi mindgrid, we could
let syntax force associations "pre-ward" until it flushed out a
strong candidate subject-noun. The same process will have
activated a strong candidate verb, especially if we let the
associations start going forwards again (or into prepositional
phrases) from the selected subject-noun. This system will depend
on using additive (cumulative) rather than absolute activations.
Even if we go pre-wards, we can make sure that any query-word
begins with such a high activation that the process does not roam
too far away and desert the query-word.
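A rough sketch of such pre-ward searching with additive
activations; the boost and decay values are invented purely for
illustration:

  // Sketch: spread activation "pre-ward" from a highly charged
  // query-word, adding to (never replacing) each node's
  // activation, until a strong subject-noun flushes out.
  var QUERY_BOOST = 80;   // anchors the search to the query-word
  var DECAY = 0.5;        // each backward hop carries less charge

  function preWard(queryNode, strongEnough) {
    queryNode.activation += QUERY_BOOST;    // additive, not absolute
    var charge = QUERY_BOOST;
    var node = queryNode.pre;
    while (node) {
      charge = charge * DECAY;
      node.activation += charge;            // cumulative activation
      if (node.isNoun && node.activation >= strongEnough) {
        return node;                        // candidate subject-noun
      }
      node = node.pre;
    }
    return null;  // ran out of links without deserting the query
  }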
Arthur T. Murray
--
http://www.scn.org/~mentifex/mind4th.html Mind.Forth for robots;
http://www.virtualentity.com/mind/vb/ Mind.VB based on Mind.Forth;
http://www.geocities.com/mentifex/jsaimind.html JavaScript AI;
http://www.angelfire.com/nf/vision/mjava.html Mind.JAVA