This Usenet article is a primary Mind.Forth document.
Arthur Ed LeBouthillier wrote on Thu, 19 Aug 1999:
> On Thu, 19 Aug 1999 01:32:39 -0400, "John Passaniti"
> <jpass at rochester.rr.com> wrote:
>> 4. Someone with the ability to decode what it is
>> you are trying to say manages to do what none of your
>> text and ASCII diagrams have done so far-- clearly
>> describe your AI methodology. They'll use common terms
>> instead of neologisms. They'll document and describe
The Art of Computer Mindmaking
(http://www.geocities.com/Athens/Agora/7256/acm.html)
is only the beginning of full documentation.
>> giving the framework and theory of operation. And people
>> will credit *them* for the work, because they made it
>> understandable. Bravo for them.
> As best I can tell, he's trying to build some sort of agent
> architecture. Of course, Mr. Murry is not the originator of
> the idea of agents nor is he the only one developing them.
Creating agents is only incidental to the goal of creating a mind.
> I have not examined the full extent of Mr. Murry's work,
> but I would agree with you that he is obfuscating the
When John Passaniti originally and rightfully complained
about the obfuscation (i.e., using Japanese and Greek names)
in late 1998 on Usenet, I soon thereafter took John's advice
to heart and I made all the variable names as clear as possible.
I confess that I purposefully renamed some MF subroutines with
names like "HCI" and "SECURITY" so that they would shield me
when I posted about them on Usenet in comp.security.misc and
comp.human-factors. I am still struggling with the idea of
expanding my brief variable names into fully spelled-out forms.
> issue by using arcane terminology to describe what it is
> that he is doing. Additionally, he has made few to no
> claims of its capabilities other than that it is "AI"
> and can represent that "horses like hay."
Mind.Forth is a linguistic AI growing by accretion of features.
MF has reached its Amiga size limit and is approaching its
IBM-clone size limit, so it may migrate soon to a 32-bit Forth.
> It appears that Mr. Murry's agent has some kind of knowledge
> base allowing him to assert statements like "cats like milk."
> That's good. World modeling capabilities are vital for an agent.
> Although he has used the word "ontology" and made references
> to Cyc, he has not described the nature of his ontology or
In my admittedly amateurish view, Cyc has been a worthwhile
advance in human knowledge but has unforeseeably turned out
to be the wrong approach to ontological knowledge bases (KBs).
> its completeness (or limitations). He hasn't described the
> basis for his knowledge representation or any limitations
Paraphrasing Houseman, "We make knowledge the old-fashioned
way -- we *innervate* it." Mind.Forth creates a mindgrid of
concepts whose *interrelations* constitute the Cyc-ish KB.
As it turns out in MF, most such interrelations are mediated
by a verb: agent "A" does action "V" to object "O". Verbs
have backward and forward associative tags to their subjects
and objects -- tags currently labeled "pre" and "seq" in MF.
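The agent-verb-object scheme above can be sketched as follows. This
is a hypothetical illustration in Python, not the actual Mind.Forth
code; the class and function names are my own, and only the tag
labels "pre" and "seq" come from the text.

```python
# Hypothetical sketch of a verb concept carrying "pre" and "seq"
# associative tags, linking it backward to its subject and forward
# to its object, as the text describes.

class Concept:
    def __init__(self, word):
        self.word = word
        self.pre = None   # backward tag: the verb's subject
        self.seq = None   # forward tag: the verb's object

def assert_fact(subject, verb, obj):
    """Record 'subject verb object' by tagging the verb node."""
    verb.pre = subject
    verb.seq = obj

cats = Concept("cats")
like = Concept("like")
milk = Concept("milk")
assert_fact(cats, like, milk)

# Walking the tags from the verb recovers the whole statement.
print(like.pre.word, like.word, like.seq.word)  # cats like milk
```

The point of tagging the verb, rather than the nouns, is that the
verb mediates the interrelation: from any verb node, both ends of
the statement are reachable in one step.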
> inherent therein. He hasn't described such aspects as
> temporal representation or other important elements in
The variable "t" keeps constant track of time in Mind.Forth.
The variable "midway" for "roughly halfway back in lifespan"
is extremely important in Mind.Forth as a programmer's tool
to limit the duration of searches and to permit the eventual
recycling of system memory -- on the theoretical basis that
truly important memories will be brought forward by means of
"reentry" and will therefore not be lost to AI consciousness.
> a general purpose ontology.
> He has yet to have demonstrated any kind of inferencing
> capabilities or to have comprehensively described the
> components of said system in the manner that such things
> are normally discussed (and understood). He hasn't described
> the capabilities or limitations of the reasoning system.
Until roboticists put additional sensory channels into Mind.Forth
or its fellow bandwagoneers, Mind.Forth will only know factual
statements about things. Direct sensory experience of the
world will enable a more "hands-on" tangible knowledge.
As the original programmer, I (Arthur M.) am especially
eager to incorporate linguistic negation (no - not - never)
as an essential part of reasoning (and because I see a way
to do it simply by negating verbs) and to code in the use
of pronouns retaining a logical hold on their antecedents
in thought or discourse by means of linked activations.
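The proposed verb-negation scheme might look like the following. This
is a speculative sketch, assuming (as the text suggests but does not
specify) that negation is stored as a flag on the verb concept itself
rather than as a separate concept:

```python
# Hedged sketch of negation carried as a flag on the verb node,
# rather than as its own concept; names here are hypothetical.

class Verb:
    def __init__(self, word, negated=False):
        self.word = word
        self.negated = negated

def render(subject, verb, obj):
    """Express a subject-verb-object fact, honoring the negation flag."""
    neg = "do not " if verb.negated else ""
    return f"{subject} {neg}{verb.word} {obj}"

print(render("cats", Verb("like"), "milk"))
print(render("cats", Verb("like", negated=True), "water"))
```

Keeping the flag on the verb means a negated fact and its affirmative
counterpart share the same three concept nodes, differing only in the
one bit on the mediating verb.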
> Are there classes of knowledge that can't be reached with
> the inferencing system?
The plan is to emulate/simulate/model the human inferencing
system, so the question remains as open for Forthminds as for us.
> One component that appears missing to me, based on the little
> that I have looked at his work is that it lacks some kind of
> teleological (goal) representation. How does the system
> represent goals? How does a programmer provide the goals?
I believe that a programmer (or an evolutionary ecosystem)
provides goals by forcing the organism to iterate repeatedly
through behavior sequences and thought sequences until logical
end-states (satieties?) are reached.
> How does the system reason about goals? Are there limitations
> on what can be inferred? How efficient is the inference engine?
Whosoever follows any Mind.Forth link into the "Cyborg Syllabus"
and down into the archival "Nolarbeit Theory Journal" documents
will find a lengthy discussion about the nature of human will,
or volition. Central to the Mind.Forth mind-model is the notion
that verbal linguistic reasoning "strecthes out" the causative
links between cerebral goals and motor options -- options
carried forward from infancy to maturity like rungs on a ladder.
> I wish him well with his work, but I think that if he used
> some of the standard techniques and terminologies to describe
> his work, he would probably be better respected and his work
> might be taken more seriously.
I (Arthur) harbor no illusions in my John C. Eccles world-view
of "facing reality": I accept that only as my coded product
gets better will it be taken more seriously.
> One suggestion might be that he write a paper using standard
> terminology that compares and contrasts the Mentifex agent
> architecture against the structure and capabilities of other
For lack of knowledge about those other architectures, I (Arthur)
must simply keep on coding and enhancing the linguistic Mind.Forth
structures (negation; pronouns; questions; etc.) and hope for
a dissipation, a dissemination, a spread of the AI principles
embodied in MF into other human minds, where anything valuable
will take root and contribute to the general AI future.
> agent architectures. That could be an interesting article;
> I know I'd like to read it.
Thank you, Arthur, for a most thought-provoking treatise. - Arthur
> Arthur Ed LeBouthillier