
Knowledge Base Or Bust

Owen Nieuwenhuyse nieuweo at ezysurf.co.nz
Sun Dec 16 05:39:06 EST 2001


Arthur T. Murray <uj797 at victoria.tc.ca> wrote in message
news:3c1a73be at news.victoria.tc.ca...
> KNOWLEDGE BASE OR BUST
>
> Today, after a hiatus of two and a half months, we resume coding
> the AI Mind in JavaScript (http://mind.sourceforge.net/index.html).
> First we change the "description" meta-tag to read:  "Please find
> room for this AI as mind.html on your own website so that others
> may obtain it."  The idea is to let the AI spread memetically
> across the Web.
>
> Our next goal is to obtain a working knowledge base (KB).  That
> goal is immensely worthwhile because, once our KB works reliably,
> it will be like the Cyc project (http://www.cyc.com), and we will
> be able to pack the KB full of all kinds of mundane or expert
> knowledge.
>
> The source code left over from 29sep2001 still gives a transcript
> with faulty KB reasoning:
>
> Human:  cats eat fish
> Robot:  CATS EAT FISH
>         32   28  10
>
> Human:  fish eat bugs
> Robot:  FISH EAT BUGS
>         20   41  10
>
> Human:  bugs eat dirt
> Robot:  BUGS EAT DIRT
>         20   41  10
>
> Human:  cats eat
> Robot:  CATS EAT BUGS
>         10   41  10
>
> The AI gives the wrong answer in the fourth exchange because the
> concept of "bugs" has retained too high an activation.
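A minimal, purely illustrative sketch of that failure mode (not the actual
mind.html code): each noun concept carries an activation level, each new
input strongly re-activates its own words while older activations fade only
slowly, and the reply picks the most active noun as the missing object.
The names and numbers below are assumptions chosen to reproduce the quoted
transcript.

    // Illustrative sketch only; not taken from mind.html.
    var nouns = { CATS: 0, FISH: 0, BUGS: 0, DIRT: 0 };  // activation levels

    // Facts learned from the three input sentences.
    var kb = [
      { subj: "CATS", verb: "EAT", obj: "FISH" },
      { subj: "FISH", verb: "EAT", obj: "BUGS" },
      { subj: "BUGS", verb: "EAT", obj: "DIRT" }
    ];

    // Hearing a sentence strongly re-activates its words; older
    // activations fade only a little.
    function hear(subj, obj) {
      for (var n in nouns) nouns[n] = Math.max(0, nouns[n] - 5);
      nouns[subj] = 20;
      nouns[obj] = 10;
    }

    // Completing "CATS EAT ..." gives a small boost to the stored
    // object, then picks whichever noun is most active overall.
    function complete(subj, verb) {
      for (var i = 0; i < kb.length; i++) {
        if (kb[i].subj === subj && kb[i].verb === verb) nouns[kb[i].obj] += 3;
      }
      var winner = null;
      for (var n in nouns) {
        if (n !== subj && (winner === null || nouns[n] > nouns[winner])) winner = n;
      }
      return subj + " " + verb + " " + winner;
    }

    hear("CATS", "FISH");   // cats eat fish
    hear("FISH", "BUGS");   // fish eat bugs
    hear("BUGS", "DIRT");   // bugs eat dirt
    // BUGS still holds activation 20 from the last sentence, while FISH
    // has faded to 15 and gets only a +3 boost, so the answer comes out:
    console.log(complete("CATS", "EAT"));   // "CATS EAT BUGS"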
>
> I ran the above four exchanges in Transcript mode only, so that
> I could inspect the logic of the AI's reasoning.  Then I went
> back and used the same inputs in Troubleshoot mode.  I obtained
> the same results, but this time each line of output had the
> concept-activation numbers interspersed among the English words
> of the output.  Looking at those three-number sets, I realized
> that I should have written the numbers down beneath each of the
> four lines of output transcribed above, so I ran the same inputs
> all over again and wrote the numbers down in red.  The results
> appear above.
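Again purely illustrative: one way a Troubleshoot mode might interleave each
word of a reply with its concept's activation level, producing three-number
sets like the ones quoted above.  The function name and output format here
are assumptions, not taken from mind.html.

    // Illustrative sketch only; not taken from mind.html.
    function troubleshootLine(words, activations) {
      var parts = [];
      for (var i = 0; i < words.length; i++) {
        parts.push(words[i] + " " + activations[i]);  // word, then its activation
      }
      return parts.join("  ");
    }

    // The faulty fourth exchange, with its quoted activation levels:
    console.log(troubleshootLine(["CATS", "EAT", "BUGS"], [10, 41, 10]));
    // -> "CATS 10  EAT 41  BUGS 10"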
>
> Well, what do you know?  The output activation-levels were the
> same (20, 41, 10) in the middle two exchanges but different in
> the first (32, 28, 10) and in the fourth (10, 41, 10), where I
ON (Owen Nieuwenhuyse):
What do you use these "activation levels" for?
Do you have a simple explanation of how they relate to axiom processing?
Do you have a detailed Q/A dialog structure?




