On Tue, 17 Feb 2004 12:15:11 +0000, David Longley
<David at longley.demon.co.uk> in comp.ai.philosophy wrote:
>In article <40311aad.36469116 at netnews.att.net>, Lester Zick
><lesterDELzick at worldnet.att.net> writes
>>On Mon, 16 Feb 2004 16:26:27 +0000, David Longley
>><David at longley.demon.co.uk> in comp.ai.philosophy wrote:
>>
>>>In article <4030e04a.30726781 at netnews.att.net>, Lester Zick
>>><lesterDELzick at worldnet.att.net> writes
>>>>
>>>>Hi Eray -
>>>>
>>>>I certainly agree with what you note here. The problem with arguments,
>>>>rationales, etc. is that they are only about as useful as people's
>>>>comprehension of them. I think they are conclusive once understood but
>>>>Neil considers them totally or mostly word salad and you seem to be
>>>>somewhere in the middle.
>>>>
>>>>But I'll say one thing for the arguments: they're brief. So they admit
>>>>of evaluation in pretty straightforward terms. The only complicated
>>>>rationale is for the "differences between differences" resolution of
>>>>Russell's paradox and I'll be posting more on that in a few days.
>>>>
>>>>The unfortunate thing is they don't have any obvious direct relevance
>>>>to immediate issues in ai as the subject stands. The only significance
>>>>I can think of at the moment is that these ideas indicate that the
>>>>idea of actual sentience in ai is really something more than programs
>>>>and whatever one chooses to project as ai in turing terms.
>>>>
>>>>This latter is more on the order of robotics or, in cognitive arenas,
>>>>what I refer to as artificial neural turologies - ants, which I find
>>>>nothing wrong with because it will probably prove more useful than
>>>>actual models of general cognition. However, as Jim Bromer points out
>>>>in his Re: Reasoning and AI yesterday, it has been the case that
>>>>designers and programmers have thought they were more or less
>>>>discovering and writing equations of cognitive behavior and sentience
>>>>with their programs, and that has definitely not proven to be the case.
>>>>So I consider that it would behoove ai architects to understand why, so
>>>>they can reconsider whether they are aiming at actual cognition or
>>>>just robotics, and the difference between the two.
>>>
>>>Go and find out about *discrimination learning*.
>>>
>>Yeah. David, I've become habituated to your presence in terms of the
>>clinical definitions offered by Neil Rickert. You have nothing to add
>>to these conversations except claims of extraneous proof. So unless
>>you have something new to offer, I suggest you find some other fields
>>to fertilize besides my own.
>>
>>Regards - Lester
>>
>
>A few questions: 1) Have you looked into what discrimination learning is
>all about and considered why I keep suggesting you look into it? 2) Have
>you had a look at the Bennett and Hacker book or even a review of it? 3)
>Do you see any similarities between your behaviour and that of Collins?
Everywhere I look, David, all I see are your transparent forensic
attempts to alter questions of truth and falsity of various issues into
redundant questions of behaviorist scholarship. I don't doubt you are
a behaviorist scholar. I do doubt you are relevant to discussions of
truth and falsity. At least you do not establish your relevance to
anything except the codex of behaviorist orthodoxy.
David, you are a blivit - that's ten pounds of shit in a five pound
bag. And like shit you just tend to hang around and are hard to clean
up. By your standards of trite habituation, Glen is only a
semi-blivit - 7 or 8 pounds of shit in a five pound bag - because he
occasionally has something germane to offer.
Regards - Lester