
Toward a Science of Consciousness 1998

Jim Balter jqb at sandpiper.com
Mon Apr 27 20:39:54 EST 1998


Stephen Harris wrote:
> 
> > >Anyway.  The "causal operations" which make computers worth money to
> > >a business are outside the computers themselves. (modlin)
> >
> > No, I disagree.  They are in the computer. (NR)
> 
> This question is off the beaten track, but this looks like an
> opportune moment to sneak it in.
> 
> I was reading about those mainframes which calculate many digits
> of pi. Maybe a billion digits in nine hours, anyway a lot.
> 
> I think I recall reading that the output will differ slightly
> from one model computer running the pi algorithm to another.
> Even I think if two of the same make and model are used.

Since your recollection might be incorrect, or what you read might have
been mistaken, a reference is in order.  There are probabilistic
calculation methods, such as tests of whether a number is prime, or of
whether a point is a member of the Mandelbrot set, that have a finite
chance of producing the wrong result, but the parameters are usually
cranked up so that that chance is vanishingly small.  If two different
machines yield different values for pi, at least one of them is wrong.
But "output" might refer to something else, such as intermediate
results that aren't part of the final value.
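
For what it's worth, a pi algorithm is exactly reproducible when carried
out in integer arithmetic.  A minimal Python sketch, using Machin's
formula rather than whatever those mainframes actually ran:

```python
# Digits of pi via Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239),
# carried out entirely in exact integer arithmetic.  Every conforming
# machine that runs this produces the identical digit string; there is
# nothing for "model differences" to act on.

def arctan_inv(x, digits):
    """arctan(1/x), scaled by 10**(digits+10); ten guard digits absorb
    the truncation error of the integer divisions."""
    scale = 10 ** (digits + 10)
    term = scale // x          # first Taylor term, 1/x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x         # next odd power of 1/x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(digits):
    pi_scaled = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    return str(pi_scaled // 10 ** 10)   # drop the guard digits

print(pi_digits(30))  # -> 3141592653589793238462643383279
```

Run it on as many makes and models as you like; any divergence in the
digits is a hardware or software *bug*, not a property of pi.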

> I am not sure for the reason behind this. Suppes has said
> something about: If a system has two or more degrees of
> freedom the system will be subject to sensitivity to initial
> conditions. So I am thinking in the pi calculation situation
> that due to random electronic or magnetic eddies perhaps
> from impurities in the components, this causes the divergence
> in the output of pi.

The calculation of pi is not chaotic, and so is not sensitive to
initial conditions.  Random electronic or magnetic eddies happen,
but clever engineers set thresholds high enough to keep those from
affecting results.
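
For contrast, here is what sensitivity to initial conditions actually
looks like, sketched with the logistic map (a standard textbook chaotic
system, not anything from the pi literature):

```python
# The logistic map x -> 4x(1-x) is a standard chaotic system.  Two
# starting points differing by one part in a trillion end up on
# completely unrelated orbits after a few dozen iterations.  A pi
# algorithm has no analogous behavior: every step is exactly
# determined by the previous one.

def logistic_orbit(x, steps):
    """Iterate the logistic map at r = 4 for the given number of steps."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = logistic_orbit(0.3, 50)
b = logistic_orbit(0.3 + 1e-12, 50)
print(abs(a - b))  # no longer tiny: the 1e-12 perturbation has been
                   # amplified by roughly a factor of 2**50
```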

> I have read something about this but am not sure I understood it,
> so would like clarification.

Since you are making the vague claim about something you *think* you
*recall*, you are the one who needs to provide the clarification.

> I think Turing may have made a
> comment about this to a schoolteacher.

About the value of pi being sensitive to initial conditions?
You think that you recall that you might have read this, eh?

> He studied dynamic systems
> in 1951.
> 
> So I wonder if some AI system, emulating a human, experienced
> an internal random fluctuation, might this cause an output if
> it happened at the right moment that could drastically differ
> from an expected human output? Say in answering a question.

Since humans themselves are subject to "internal random fluctuations",
how would we know?  This question would only make sense if "a human"
were completely and precisely specified.  Of course, this is not the
case.  Not only does "a human" not have a precise formal description,
but even if it did, its inputs, including the molecules in the food
it eats and the light, cosmic, and other rays impinging upon it, are
not formally specified.  Can any of these cause "drastic differences"
in the human's outputs?  Ask the survivors of Hiroshima.

> I have also heard about the need for a randomness generator for
> an AI system. Is this already inherent in the physical system?

Engineers try hard to make sure that randomness in the physical system
is not reflected at the logical level.
 
> I think I have read about the equivalency of CAs NNs indeterministic
> turing machines to turing machines. They can all be simulated
> because they are all 'computable'. Is this right?

Every nondeterministic Turing machine is equivalent to some deterministic
Turing machine "because" that has been proven.  You can show that
something is computable by showing that it is equivalent to some Turing
machine -- that follows from accepting the Church-Turing Thesis that
Turing machines capture our notion of "computable" (it's "merely" a
thesis because the word "computable" is not itself formally defined).
You can show that some class of things, such as nondeterministic Turing
machines, are all equivalent to Turing machines by clever use of logic
and mathematics -- the result is a "theorem".
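
The proof idea is simple enough to sketch: deterministically explore
every branch of the nondeterministic choice tree.  A toy Python
illustration, with an abstract step relation standing in for a real
Turing machine's transition table:

```python
# A deterministic machine simulates a nondeterministic one by exploring
# every branch of its choice tree, breadth-first, accepting iff *some*
# branch accepts.  Here the "machine" is just a function mapping a
# configuration to the set of possible successor configurations.
from collections import deque

def nd_accepts(start, step, accepting, max_steps):
    """Deterministic breadth-first simulation of a nondeterministic
    machine, with the search depth bounded by max_steps."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        config, depth = frontier.popleft()
        if accepting(config):
            return True
        if depth < max_steps:
            for nxt in step(config):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

# Toy "machine": from n, nondeterministically go to n+2 or n*3.
step = lambda n: {n + 2, n * 3}
print(nd_accepts(4, step, lambda n: n == 100, max_steps=20))
# -> True, e.g. via 4 -> 6 -> 8 -> 10 -> 30 -> 32 -> 96 -> 98 -> 100
```

The deterministic simulation may take exponentially longer than the
nondeterministic machine it simulates, but it computes the same answer.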

> Also I read a comment(not sure of its reliability) that connectionist
> models challenge the idea that turing machines are the only type
> of physically realizable computer. I'm not sure I worded that
> right so answer what it is supposed to mean. :-)

Turing machines are not physically realizable.  von Neumann machines
are physically realizable, but no one ever claimed that they are the
only type.  I can't figure out what possibly unreliable comment
involving connectionist models, Turing machines, and physical
realization you mean to refer to, if any.

> Finally, about analog chips. Apparently, high frequency discrete
> responses will not sufficiently model analog human brain processes.

Sufficiently for what?  Note that any *description* can only be provided
to a finite precision; thus, if digital simulation is in some sense not
sufficient to model some analog process, no one could consistently
communicate that fact, and so it would be a moot point -- making
digital simulation sufficient after all.

> Is that right?

Probably not.

> I see this analog issue coming up frequently. It
> seems to me that I have seen noise mentioned as a problem.

Yes, there's a lot of noise mentioned.  Definitely a problem.

--
<J Q B>


