CYCLOMORPHOSIS
The Functional Role of Development
Mervyn van Kuyen (mervyn at xs4all.nl)
SUMMARY
This article discusses the scope of the theory of cyclomorphosis by
introducing its properties at various levels of explanation.
Cyclomorphosis is defined as a set of inevitable functional aspects
of development, or online change, in nonlinear systems.
1 INTRODUCTION: Logic in the real world
Cyclomorphosis is a mathematical theory of historical consistency that
applies to all evolving nonlinear systems. It states that development has
to be 'logged' in an accessible dimension of a system's components (e.g.
local sensitivity to neuromodulation in the brain) and exploited to allow
the evolution of such a system in the first place, and that this task is
relatively more complex than the evolution or exploitation of the system's
logical functionality.
In the human brain, the latter can be altered permanently in minutes or
tuned to a new context, at will, in seconds. This does not imply that
logic itself is sparse, but that logical consistency is temporal and
historically defined, just like math itself. Development, on the other
hand, will be shown to be the key to both the long-term orientation and
the immediate stability of an evolving nonlinear system.
An application of this theory has been documented and published in the
article "Feedback in Knowledge-Oriented Neural Networks", which introduces
a model for the functional role of arousal in the human brain. The article
(presented at GRONICS'97, the Dutch conference for AI technology) and the
original source code are available at http://www.xs4all.nl/~mervyn.
2 FUNCTION: The medium and the message
Alan Turing was among the first to define mathematically what a function
is. He proved that all effectively computable functions can be expressed
by Turing machines, which are capable of reading symbols from a tape and
writing symbols or moving the tape according to their internal causal
chains:
[Figure: A Turing machine is a collection of chains of logical elements,
all looking at the same spot on a Turing tape and moving the tape or
writing a symbol according to an internal plan, e.g. if the symbol is 1
then move right. It can read a program from tape to add, copy and delete
numbers on the tape in an infinite number of ways.]
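Such a machine is easy to simulate. The following minimal Python sketch is
invented for illustration (the transition table below is a hypothetical
example, not the machine from the illustrated version): it walks right over
a block of 1s, appends one more, and halts.

    # A minimal Turing machine simulator. The transition table maps
    # (state, symbol) -> (new_state, write_symbol, move).
    def run(tape, table, state="start", pos=0, max_steps=1000):
        tape = dict(enumerate(tape))          # sparse tape: position -> symbol
        for _ in range(max_steps):
            symbol = tape.get(pos, " ")       # blank cells read as " "
            if (state, symbol) not in table:  # no applicable rule: halt
                break
            state, write, move = table[(state, symbol)]
            tape[pos] = write
            pos += {"R": 1, "L": -1, "N": 0}[move]
        return "".join(tape[i] for i in sorted(tape))

    # e.g. if the symbol is 1 then move right; on a blank, write 1 and halt.
    table = {
        ("start", "1"): ("start", "1", "R"),
        ("start", " "): ("halt",  "1", "N"),
    }
    print(run("111", table))  # -> "1111"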
Today, the distinction between program and data is alive and kicking, just
like it was before Turing's time. However, if we take a look at a Turing
machine in action, we realize that 'data' and 'program' do not, and
cannot, have distinguishable effects on a Turing machine. Their
distinction depends (in this machine) on a little "*" indicating the next
instruction and on the implicit definition that the next 'data' element
starts at the first non-space symbol to the right-hand side of the "#".
A Turing machine connects and relates (by following its internal causal
chains) symbols to symbols and expresses results within the same
population of symbols. Like a news medium it tries to find relevant
content (by relative spatial location or by reference matching) and
represent its findings within the same world.
Since its plan contains many-to-one connections, the steps of this
programmable Turing machine are irreversible. In retrospect, it is
impossible to find out which two numbers were added to produce a result
such as 101. The Turing machine itself is a medium that transforms the
message on the tape according to the message on the tape, effectively
giving it a historically defined current state and, at the same time, an
unknowable history.
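The many-to-one character is easy to demonstrate with a hypothetical
machine that adds two unary numbers by erasing the separator between them;
distinct inputs collapse onto the same result, so the result alone cannot
reveal its own history:

    # Many-to-one steps erase history: once two unary numbers have been
    # added, the distinct inputs are indistinguishable from the result.
    def unary_add(tape):
        return tape.replace("#", "")  # erase the separator

    print(unary_add("11#111"))  # -> "11111"
    print(unary_add("1111#1"))  # -> "11111": same result, different history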
One might be tempted to believe that adding a 'metamachine' that
continuously stores the entire content of the tape would make history
reversible. However, such a machine would have an unknowable history as
well, which makes the approach fruitless.
3 INTENTION: Why history matters
The Turing machine presented above has a very simple and common end
condition: if the symbol to the right of the "*" symbol is blank, the
machine halts. Similar conditions are used in most programs: a machine
stops when the user gives the right signal, and routines stop when lines,
pages or files have been read to the end. In all cases, the symbols on the
tape have to be ordered in a particular way to propel the content of the
tape towards a particular end condition or reference state.
A rather unusual type of Turing machine does not require such an initial
ordering. Instead, it looks for the program that could have produced the
given data from a fixed initial condition. It could construct such a
program by writing symbols at random for an 'uncrashable' submachine.
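A rough sketch of such a search, reusing the run() simulator above and
treating unknown (state, symbol) pairs as halting (so no random candidate
can 'crash'), could look like this; the state set, step limit and number
of tries are arbitrary choices made for illustration:

    import itertools, random

    # Hypothetical sketch: search for a transition table that turns a fixed
    # initial tape into the given data. The step limit guards against
    # non-halting candidates; the rule-less 'halt' state stops a run.
    def search(initial, data, states=("start", "a"), symbols=("1", " "),
               tries=100000, seed=0):
        rng = random.Random(seed)
        keys = list(itertools.product(states, symbols))
        targets = states + ("halt",)  # 'halt' has no rules of its own
        for _ in range(tries):
            table = {k: (rng.choice(targets), rng.choice(symbols),
                         rng.choice("RLN")) for k in keys}
            if run(initial, table, max_steps=50).strip() == data:
                return table
        return None

    table = search("1", "111")  # look for a machine that grows "1" into "111"
    print(table)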
This demonstrates the powerful concept of the universal Turing machine: a
Turing machine can simulate not only any other machine, but changing or
evolving machines as well. More importantly, it does not take an infinite
tape to evolve a machine that is more complex than the machine that is
evolving it. The point of this remark is not to construct a new
homunculus-based theory of intelligence, but to expose a rather unnatural
property of Turing machines that is easily overlooked.
Turing machines that are being evolved 'on tape' cannot operate on any
data as long as they are being manipulated. The reason for this is simply
that there is only one read and write head. In the real world, composite
systems do not have such an artificial time-dividing constraint.
As a result, real-world systems change while they are transforming their
content. If these systems are selected for the end conditions they
produce, then the onset of a change would serve a crucial role in the
function they perform. In other words, their logical structure (as if it
were a Turing machine) would be an inadequate representation of their
functional complexity. The human brain, which is a highly recurrent
structure, obviously faces this problem: some of its content must be
resident before and after any synaptic change.
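To make the contrast concrete, here is a toy recurrent network (invented
for this article, not the published model) whose weights are updated by a
Hebbian rule at every step, while those same weights are transforming the
current state; there is no separate phase in which the machine is taken
'off the tape' to be modified:

    import numpy as np

    # Toy illustration: a system that changes while it computes. Every step
    # both transforms the state (content) and updates the weights
    # (structure); the two are never separated in time.
    rng = np.random.default_rng(0)
    n = 8
    W = rng.normal(0, 0.1, (n, n))    # recurrent weights, the 'medium'
    x = rng.normal(0, 1.0, n)         # current state, the 'message'

    eta = 0.01                        # Hebbian learning rate
    for step in range(100):
        x_new = np.tanh(W @ x)         # transform the content...
        W += eta * np.outer(x_new, x)  # ...while the medium itself changes
        W *= 0.999                     # mild decay keeps the weights bounded
        x = x_new

    print(np.round(x, 3))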
Without a solution for this problem, all the structure a brain acquires in
the quest for its functional enhancement will be useless as soon as the
content is gone. After all, the resulting function is inevitably different
from the 'composite' function that made the synaptic change a fruitful one
in the first place.
4 DEVELOPMENT: What trees and brains have in common
Although Turing machines naturally avoid this crucial real-world problem,
there is nothing that prevents its introduction into the powerful
experimental environment they provide. In fact, Alan Turing's profound
knowledge of chemistry was probably the force behind his talent for
modelling various degrees of interaction in natural systems.
To this day his chemical reaction-diffusion models are being used to
explain the formation of patterns in chemical, biological and logistical
contexts:
[Figure: Turing's reaction-diffusion equations explain the formation of
patterns found in living organisms, non-equilibrium reactions and stirred
composites.]
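A minimal numerical sketch of this idea is shown below; it uses the
Gray-Scott system, a standard member of the reaction-diffusion family that
Turing's work founded, with illustrative parameters rather than Turing's
own equations. A small local disturbance in a uniform one-dimensional
medium grows into a persistent pattern:

    import numpy as np

    # One-dimensional Gray-Scott reaction-diffusion; parameters illustrative.
    n, steps = 200, 10000
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

    u = np.ones(n)
    v = np.zeros(n)
    u[n//2 - 5:n//2 + 5] = 0.5  # small local disturbance
    v[n//2 - 5:n//2 + 5] = 0.5

    def lap(a):  # discrete Laplacian with periodic edges
        return np.roll(a, 1) + np.roll(a, -1) - 2 * a

    for _ in range(steps):
        uvv = u * v * v
        u += Du * lap(u) - uvv + F * (1 - u)
        v += Dv * lap(v) + uvv - (F + k) * v

    # a crude text rendering of the resulting pattern
    print("".join("#" if x > 0.2 else "." for x in v))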
To get some hands-on experience, pour two different liquid soaps or
lemonade extracts onto the transparent side of a CD case and squeeze the
composite by pressing and releasing a second case. Soon a single 'release'
will produce wonderful growing fractal, leaf-like and neuron-like shapes.
Unlike the Turing machines discussed above, these structures can be
reduced to any prior state by moving the surfaces towards each other.
Although this might seem practical only in the artistic sense, it directs
attention to the fact that developmental history can easily be embodied in
locally preserved dimensions (like the diameter of a branch).
As we concluded earlier, the recurrent structure of the brain is nothing
but an end product that is, by itself, an inaccessible composition of
temporally related functions. These functions can only be exploited by
proposing a mechanism that can reduce this composition to various subsets
of itself, or more accurately: 'release' increasingly larger subsets,
including ever more recent structures.
This does not imply that different subsets serve different functions for
the organism involved, but that the integrated function they embody in
time simply cannot be performed without such a developmental replay
mechanism. This mechanism, cyclomorphosis, has been successfully applied
to the online growth of highly recurrent neural networks. The performance
of these networks was defined as the match between a prediction constructed
by the network and the actual input (a repetitive sequence of four 6-bit
patterns):
[Figure: The graph shows the learning curve in time, using a (vertical)
representation of the network's activity instead of a single line. The
onset of cyclomorphosis shows as pattern disintegration and a resulting
loss of performance in the curve. The dots at the bottom right indicate
that the network regresses to an earlier observed state on the left. In
effect, cyclomorphosis keeps performance at levels the network cannot
naturally maintain; a simple reset does not suffice.]
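The replay idea itself can be caricatured in a few lines. In the
hypothetical sketch below (an invented illustration of 'releasing
increasingly larger subsets', not the published model), every connection
carries an age, and a replay pass re-enables connections from oldest to
newest instead of resetting them all at once:

    import numpy as np

    # Hypothetical sketch of developmental replay: each connection remembers
    # when it was created, and a replay pass runs the network while admitting
    # connections from oldest to newest, so older sub-networks act first.
    rng = np.random.default_rng(1)
    n = 16
    W = rng.normal(0, 0.3, (n, n))    # full, mature weight matrix
    age = rng.random((n, n))          # 0 = oldest connection, 1 = newest

    def replay(x, stages=5, steps_per_stage=3):
        for s in range(1, stages + 1):
            mask = age <= s / stages  # release an ever larger, older-first subset
            for _ in range(steps_per_stage):
                x = np.tanh((W * mask) @ x)
        return x

    x = rng.normal(0, 1.0, n)
    print(np.round(replay(x), 3))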
EPILOGUE: The brain in Turing's terms
The brain is not some compiled program but a 'flexible interpreter'. In
computational terms it is a flexible but increasingly rigid protocol,
implying that its capability is enormous but its flexibility is not
infinite. It is a language that adjusts, blossoms, suffices and becomes
obsolete. We can twist and bend it, but that will only hurt the medium's
historical consistency, reducing things to what they are. Things.
In simulations, historical consistency has been observed to be more
important to the stability of large nonlinear systems than any immediate
logical consistency. More than anything else, neural connections serve
historical consistency: as individual connections degenerate, the brain's
function is transformed, enhancing the match between the brain's 'inputs
and outputs'. Where I set out to find intelligence, I found a phenomenon,
cyclomorphosis, that puts a special kind of stable system in first place
when it comes to complexity, while leaving room for a lot of intention at
the same time.
To summarize, cyclomorphosis is a mathematical theory of historical
consistency that applies to all evolving nonlinear systems. It states that
historical consistency has to be 'embodied' in some dimension (e.g. local
sensitivity to neuromodulation in the brain) and exploited to allow the
functional evolution of such a system, and that this task is relatively
more complex than the functional evolution of a logically consistent
medium. It is the latter that can be permanently altered in minutes and
tuned to a context in seconds. This does not imply that logic itself is
sparse, but that logical consistency is temporal and historically defined,
just like math itself. Obviously, Alan Turing knew that from the start.
FURTHER READING
The Alan Turing Homepage is an excellent collection of references and
discussions of Alan Turing's life and work, created by Andrew Hodges, the
author of the Turing biography "Alan Turing: The Enigma".
My article "Feedback in Knowledge-Oriented Neural Networks", presented at
GRONICS'97, the Dutch conference for AI technology.