
CAM, the next wave



Yet another crosspost..

-- Eugene

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


CAM, the next wave in computation.

In the last post I said hardware cellular automata machines
(CAMs) are the thing of the future.


Why?

Cellular Automata are (mostly 1d, 2d or 3d) arrays of few-state
elements. All elements simultaneously change their states in
discrete steps (t->t+1). Which value a cell takes at t+1 is
totally determined by its own and its neighbours' states at t.
That's all. Easy, eh?
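To make this concrete, here is a minimal sketch (in C, with names
of my own invention, not from any particular CAM package) of one
synchronous update step for a 1d binary CA with nearest-neighbour
coupling:

    #include <stddef.h>

    /* One synchronous step of a 1d binary CA on a ring of n cells.
       rule[] is an 8-entry lookup table indexed by the
       (left, self, right) bit triple. */
    void ca_step(const unsigned char *cur, unsigned char *next,
                 size_t n, const unsigned char rule[8])
    {
        for (size_t i = 0; i < n; i++) {
            unsigned char l = cur[(i + n - 1) % n];  /* left neighbour  */
            unsigned char c = cur[i];                /* the cell itself */
            unsigned char r = cur[(i + 1) % n];      /* right neighbour */
            next[i] = rule[(l << 2) | (c << 1) | r]; /* t -> t+1        */
        }
    }

Note the two buffers: every next[i] is computed from cur[] alone,
so all cells fire simultaneously; updating in place would smuggle
sequential semantics back in.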

Now for the good things. CAs are computationally universal, just
as Turing machines are. Since our current computers are
implementations of Turing engines, CAs are an alternative
(and better!) way of doing computation. Even better, CAs,
chameleonic as they are, can efficiently mimic our current
computation paradigm (wires/signals/logic gates), thus making
conversion real easy. But of course, they perform best when we
use them as CAs, namely as a distributed, finest-grained maspar
(massively parallel) computation paradigm.
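The classic demonstration of the wires/signals/gates mimicry is
Brian Silverman's Wireworld, a 4-state 2d CA in which you lay out
conductor cells, send electron pulses along them and wire up
working logic gates. A sketch of its per-cell transition (again
C, my naming):

    /* Wireworld: EMPTY stays EMPTY, an electron HEAD decays to a
       TAIL, a TAIL turns back into plain WIRE, and WIRE becomes a
       HEAD iff exactly 1 or 2 of its 8 (Moore) neighbours are HEADs. */
    enum { EMPTY, HEAD, TAIL, WIRE };

    unsigned char wireworld_cell(unsigned char self, int head_neighbours)
    {
        switch (self) {
        case HEAD: return TAIL;
        case TAIL: return WIRE;
        case WIRE: return (head_neighbours == 1 || head_neighbours == 2)
                          ? HEAD : WIRE;
        default:   return EMPTY;
        }
    }

A HEAD/TAIL pair travelling down a wire behaves like a clocked
signal pulse; gates are nothing but particular wire geometries.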

And they are really easy to design and to use. (Hang on, I'll
illustrate).


How are they better?

Look at the Turing engine: it has a head (CPU) with a state
(current register contents) and it does r/w on the tape (memory)
step by step. It moves real slooow in state space. The CA is all
autowrite tape, no head. (Or as many heads as there are tape
cells, each head seeing only its neighbours' states.) All cells
can change the next instant, thus hurling the CA wide across
state space in one cycle, leaving Turing a tiny speck in the
distance.

As real-world cycles take physical time, the CA obviously does a
lot more work during one operation cycle. The second big
argument is ultimate data locality. As signal propagation in
real space is limited, it makes sense to keep the data we need
in close vicinity. The third argument is simplicity: each cell
needs only a few bits and a _very_ simple circuit (one
hesitates to call it an ALU) to generate very complex behaviour.
Exactly this last fact makes them great candidates for the
autoassembling 3d-array molecular circuitry we are going to have
in a few decades. (Apart from not making much sense, it is not
exactly easy (=impossible) to build a Pentium from molecules..
brrr!).
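For scale: Wolfram's rule 110, a 1d binary CA which Wolfram
conjectured to be computationally universal, has a per-cell "ALU"
that is nothing but the eight table bits below (they plug
straight into the ca_step() sketch above):

    /* Rule 110: table entry k is bit k of the number 110, i.e. the
       new cell state for neighbourhood pattern k = l*4 + c*2 + r. */
    const unsigned char rule110[8] = { 0, 1, 1, 1, 0, 1, 1, 0 };

Eight bits of transition table per cell versus millions of
transistors per Pentium: that is the gap molecular self-assembly
has to bridge.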


What has gone before?

Though CAs were invented in the 50s by John von Neumann (yes,
_that_ von Neumann) in his quest for autoreplicating automata,
and research continued, albeit at a small rate, until the
mid-80s, they only became fashionable in the late 80s/mid 90s.
The recent big hype had one merit: good progress in CA theory was
made. Instead of just watching pretty pictures on the CRT and
speculating about Life (Conway's 1970 2d binary CA) being an
all-purpose computer, a number of very good researchers, e.g. S.
Wolfram of Mathematica fame ("Cellular Automata and Complexity",
Addison Wesley, forthcoming "A Science of Complexity", ibid),
the Santa Fe complexity racket, the MIT A-Life guys and lots of
small nests of research all round the world, have dragged CAs
into the focus of common attention and, even better, brought down
the warm rain of grants and industry research funds.

Though hesitantly, CA applications took wing. Explicit particle
simulations and the mapping of partial diffeqs onto rules allowed
fewer-resource modeling of hydrodynamics, dendritic growth,
biomorphology, stress and excitable media, sidestepping the poor
efficiency of the current approximation pipeline (diffeq -> appr.
lineq system -> matrix algebra). New insights were gained.
Groundbreaking cryptography applications were hinted at. New
hardware CAMs made soft circuitry/evolvable hardware possible,
and the possibility of drastically better neural accelerators
appeared on the horizon.
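The simplest specimen of the explicit-particle route is the HPP
lattice gas (Hardy/de Pazzis/Pomeau): four bits per cell, one per
particle moving N/E/S/W, with a collision rule so small it barely
deserves the name (a sketch, C as before):

    /* HPP lattice gas collision: a cell holds up to 4 particles,
       one per direction. An exactly head-on pair scatters into the
       perpendicular pair; any other configuration passes through.
       A streaming step then moves each bit one cell onward. */
    enum { N_ = 1, E_ = 2, S_ = 4, W_ = 8 };

    unsigned char hpp_collide(unsigned char c)
    {
        if (c == (N_ | S_)) return E_ | W_;
        if (c == (E_ | W_)) return N_ | S_;
        return c;
    }

Mass and momentum survive every collision, which is how
coarse-grained lattice-gas averages get at fluid behaviour with
no matrix algebra in sight.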


Implementing them critters.

It is obviously pointless trying to emulate CAs on a standard
computer: for the above reasons, it won't perform worth 2 cents.
But computer simulations have their merits: we can change the
neighbourhood definition, the starting pattern and the
transition table (rule) at whim and track/evaluate the
statespace behaviour in detail. In short, a great playground for
research.
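How cheap that playground is, a toy driver for the ca_step()
sketch above may show; rule and seed pattern are each a one-line
change (again, every name here is mine):

    #include <stdio.h>

    #define N     64
    #define STEPS 32

    /* from the earlier sketch */
    void ca_step(const unsigned char *cur, unsigned char *next,
                 size_t n, const unsigned char rule[8]);

    int main(void)
    {
        unsigned char rule[8], a[N] = { 0 }, b[N];
        unsigned char *cur = a, *next = b;
        int rule_no = 110;                  /* try 30, 90, 184 ... */

        for (int k = 0; k < 8; k++)
            rule[k] = (rule_no >> k) & 1;   /* unpack rule number  */
        cur[N / 2] = 1;                     /* single-cell seed    */

        for (int t = 0; t < STEPS; t++) {
            for (int i = 0; i < N; i++)
                putchar(cur[i] ? '*' : ' ');
            putchar('\n');
            ca_step(cur, next, N, rule);    /* one t -> t+1 step   */
            unsigned char *tmp = cur; cur = next; next = tmp;
        }
        return 0;
    }

One printed row per time step; swap rule_no and the universe gets
entirely different physics.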

There are two basic flavours of semiconductor CAM
implementations. (I skip the more exotic possibilities, nonlinear
optics, analogue semiconductor CAs and chemical diffusion CAMs,
since I don't know anything about them and they don't scale down
to the nanometer structures necessary for molecular circuitry
anyway.)

We can go for power and we can go for density. They are mutex.

Incidentally, the latter also boosts the possible transition
complexity _and_ allows higher topologies, but first things
first. Evidently, we are limited to 2d by semiconductor
lithography constraints. These constraints also vastly inhibit
the possible connectivity, which is why the first flavour of
implementation is limited to flat (2d) automata.

-- to be continued

Semiconductor implementation. (parallel, sequential)

Software implementation. (lookup, recursive lightcone hash)

Molecular circuitry implementation. (creation, operation, I/O to
macro)

Performance estimations.

Designing CAs. (gliders, signal/wire/logic (restricted gliders),
neutrino gliders/emitters/detectors, spatial (glider)encoded
messages, self-rep 3d tape Turings, mercury delay storage,
bootstrap, implementation layer hierarchy, mapping hypergrid
message passing architectures)

Basic CA lingo. (spacetime diagrams, light cone, statespace
kinetics, reversibility, attractors/separatrix, representation,
mapping, ergodicity, discrete hamiltonian, energy landscape
ruggedness, rulespace, phase transitions, edge of chaos).

Why CAs, automata networks (ANs) and NNs are virtually the same
thing. (hoo, boy)

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Comments?

-- Eugene

P.S. I know this be pop science, but this also be internet, no?