
Re: networked processors, neurons and C


Dear MISC readers:

"Richard S. Westmoreland" wrote:
> Wouldn't a configurable topology raise a lot of overhead in
> multithreading?  (Assuming an OS would allow it once it
> was booted)

I don't see why.  One of the nice things about the Linda
approach is that the number of nodes and the topology can
change while the system is in operation, even to the point of
unplugging the master node.  The system does dynamic load
balancing when executing general-purpose symmetric parallelism,
so it can adapt very quickly to real-time changes without
carrying much overhead to do so.  Of course, we have never been
talking about fine-grained parallelism.  It is also a way to
provide redundancy for fault-tolerant systems.
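
To make the Linda idea concrete, here is a minimal sketch in C
of out/in primitives over a shared tuple space.  The fixed-size
table, the integer payloads, and the single address space are
simplifications I made to keep the sketch short; real Linda
matches typed tuple patterns and distributes the space across
nodes.

  /* Minimal sketch of Linda-style coordination, simulated in
     one address space.  A "tuple" here is just a tagged
     integer so the primitives stay short. */
  #include <stdio.h>
  #include <string.h>

  #define SPACE_SIZE 64

  typedef struct { char tag[16]; int value; int live; } tuple;
  static tuple space[SPACE_SIZE];

  /* out: deposit a tuple into the space (any node may do this;
     if the space is full the tuple is silently dropped in this
     toy version) */
  void out(const char *tag, int value) {
      for (int i = 0; i < SPACE_SIZE; i++)
          if (!space[i].live) {
              strncpy(space[i].tag, tag, sizeof space[i].tag - 1);
              space[i].value = value;
              space[i].live = 1;
              return;
          }
  }

  /* in: withdraw a matching tuple; returns 0 if none is
     present.  A worker that finds no work simply tries again
     later, which is why nodes can join or leave while the
     system runs. */
  int in(const char *tag, int *value) {
      for (int i = 0; i < SPACE_SIZE; i++)
          if (space[i].live && strcmp(space[i].tag, tag) == 0) {
              *value = space[i].value;
              space[i].live = 0;
              return 1;
          }
      return 0;
  }

  int main(void) {
      out("work", 42);          /* master deposits a task   */
      int v;
      if (in("work", &v))       /* any free worker grabs it */
          printf("worker got %d\n", v);
      return 0;
  }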

Since the F21 has so many I/O pins, which can control external
switches or serve as software-driven I/O lines, there is a
great deal of flexibility in building multiprocessing systems.
Nodes could connect in a star, a ring, a grid, a mesh, a
multi-dimensional hyper-geometry, reconfigurable geometries, or
all of the above at once, with software able to select
whichever topology is best at any given time.
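
As a toy illustration, a software-selected topology can be as
little as a neighbor table per node that gets rewritten at run
time.  The node count, the four-link limit, and the routine
names below are inventions for the example, not F21 specifics.

  /* Hypothetical sketch: if pins or external switches select
     which neighbor a link reaches, topology becomes a small
     table that software can rewrite while the system runs. */
  #include <stdio.h>

  #define NODES 4
  #define LINKS 4          /* links per node, -1 = unused */

  static int neighbor[NODES][LINKS];

  /* connect nodes in a ring: node i links to i-1 and i+1 */
  void make_ring(void) {
      for (int i = 0; i < NODES; i++) {
          neighbor[i][0] = (i + 1) % NODES;
          neighbor[i][1] = (i + NODES - 1) % NODES;
          neighbor[i][2] = neighbor[i][3] = -1;
      }
  }

  /* connect nodes in a star: node 0 is the hub */
  void make_star(void) {
      for (int i = 0; i < NODES; i++)
          for (int l = 0; l < LINKS; l++)
              neighbor[i][l] = -1;
      for (int i = 1; i < NODES; i++) {
          neighbor[0][i - 1] = i;   /* hub to spoke */
          neighbor[i][0]     = 0;   /* spoke to hub */
      }
  }

  int main(void) {
      make_ring();                  /* start as a ring        */
      make_star();                  /* reconfigure on the fly */
      printf("node 1 link 0 -> node %d\n", neighbor[1][0]);
      return 0;
  }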

I have been studying how a single small bit of code could do
the above by implementing evolving neuronal groups in software,
as described by G. Edelman.  It might be the best way to
utilize the neuron-like architecture in the serial/network and
parallel interfaces on the chip.  It also gives the clearest
picture of the requirements for the processing and
communications hardware.  How are nodes networked in neuronal
groups?  It works pretty well, doesn't it?  It is efficient
enough for you to read and possibly understand what I intended
to convey. ;-)
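
Here is a very rough C sketch of the selectionist idea: groups
compete to respond to an input, and a value signal amplifies
the active connections of the winner.  The group count, the
winner-take-all competition, and the update rule are all my
simplifications; Edelman's model of reentrant maps is far
richer.

  #include <stdio.h>

  #define GROUPS 4
  #define INPUTS 3

  static double w[GROUPS][INPUTS] = {
      { .2, .1, .3 }, { .4, .2, .1 },
      { .1, .5, .2 }, { .3, .3, .3 }
  };

  /* respond: each group's activity is a weighted sum of the
     input; the most active group wins the competition */
  int respond(const double *x) {
      int best = 0; double besta = -1.0;
      for (int g = 0; g < GROUPS; g++) {
          double a = 0.0;
          for (int i = 0; i < INPUTS; i++) a += w[g][i] * x[i];
          if (a > besta) { besta = a; best = g; }
      }
      return best;
  }

  /* amplify: strengthen the winner's active connections when
     the value signal is positive (a crude selectionist
     update) */
  void amplify(int g, const double *x, double value) {
      for (int i = 0; i < INPUTS; i++)
          w[g][i] += 0.1 * value * x[i];
  }

  int main(void) {
      double x[INPUTS] = { 1.0, 0.0, 1.0 };
      for (int t = 0; t < 5; t++) {
          int g = respond(x);
          amplify(g, x, 1.0);   /* reward whichever group won */
      }
      printf("winner after selection: group %d\n", respond(x));
      return 0;
  }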

This kind of application demonstrates the intended target
better than anything.  It also demonstrates the key issue that
Dr. Koopman discusses in the context of expert systems.  They
can be written in C, Forth, OPS5, Java, whatever.  The point is
that the kind of executable data structures that good software
will generate look very similar regardless of the source
language or target architecture, and they can be implemented
about as efficiently on a stack machine as on a C machine.  C
itself cannot be as efficient on a simple stack machine as it
can be on the expensive architectures that make C happy, but
the bottom line is that you don't need that unless you "must"
use the machine as a C machine to do other things.
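
One way to picture such an executable data structure: a rule
tree whose nodes are executed directly instead of being
interpreted.  In C the nodes carry function pointers; on a
stack machine the same structure becomes threaded code where a
node costs little more than a call or a conditional branch.
The thermostat rules below are invented for the illustration.

  #include <stdio.h>

  typedef struct node node;
  struct node {
      int (*test)(int fact);    /* predicate on the fact     */
      const node *yes, *no;     /* where execution continues */
      const char *verdict;      /* set on leaf nodes         */
  };

  static int is_hot(int t)  { return t > 30; }
  static int is_cold(int t) { return t < 10; }

  static const node fan  = { 0, 0, 0, "turn on fan"    };
  static const node heat = { 0, 0, 0, "turn on heater" };
  static const node idle = { 0, 0, 0, "do nothing"     };
  static const node cold = { is_cold, &heat, &idle, 0 };
  static const node root = { is_hot,  &fan,  &cold, 0 };

  /* run: walk the structure, executing each node's test; the
     data structure itself is the program */
  const char *run(const node *n, int fact) {
      while (n->test)
          n = n->test(fact) ? n->yes : n->no;
      return n->verdict;
  }

  int main(void) {
      printf("%s\n", run(&root, 35));  /* -> turn on fan    */
      printf("%s\n", run(&root, 5));   /* -> turn on heater */
      return 0;
  }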

You can generate equivalent code on a C machine, and you can
execute equivalent code on a C machine.  The issue is that
using C is what imposes cost-performance requirements about
10^6 higher than not using it....
This is where you find the references showing that this type of
code on cheap stack machines is about 10^6 more cost-performance
efficient than the "industry standard" approach of using
general-purpose, popular software with extra layers of
inefficiency and expensive hardware that is really much more
complex than you need to do efficient AI.
The 10^6 and 10^7 numbers were from before Chuck announced that
he wanted to add another 10^3 or 10^4 to the equation by making
highly optimized hardware in custom VLSI; multiplied together,
those factors land somewhere in the 10^9 to 10^10 range.  It
was also before people like Dr. Brad Rodriguez published all
those lovely tricks in Forth to make distributed knowledge
systems even more efficient.  And it was before I discovered
what Edelman had published.

It still doesn't quite add up to the level of human
intelligence in cheap appliances, but it is getting much
closer.  Check out Hans Moravec's latest book, part of which is
online, and do the math for where an F21 SMP would fit if
produced in large volume.
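
For a rough sense of that math, with loose numbers taken on
assumption: Moravec puts human-level performance at around 10^8
MIPS, and F21 has been advertised at roughly 500 MIPS peak per
node, so 10^8 / 500 = 200,000 nodes, before any discount for
sustained versus peak throughput or for communication overhead.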

Jeff Fox