
Re: multi/tasking/processing


On Wed, 16 Aug 1995, Francois-Rene Rideau wrote:

> ">" is Eugen Leitl
> ">>" is Penio Penev
> 
> [About Eugen's proposal for two register banks, one for the OS]
> > But at least the OS, which gets called very often, does not need
> > to be swapped out.
> [...]
> > E.g. imagine we are calling OS.SendMessage() all the time (in
> > an OO environment, probably the most often used OS call).
> > On some machines the opcode would mean CALL OS.SendMessage(),
> > on some a dedicated instruction OS.SendMessage(). No need to
> > recompile.
>    Why should the OS be called often ? This is a very Unixish and
> completely silly idea ! If you need faster OS switch, have it switched

Each time the network, video, or I/O processor generates an interrupt.
Each time the next task must be switched. Each time we send a message.
(The last happens a lot.)

Call it OS, call it whatever. Certain methods, which I choose to
associate with the "OS" object (node13.OS.flush_1(objectID)), wind
up being called a lot.

> to SRAM: if you need fast IO, use a specialized coprocessor, not tight

Agreed, the speedup might not be too dramatic, but we can't shoot
the OS if it is write-protected. Surely this must be desirable?
I don't need fast I/O beyond what the F21 already has, though
I would wish for 8-bit precision. (..Now hear the beggar galloping.)

> interrupt driven code; if you need to pass messages quickly, pass them
> directly, and to not waste the stupid overhead of a micro-kernel.

Microkernels are too big; a nanokernel is the thing. And how
do I send a message to an object transparently? I don't know
where the object currently is, hence OS.Send(ObjID, msg);
is the only clean way of sending the message. Actually, as I
have already proposed in misc somewhere, a dedicated instruction
SEND, interpreting stack contents as a message, would be nice.
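
To make that concrete, here is a minimal C sketch of what such an
OS.Send() has to hide from the caller; the location table, node
numbers and names are all made up for illustration, not F21 or
TUNES API:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t oid_t;
    typedef struct { oid_t dest; int selector; } msg_t;

    #define THIS_NODE 13          /* made-up node number */
    #define MAX_OBJ   16

    /* Toy location table: object ID -> node the object currently
       lives on.  The OS updates it as objects migrate; the sender
       never looks at it directly. */
    static int obj_node[MAX_OBJ] = { 13, 7 };

    static void local_dispatch(const msg_t *m)
    { printf("selector %d on object %u, handled here\n",
             m->selector, (unsigned)m->dest); }

    static void net_forward(int node, const msg_t *m)
    { printf("object %u is on node %d, forwarding\n",
             (unsigned)m->dest, node); }

    /* OS.Send(ObjID, msg): the one entry point the application
       sees.  Local or remote is invisible to the caller. */
    void os_send(const msg_t *m)
    {
        int node = obj_node[m->dest % MAX_OBJ];
        if (node == THIS_NODE) local_dispatch(m);
        else                   net_forward(node, m);
    }

    int main(void)
    {
        msg_t a = { 0, 42 }, b = { 1, 42 };
        os_send(&a);              /* object 0 lives on this node */
        os_send(&b);              /* object 1 lives on node 7    */
        return 0;
    }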

> [Microkernel are the most stupid thing ever invented: concentrated
> overhead without any functionality. "Exokernels" are the solution:
> no more kernel at all, but a dynamic linker that links objects directly
> one to one another]. A centralized dispatcher is to be used only when

An object's data is private. Some object methods are private, some
public. (Verbatim and threaded methods.) I can't know which objects
I will be talking to at compile time. How can I compile direct
method calls without runtime checks?

> run-time speed is unimportant.

There are many shared methods (kill, flush, retrieve, relocate, etc.)
which each object must understand. Instead of having stupid redundant
local jump tables in every object, the message gets diverted to the
corresponding OS method by a central (preferably hardware) dispatcher,
only then branching to the object's local dispatcher.

Why not?
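
In software, that two-level split looks something like this minimal
C sketch (the selector numbering and all names are mine, purely
illustrative):

    #include <stdio.h>

    /* Made-up selector numbering: the low selectors are the shared
       housekeeping methods every object must understand. */
    enum { M_KILL, M_FLUSH, M_RETRIEVE, M_RELOCATE, M_SHARED_MAX };

    typedef void (*handler_t)(unsigned oid);

    static void os_kill(unsigned oid)     { printf("kill %u\n", oid); }
    static void os_flush(unsigned oid)    { printf("flush %u\n", oid); }
    static void os_retrieve(unsigned oid) { printf("retrieve %u\n", oid); }
    static void os_relocate(unsigned oid) { printf("relocate %u\n", oid); }

    /* One central table instead of a copy in every object. */
    static const handler_t os_method[M_SHARED_MAX] =
        { os_kill, os_flush, os_retrieve, os_relocate };

    static void local_dispatch(unsigned oid, unsigned sel)
    { printf("object %u handles private selector %u itself\n", oid, sel); }

    /* Central dispatcher: shared selectors go straight to the OS
       method; anything else drops through to the object's own
       dispatcher.  In hardware this is one compare and one table
       lookup. */
    static void dispatch(unsigned oid, unsigned sel)
    {
        if (sel < M_SHARED_MAX) os_method[sel](oid);
        else                    local_dispatch(oid, sel);
    }

    int main(void)
    {
        dispatch(5, M_FLUSH); /* shared: central dispatcher handles it */
        dispatch(5, 99);      /* private: falls through to the object  */
        return 0;
    }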
 
(O.k. We can use hierarchical subspace addressing in the object ID with
a hardware check, and an associative memory to retrieve the method
address from OID and messageID, but.. I rarely make my own chips at
home, not even potato chips. Do you?)
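
Lacking the silicon, the same associative trick is cheap enough in
software: a small direct-mapped cache keyed on (OID, messageID),
refilled from the slow lookup on a miss. A sketch, with all names
illustrative:

    #include <stdint.h>
    #include <stddef.h>

    typedef void (*method_t)(void);

    /* One cache line: the (OID, messageID) pair it answers for and
       the method address it resolved to last time. */
    struct mline { uint32_t oid, mid; method_t addr; };

    #define MCACHE 256           /* power of two: the hash is a mask */
    static struct mline mcache[MCACHE];

    /* Stand-in for the full method-table walk a real system would
       do here. */
    static method_t slow_lookup(uint32_t oid, uint32_t mid)
    { (void)oid; (void)mid; return NULL; }

    /* What the associative memory would do in one cycle: hash the
       pair, probe one line, refill on a miss. */
    method_t method_lookup(uint32_t oid, uint32_t mid)
    {
        struct mline *l = &mcache[(oid * 2654435761u ^ mid) & (MCACHE - 1)];
        if (l->oid != oid || l->mid != mid) { /* miss: take the slow path */
            l->oid  = oid;
            l->mid  = mid;
            l->addr = slow_lookup(oid, mid);
        }
        return l->addr;
    }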

> >> How would you define an "OS"?
> > The interface between the hardware and the software. An insulating
> > layer, providing a consistent surface. Plus resource manager.
>    I think this is a bad definition for an OS. I try to give a better
> one in the TUNES project.

This was an ad hoc definition. I will have a look at your home page.
 
> > A special object with fixed ID, an instance in each node.
>    This is better. But then, this definition could fit many an object.
> If you try to define it as "maximal object common to each node",

Of course. There is a minimal set of methods bound to each node,
the bare-bones OS infrastructure. Other methods (gfx, maths, sound,
etc.) get loaded/flushed on demand. Of course.

> then you get my definition for an OS; but then you should be conscious
> that if you consider a set of nodes running the same application,
> this application will thus be part of the OS.

Let us restrict the OS to housekeeping, ok? An application should
typically have no idea where an object is or how messages get
routed. An application should rely on the "+" method being available
for integers right from the start. Etc.

> >> If the "OS supervisor task" works 100ns worth at 1ms intervals, would you
> >> dedicate half the chip resources to make it work 50ns at 1ms intervals?
> > 
> > Though 1 ms is probably too long, you are right, here. But how about
> > interrupts? OSCalls? Your code will be seething with OSCalls. Every 10 
> > opcodes there will be one.
>
>   If you need very fast regular interrupts for something, then you'd better
> have a specialized circuit for that task (much like the F21 "coprocessors"),
> not interrupting your generic ALU. It will cost less chip than a complex
> interrupt manager and register bank switcher, and will sure yield far better
> results.

Of course SEND will need a dedicated circuit. But my name is not Chuck M.
and I don't own a silicon foundry. Ask Chuck about hardware OO support,
let's see what he'll say.
 
> >> Reentrancy we need. But why do we need memory protection? A program, that 
> >> needs memory protection is a buggy program. I'd rather have a simple
> > There is no such thing as a bug-free program. Particularly, a large
> > bug-free program.
>
>    Well, there can be crash-proof programs, using *high-level* crash-proof

You can eliminate e.g. dangling pointers with garbage collection and
do array index checking. You can introduce trap handling. But
you will still have buggy software. You can't sample the
entire input space, and you can't generate a bugless program
from perfect pieces. All this can reduce the number of bugs
significantly (at the cost of losing some power), but not
eliminate them altogether.
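
A tiny illustration of the point: the C function below is memory-safe
by construction and still wrong for part of its input space:

    #include <stdbool.h>

    /* No dangling pointer, no index out of range, every type checked
       -- and still wrong: it ignores the 100/400-year rules, so it
       misfires on 1900, 2100 and so on. */
    bool is_leap_year(int year)
    {
        return year % 4 == 0;
    }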

> tools, like strongly typed languages, and correction proof software.

Proofs are worth nothing in the real world.

> I'm sure this is the only possible long-term future for the software
> industry, in the same way as I'm sure MISC is the only possible
> long-term future for the hardware industry.

These methods will come, but I essentially think nonalgorithmic
systems are the only robust ones. MISC being the future? Right.
Grant the mainstream a decade to realize it, though.

> [About MMUs]
> MMU take a lot of chip resource for specialized operations of questionable
> efficiency. I'm convinced that investing the same chip size in more P32-type
> general units and/or fast SRAM yields far better results.

If you burn 1M transistors instead of the 200 a skeletal MMU needs,
you are right. I don't think it questionable to have my OO code
armored against buggy applications.
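
By a skeletal MMU I mean no more than a base/limit comparator on the
store path. A C sketch of the entire check, with window addresses
made up:

    #include <stdint.h>
    #include <stdbool.h>

    /* The write-protected window where the OS lives; values made up,
       set once at boot. */
    static const uintptr_t prot_base  = 0x1000;
    static const uintptr_t prot_limit = 0x5000;

    /* The entire "skeletal MMU": a store is allowed unless it aims
       inside the protected window.  In hardware this is one pair of
       comparators on the store path; a buggy application can no
       longer shoot the OS. */
    bool store_allowed(uintptr_t addr)
    {
        return addr < prot_base || addr >= prot_limit;
    }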

> If you want security, use secure tools (strongly typed languages with
> bound checks) have it "optimizing" tools if you want efficiency.
> As for debugging, emulators can provide programmable protection much finer
> than MMU-based debuggers.

Agreed. But you can't run your software on a software interpreter all
the time, just as you can't ferret out all bugs during debugging.

-- Eugene
 
> --    ,        	                                ,           _ v    ~  ^  --
> -- Fare -- rideau@clipper.ens.fr -- Francois-Rene Rideau -- +)ang-Vu Ban --
> --                                      '                   / .          --
> Join the TUNES project for a computing system based on computing freedom !
> 		   TUNES is a Useful, Not Expedient System
> WWW page at URL: "http://www.eleves.ens.fr:8080/home/rideau/Tunes/";
>