
Re: multi/tasking/processing


On Thu, 17 Aug 1995, Christophe Lavarenne wrote:

> Eugen Leitl says that thanks to MMU, non-OS code bugs are trapped by OS, which
> provides for non-halting machines.  What about OS bugs ?  My Alpha happens to

This is one of the reasons for a nanokernel: to reduce the amount of 
supervisor (=privileged) code and hence the number of bugs in it. I would 
consider Taos, e.g., to be virtually bug-free, or at least capable of 
becoming virtually bug-free within just a few years.

> crash/halt about every 2 weeks, and I must reboot my Pentium about every day.
> For how many men*centuries have their OS been "debugged" ?  VMUNIX is about 6
> megs on Alpha, and how many megs is MS-Windows ?

Have a look at Linux or NetBSD386 (the latter is purported to be even
more stable): they can run unattended for weeks or months, even on a 
developer machine (provided you are not hacking the kernel ;).
 
One of Chuck's credos is to keep the complexity/size/bug content down.
This is one of the best reasons for using a nanokernel.

> Small/simple code allowing full control is the best way I know to reduce bugs.

Xactly!

Apropos stability, this is slightly off-topic, but I have the German iX
of Sep 1995 here, which reports that 25-33% of Unix tools crash when
exposed to random input streams (from a fuzz generator). Crashes from bad
pointers and array indices were very frequent. Little wonder the GNU tools
performed best.

Do you know crashme, which brings any Unix box down real fast by
building and executing random code? I'd wish for a nanokernel that is
uncrashable under this stern test, since then I could use genetic
programming (GP) directly to breed code, instead of using a slow but
safe software interpreter.

>   I have personally experienced several bugs in C compilers (MS, Borland and
> Sun) and had no way to correct the compiler.  Even with GNU-C, you need a life
> to find your way around the 80,000 lines of C code in it.

This is one of several minuses of big compilers. Even Oberon-2, with its 
60 K (or thereabouts) compiler, is still too big. A fully documented Forth 
is the thing, agreed.

>   Whereas I have written several complete Forth cross-development environments,
> for 8051, multi-RTX, multi-P21, each with target processor simulator/debugger,
> dis/assembler, native code optimizing compiler, with an umbilical interface.
> Each of them requires less than 30 Kbytes on top of a Forth on the PC host side
> and are up and working interactively with less than a few hundred bytes on each
> target side.  And if any, I _can_ correct bugs, and it doesn't take long.

Thank you for providing valuable information from a real-world Forth
programmer.

> I have been also fighting for long with C compilers and "real-time distributed
> OS" on Transputers and DSPs, to find out why they are doing things the other
> way I think, and how I can make them do what I want.  And above everything else

Which DSPs do you use? TI ones? Are there any others with links?

> is the nightmare of debugging multiprocessor code, with a black-box OS and
> debugger between me and the hardware.

The biggest strength of Linux/BSD is source code availability. It is big, 
yet not a black box.

[ extremely interesting stuff ]
 
(.. though I would wonder whether hardware synchronization
is the universal solution).

-- Eugene

> Christophe.