
Re: multi/tasking/processing


Francois-Rene Rideau:
   >       Nano- or Micro-, they are the same. The principle of a kernel
   >    itself is deeply broken. What is a kernel? Just some overhead to
   >    dispatch routines at run-time. It just has no point. Code should be
   >    bound whenever the binding is known, not before (not possible!),
   >    not after (and at least, just once, not once per call).

Raul Miller:
   > Er.. how do you re-use resources used infrequently by processes?

Francois-Rene Rideau:
      What is the problem? The same way as you always did, or any way
   you like! I do not see why you need a kernel to do that. Sure, you
   need resource managers and (dynamic) linkers, but I see no point in
   a run-time dispatch center (well, until the 70s, dynamic linking
   was too memory-hungry, so a kernel was necessary).

I don't know if you're not paying attention to what you wrote, or what...

If you've dynamically linked to some bit of code, or dynamically
inlined it, what do you do when you need to re-use the space occupied
by the code?  At that point, your code isn't there any more.  So, what
do you call the code which takes responsibility for implementing this
re-use and managing calls to this code?  How is it implemented?  [This
kind of code is the heart of any kernel.]

In some cases, the graph of process control flow is simple enough
(e.g. linear) that redirecting the flow to adapt to re-using memory is
trivial to implement.  However, some code has high implicit re-use
even though it typically runs only once a day or once a week, for
months on end, on some random system.  The kind of integration you're
talking about is only going to be workable if the entire system has a
cohesive overall design goal -- yet some systems are composed of a
variety of components from different vendors.

There are some systems where this sort of functionality is not
implemented in any general fashion (e.g. embedded systems which have
enough real memory).  There are other systems where this sort of
functionality is implemented in a general fashion (e.g. systems which
have a kernel).  The distinction between these kinds of systems
generally comes down to the nature, volume, and variety of their
communications.

There is no problem with not having a kernel if you don't have to do
this kind of bulk management of code or data.  If I'm building a
system that needs to communicate 1 bit every few weeks (which can be
quite profitable, if it's an important bit of information), I'm more
than happy to do without a kernel.

If I'm building a system that needs to communicate many megabytes (or
terabytes -- I'm trying to be qualitative here) of information per
day, in a bursty fashion, with some of it preserved for arbitrarily
long intervals of time, I'm going to start wanting the features a
kernel provides.  I might still do without a kernel if throughput is
important enough to justify the effort, and hardware is expensive
enough to be a significant obstacle.

On a fine-grained level, the value of a system is determined by the
cost of doing without.  This is true for kernels, shoes and food.  The
cost of a system is determined by market issues (value, availability
and competition).  When you say "the principle of a kernel is deeply
broken," I'm not sure if you're saying that it has trivial value or
excessive cost.  I feel that there exist applications where kernels
have trivial value and a high cost -- but I know this is not true of
all applications.

-- 
Raul