
Re: future computers (sorta OT)


At 12:57 AM 6/22/00 -0500, Greg Alexander wrote:
> >Anybody here a fan of Star Trek?
> >
> >I know this question sounds silly, if not juvenile.  How do you think
> >the Enterprise's (or Voyager's) computer system processes?  Is it
> >trillions of MISC-type optical processors, or is it one giant Pentium
> >XVII?  Has there ever been an episode where the captain says "hail
> >them", and at a push of the console the view screen gets a fatal
> >protection fault and the whole ship self-destructs?
> >
> >I think that is a legitimate, however comical, question.  The ship's
> >computer core is no larger than some of the supercomputers we have
> >today; everything else is just terminals, subsystems, and storage.
> >Maybe in a few decades Forth will not reign supreme, maybe another
> >language that is even faster and smaller... but the minimal approach
> >is not a limitation - it's powerful.
>
>Star Trek presents an interesting side of computing.  Some things about
>the basic setup: there is a central computer, it is very large; localized
>systems are governed by (iirc) "opto-linear" circuits which look like
>skinny domino game pieces with different patterns on them [orange plastic]
>that can be slid into slots; the android has no native interface to the
>computer [i.e., he interacts in the way that humans do], though he uses
>the computer with extreme speed; once infected with a virus, manually
>shutting down the computer then manually bringing it back up with new
>virus-free software would take weeks or so; the computer never crashes,
>but is more prone to user-level error [i.e., the OS never GPFs but if
>it was programmed to let Starfleet remote-control it then, by golly,
>Starfleet can remote-control it even if Khan thinks he is in command].
>         That's all pretty boring so far as I'm concerned.  There is only
>one important and relevant computing issue going on in Star Trek, I think.
>That is, allowing the experts to be mostly ignorant.  It's assumed that
>everything done by those who really built the system works perfectly so
>there's no good reason to look under that level of the hood.  When Geordi
>needs more power, he reroutes stuff in a graphical "connect the dots"
>interface.  When Geordi needs to circumvent some negative act of the
>computer [foreign control, for example], he either shuffles opto-linear
>chips or he uses a special probe tool thing that seems a lot more
>automatic and high-level than something like what we would call a portable
>reprogrammer.  The entire "atom" of computing is moved up a huge level.
>Right now some people's computing atoms are transistors, some people only
>see gates, some people only see microprocessors, some people only see
>systems, other people only see program code.  On Star Trek the atom of
>computing is only something you can hold in your hands: it is the
>opto-linear chip or the <insert techno babble here> tool or the high-level
>software.  Even the experts only think of system-level components; they
>don't worry about program code or even act as if they are aware of it --
>they certainly know nothing of gates or transistors.
>         One may assume that there's a huge pile of engineers in the
>background who built this huge complex system that has a perfect level of
>abstraction (perfect in that there's never any good use to violate it, for
>example, to work around a bug or do something unexpected).
>         I vaguely suspect something like this is happening in aviation --
>at any rate aviation would be a great place to look for where computing
>needs to be used in a similar environment to Star Trek [though it's often
>not very futuristic].  The airplane doesn't have a processor and memory
>and i/o circuits; it has a little black box with a few clearly-
>labelled connectors.


I work in aerospace. We have the worst of all possible worlds.

Too much complexity. And change is very difficult.

Despite what you would think, power and weight in the computing elements
are not a big issue.

Re-using ugly bloated code that is hard to test is an issue.

We don't solve the problem with better thinking. We just throw more
resources at it. (200K-a-year contractors are the norm in the business
these days, because you need to be awfully smart to do something this
stupid.)


>These black boxes are not interchangeable between
>airplanes, I'm sure, but it doesn't really matter -- it's not abstraction
>for reuse, it's abstraction for "don't fuck with this unless you really
>know what you're doing."

True.

But with no real competition prices will stay very high.


>The guy at the air force base hangar who is a
>brilliant whiz and can fix all sorts of problems -- he doesn't know jack
>about computers...he only knows [and only needs to know] about this box.
>He never opens the box...if the computer behaves erratically, he just
>tosses [or perhaps gives to some removed review/repair group -- either
>way, it's out of his hair] the old one (I'm sure military computers
>are expensive, but they're practically free compared to military
>aircraft).

When you have shit piled on shit, everything gets expensive.


>         You'll note that the air force is almost definitely not using
>cutting-edge processors.  I hope, at least, that they're using early-90s
>(or before) processors that have been thoroughly tested and with software
>designed specifically for the new processors.

No. We use 186s. 386s in cutting-edge designs.

The development software is COTS, i.e. Microsoft bloatware. It produces
bugs.

C++ is verboten because it introduces hard-to-detect bugs.


>It's probably a lot cheaper
>to design the software to be simple and low-overhead to run on old
>hardware than it is to get the new software reverified for every new chip
>[and probably fancy connected architecture] that comes out (Forth has
>apparently excelled in these roles even though we mostly love it because
>it's easier for us to work with).

Fantasy.

This is the promise.

In fact everything gets rewritten and retested.

Management bids projects based on fantasy. Real engineers then
do something else (to get more promised features, faster code,
easier-to-debug designs, etc.).

>In situations like these performance
>isn't the problem, reliability is...I would think that in star fleet they
>would gladly use 10 year old [tried and true] chips (which, at the
>current rate, would be quite astronomically awesome).

Performance is always a problem.

The problem here is that aerospace engineers think they are smart enough to
do some really dumb shit.

Which is why new aircraft are always late and over budget.

>         So what can we really learn from this?

Nothing. Your fantasy does not conform to reality.

It would be nice if it were true though.


>Well, I think one thing is
>to change from trying to surf the new wave to, rather, trying to stick
>with the old one until it's spent...can you imagine a surfer who caught a
>wave, then lost it immediately afterwards?  He'd never really learn how to
>ride a wave right if he never held it for more than a few seconds...you
>have to ride it to the end, I'm sure.  We certainly aren't ready to be
>using the new Pentium blah blah chips since we've barely tapped the
>potential of old chips (or new chips made with old processes, such as
>P21).  I think this should remind us that computing is in its infancy --
>right now there are things that people are just now realizing computers
>can do and they actually honest-to-god require high clockrates or FPU
>support to reasonably implement (i.e., some multimedia/interactive crap),
>so people are constantly jumping ahead to the newest hardware for good
>reasons (a game like Quake simply could not run on a slow processor)...but
>then once they have this fast hardware programs that don't need it use it
>anyways.  In a dozen years or so there probably [hopefully] won't be any
>more new things that everybody wants (such as video games) that can't be
>run except on top of the line hardware.  At that point people can slow
>down and make their choices based not on which brand new processor happens
>to have enough power for their program to run at all, but which one seems
>more sound or has the best reliability or lowest current draw.  I'm using
>the word "processor" but I'm really talking about the whole system --
>right now people choose C because optimizing compilers for it are so
>popular.  In a few years languages will be reasonably chosen not on how
>well they generate machine code but on how good the human interface is.




>         In some ways this does not look good for FORTH.  If the C++ fan's
>most favorite statement "It doesn't matter if it runs 10x slower,
>processors are so fast nowadays that ..." becomes true then it looks like
>C++ will be the future.  But that's not true either because it already
>usually doesn't matter right now if software runs 10x slower -- the
>problem is that C++ers think 10x but what really happens is 100x or
>1000x (because they build a 10x solution on top of a 10x solution,
>yielding 100x not 20x).  The type of idiocy that C++ers involve
>themselves in isn't likely to ever go anywhere.  However, there are
>high-levelish languages right now that I suspect could become popular
>if reasonable performance concerns went away (i.e., languages that tend
>to encourage code production that is a constant factor slower, rather
>than exponentially slower).  I wouldn't guess what would come out on
>top -- I would guess it won't be FORTH.
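
To make the compounding arithmetic above explicit, here is a minimal
sketch (in Python, with purely illustrative layer counts and per-layer
factors, not measurements of any real system):

    # Hypothetical illustration: when each software layer runs some
    # factor slower than the layer beneath it, the factors multiply
    # rather than add.
    layers = [10, 10, 10]        # three layers, each "only" 10x slower
    additive = sum(layers)       # the intuition: 30x
    compounded = 1
    for factor in layers:
        compounded *= factor     # the reality: 10 * 10 * 10 = 1000x
    print(additive, compounded)  # prints: 30 1000

Two stacked 10x layers give 100x, three give 1000x, which is where the
100x/1000x figures above come from.
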
>         I think this shows strengths in MISC, though.  The strength of
>MISC chips is that they can be really great, even by today's standards,
>using yesterday's fab technology.  This means we're getting at new
>ideas, not just new hacks.  RISC got at new ideas -- it gave
>performance improvements on similar processes.  The Pentium didn't
>though....it borrowed a lot of ideas, for sure, but the only great
>new thing about the Pentium is the huge engineering task involved in
>hacking these millions of features into the same chip -- something
>that wouldn't've been possible until a process small enough was
>refined.  The Merced definitely doesn't get at new ideas -- the only
>reason it didn't exist 10 years ago is because nobody had a 30+
>million transistor fab that could mass-produce cheap chips.  If other
>chip development companies had the same level of preproduction budget
>that Intel does, they would definitely be making better products.  MISC
>qualifies as trying to understand where we're at before blindly jumping
>into the future.  Intel is making a bigger and fatter Merced before they
>stop to think and look back at the Pentium.
>I can't believe that they don't realize that
>instead of investing billions into their new fab processes they could
>simply investigate more thoroughly what can be done with their old ones.


Intel designs 50e6-transistor chips with all their complications
because it reduces competition.

Any fool can design a 50K-transistor chip these days.

You need multi-megabucks to do it the Intel way.

The Intel method maintains the value of Intel, which attracts more $$$$.

Same for Microsoft.

Simon