
[ColorForth] Philosophy (part 1 of 3)


Sorry, I hit send on part 1 while editing so
I sent a mess to everyone.

> One of the most profound things I found was Jeff's 
> ability to make a roughly ANS standard forth from a 
> minimalist forth in only a tiny amount of code. 

I did take an eForth (ANS), made a few changes, and
compiled it with a MachineForth for the F21, and got
a couple of K.  A few more hours would give a better
mix.  The eForth code compiler is larger than
the MachineForth compiler but doesn't do the
optimizations, so a slightly different mix
would give a smaller ANS Forth similar to eForth,
but also with the optimizing native-code
compiler for the target chip, in a couple
of hundred words.

You just whiz ahead learning stuff when you
work on tiny things, where we are dealing
with a line of code and a couple of target
words in a process that you can understand
in a few minutes.  To solve "exactly" the
same general problem, other people have to
do so much more work, and since most of it
is fixing problems that get attached in the
process, they don't whiz; they are lucky
to learn a thing or two every few months
instead of something amazing every day.

It was the most fun I ever had with computers.

Forget all the stuff about billions of internet
computers, or whatever, if you like.  Fun is also
an essential component or it isn't Forth, and
if it isn't fun, don't do it.  I just want to
let you know that whizzing along learning something
amazing every day is so much fun.  Real Forth is
fun Forth.

I think that chips and MachineForth were the most
fun for me because they were the newest thing for
me, the simplest we could do, and the most fun
to be had.

I think that doing the same sort of thing today
but with a colorForth should make it simpler
again and even more fun.  I hope that not
too many people focus on the Pentium too much
or too long.  There are lots of Pentium
programmers and not many people interested in
programming Chuck's computer designs,
which we think is so much more fun.

I also get very enthused by this list.  This is
far more interest than I have seen in years in
any of this stuff.  It is exciting to see it,
and I am just trying to help people get as
much fun and benefit as I think they can.
It is just my opinion of course.

 -1) Personal Time (removing illusions about what you can't do) 
 
> 0) Programmer Time (removing code completely by not coding it)
> 1) Edit Time (pre-processing code to make compiling faster)
> 2) Compile Time (remove references and variables and hard code values)
> 3) Run Time (do work that can only be done here)

> If the compiler doesn't need to recognize literals, or words, 
> or variables, or comments, or strings then it can be faster.

Yes, all the string parsing goes to edit time.  The improvement
on numbers is the greatest.  They go from the slowest thing
to the fastest thing.

> Why check if a token word
> exists before checking if the token is a number? 

And then convert it from ASCII to a number,
character by character, when if you have it as a
binary number you just move it.  The editor shows
it the same way either way.  It is often smaller
when stored as binary too.
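
To make that concrete, here is a rough C sketch of the
two paths; the token layout and names are just mine for
illustration, not colorForth's actual format:

    #include <stdint.h>

    /* Traditional path: the compiler converts the ASCII
       token digit by digit, every time it compiles it. */
    int32_t number_from_ascii(const char *tok, int len)
    {
        int32_t n = 0;
        for (int i = 0; i < len; i++)
            n = n * 10 + (tok[i] - '0');
        return n;
    }

    /* Edit-time path: the editor has already tagged the
       token and stored the value in binary, so compiling
       the literal is just one move. */
    enum { TAG_WORD, TAG_NUMBER };

    typedef struct {
        uint8_t tag;     /* set by the editor          */
        int32_t value;   /* binary value, ready to use */
    } token;

    void compile_literal(const token *t, int32_t **here)
    {
        if (t->tag == TAG_NUMBER)
            *(*here)++ = t->value;   /* no parsing at all */
    }

The slow per-character loop, and the dictionary search
that usually comes before it, simply disappear from
compile time.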

> Why not have the
> programmer and/or the editor mark the token as a constant. 
> Taking it further, mark the token as a constant and store 
> it as a binary value for faster compiling (which is what 
> Aha does in my understanding).

Yes, Aha uses a database record format for the source,
and each record is typed, strings parsed, etc.  Aha
and Flux also build the name dictionary at edit
time as a different record type and package the
pre-parsed names that way.  So dictionary searches are
also not needed at compile time with that approach.
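
A rough sketch of that kind of typed source record, in C
for illustration only (the field names and layout are mine,
not Aha's or Flux's actual format):

    #include <stdint.h>

    enum rec_type {
        REC_DEFINE,   /* name defined; dictionary entry built at edit time */
        REC_CALL,     /* reference, already resolved to a dictionary slot  */
        REC_NUMBER,   /* literal already stored in binary                  */
        REC_STRING,   /* string already parsed and counted                 */
        REC_COMMENT   /* skipped entirely by the compiler                  */
    };

    typedef struct {
        uint8_t  type;   /* one of rec_type, set by the editor      */
        uint16_t slot;   /* for REC_CALL: index into the dictionary */
        int32_t  value;  /* for REC_NUMBER: the binary value        */
    } source_record;

Since a REC_CALL record already carries the dictionary slot
it refers to, the compiler fetches the target address by
index instead of searching a name list at compile time.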
 
> The idea of compiling routines as you need them 
> makes more sense when you
> have an incredibly simple and fast compiler.

Which opens the door to one of the most powerful
techniques, one used by some ANS programmers:
solving a class of problems by generating
executable data structures at runtime.  They
merge the instances of data to be processed
with the code that performs the process, which
is more efficient than the traditional way
of using a routine and an array of data.

We used it at iTV for decoding the Huffman
compressed boot code and for decoding JPEG,
and it is very powerful.  But you need a
compiler at runtime.  If you have large
data sets you need a fast compiler.  The
compiler must also be free of distribution
restrictions for an application to do this.
Some applications can use a commercial
compiler to do it when the vendor lets you
distribute a feature-restricted version
of the compiler, but not all.
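
As a very rough picture of the idea (in C, and with a toy
scale-and-offset problem instead of Huffman or JPEG; a real
MachineForth or colorForth version would generate native
code rather than function-pointer records):

    #include <stdio.h>

    /* Each data record becomes a step of executable code
       with its constant already merged in. */
    typedef struct { int (*op)(int x, int k); int k; } step;

    static int scale(int x, int k)  { return x * k; }
    static int offset(int x, int k) { return x + k; }

    /* Execute the data structure itself, instead of
       re-interpreting a table through a generic routine. */
    static int run(const step *code, int n, int x)
    {
        for (int i = 0; i < n; i++)
            x = code[i].op(x, code[i].k);
        return x;
    }

    int main(void)
    {
        /* Built at run time from whatever data describes
           the process to be performed. */
        step code[] = { { scale, 3 }, { offset, 7 }, { scale, 2 } };
        printf("%d\n", run(code, 3, 5));   /* (5*3 + 7) * 2 = 44 */
        return 0;
    }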