home .. forth .. colorforth mail list archive ..

Re: RE: [colorforth] Forth and XP


From: Samuel Falvo <falvosa@xxxxxxxxx>

>>I would like to reformulate a question by Mark: how 
>>do you make sure that your tests cover all possible 
>>cases?

You don't. What you do is make sure that your tests
specify to you (as the programmer) what exactly you're
about to write. You also make sure that your tests specify
to you (as the customer) what you want the program to
behave like in specific situations.

Two different roles; two different types of tests.

Your question is a good one, but I'll need to explain
the different types of tests a little more before I
give a single-word answer.

>>For instance, I'm currently writing an inference 
>>engine. It takes as input a formula (say, F -> F) and 
>>it must answer yes if it is a theorem (crash 
>>otherwise :), and give its demonstration. Here is 
>>more or less the spec.

>>How do you write tests for it?

There are two distinct types of test XP recognises;
therefore, there are two distinct answers.

The first answer is, "Describe some stories about what
the software will do. Write an automated test to lead the
software through each story." These tests are called
"acceptance tests".
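To make that concrete: here is a minimal sketch of what an acceptance
test for the inference engine might look like. The names `prove` and the
one-step placeholder body are assumptions for illustration -- the real
interface would come out of the customer's stories, not from me.

```python
# Hypothetical acceptance test for the inference engine described above.
# `prove` is an assumed name; its stub body only recognises instances of
# the identity schema A -> A, standing in for a real proof search.

def prove(formula):
    # Placeholder engine: accept "X -> X" directly, reject everything
    # else. A real engine would search for a full derivation.
    left, arrow, right = formula.partition("->")
    if arrow and left.strip() == right.strip():
        return [formula]  # a one-step "proof": an instance of A -> A
    raise ValueError("not a theorem: " + formula)

def test_identity_is_a_theorem():
    # Story: "F -> F is a theorem, and the engine shows its proof."
    proof = prove("F -> F")
    assert proof, "expected a non-empty derivation"

test_identity_is_a_theorem()
```

The point is that the test walks the software through one customer
story end to end, not that this particular stub is the right design.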

The second answer is, "Pick an implementable feature 
that's needed by an important story, and write a test 
that describes that story's use of the feature." These
tests are called "programmer tests".
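A programmer test is narrower: it specifies one implementable feature
before the code exists. As a sketch, assuming a hypothetical helper
named `parse_implication` (not anyone's real API), the test states
exactly what the programmer is about to write:

```python
# Hypothetical programmer test, written first; the code below it is the
# minimal implementation the test drives out.

def parse_implication(s):
    # Split a formula of the form "A -> B" into antecedent and
    # consequent. Written only after the test below specified it.
    left, arrow, right = s.partition("->")
    if not arrow:
        raise ValueError("expected '->' in " + repr(s))
    return left.strip(), right.strip()

def test_parse_implication():
    # "F -> F" splits into antecedent "F" and consequent "F".
    assert parse_implication("F -> F") == ("F", "F")

test_parse_implication()
```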

The question you were asking originally, though, was
how we make sure those tests have sufficient coverage,
NOT how we actually decide to write them. (Which is
the question I just answered.)

Your original question can be answered a little better 
now. The answer you're looking for is "feedback." 

The customer/expert gets to see the software as soon as 
any given feature is implemented, and can interact with 
it if the feature's story includes interaction. The 
expert then gets to make a call as to whether the feature 
is well enough specified; if not, he adds another story
which narrows the specification of the overall program
(and has to decide how it fits into his priorities for
the program, since he can't get it implemented without
either stretching the schedule or discarding another
feature).

The programmer, meanwhile, sees all the existing
programmer tests, and if what he's about to do looks
different from the existing tests, he knows he needs to
write a new one.
If what he's about to do looks the same as an existing
test, he uses the code which is being tested by the test.
If what he's doing looks almost like an existing test,
he writes a new test for the old code, and thereby proves
whether the old code works for the new purpose. (If it 
doesn't work with the new test, he changes the old code
until it does -- keeping in mind that the old tests _also_
have to pass, so he doesn't break the code which was using 
the old code.)
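That last cycle can be compressed into one sketch, again with assumed
names: the old code split a formula at the first "->"; a new test for
nested formulas exposed that this is wrong for the new purpose, so the
code was changed until both the old test and the new one pass.

```python
# Hypothetical "new test for old code" cycle. `split_arrow` originally
# split at the first "->"; the new nested-formula test drove this
# parenthesis-aware version, while the old test keeps passing.

def split_arrow(s):
    # Changed code: track parenthesis depth so "(F -> F) -> F" splits
    # at the top-level arrow, not the first one encountered.
    depth = 0
    for i in range(len(s) - 1):
        if s[i] == "(":
            depth += 1
        elif s[i] == ")":
            depth -= 1
        elif depth == 0 and s[i:i + 2] == "->":
            return s[:i].strip(), s[i + 2:].strip()
    raise ValueError("no top-level '->' in " + repr(s))

# Old test: still must pass, so code that used the old behaviour on
# flat formulas doesn't break.
assert split_arrow("F -> F") == ("F", "F")
# New test: the new purpose that drove the change.
assert split_arrow("(F -> F) -> F") == ("(F -> F)", "F")
```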

>The specification is too vague.  You need to break it 
>down into smaller, more concrete chunks.

True -- but you, as the programmer, can't possibly give 
this answer. You know why :-); the customer needs to
be asked precise questions if you want to get precise
information. This "customer" has given a very general
problem statement, enough for me to decide that I wouldn't
want to implement it, but I can help a bit with the 
process.

>Samuel A. Falvo II

-Billy



---------------------------------------------------------------------
To unsubscribe, e-mail: colorforth-unsubscribe@xxxxxxxxxxxxxxxxxx
For additional commands, e-mail: colorforth-help@xxxxxxxxxxxxxxxxxx
Main web page - http://www.colorforth.com