
Re: F21 Network Coprocessor


Hi Dave,

The model you need in your head is a synchronous serial 
transmission where a sync signal is present on Ci to
provide the timing on each bit.   However it is async
based on the first bit transition.  The nodes operate off
the same Ci.  So if node O outputs to node I it will
begin by sending a 1 bit synchronized to Ci going 0 to 1.
Node I will start clocking and shifting in bits at "start
send", and will read in the first data bit at "first read".
The subsequent reads are synchronized by Ci.
 
> ci |---|   |---|   |---|   |---|   |---|     input clock on Ci
>    |   |---|   |---|   |---|   |---|
> 
> n1 |-------|   0   |-------|   0   |---  /1  1 input clock per
>    |   1   |-------|   1   |-------|         encoded bit
> 
     ^ start send
         ^ first read
             ^ second send
                 ^ second read
                     ^ third send
                         ^ third read ...
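
To make the timing concrete, here is a rough software model of
the receive side in C.  It is only an illustration, not F21 code:
read_ci() and read_data() are hypothetical stand-ins for sampling
the Ci pin and the incoming serial line, and the divide-by
register is assumed to be one as in the diagram, so each encoded
bit takes one full Ci period, driven on one Ci transition and
sampled on the next.

#include <stdint.h>

extern int read_ci(void);    /* current level of the Ci pin (0 or 1) */
extern int read_data(void);  /* current level of the serial line     */

/* Block until Ci changes level, i.e. the next clock transition. */
static void wait_ci_edge(void)
{
    int level = read_ci();
    while (read_ci() == level)
        ;
}

uint32_t receive_word(int nbits)
{
    uint32_t shift = 0;

    /* Async start: wait for an edge at which the sender has
       raised the line ("start send" in the diagram). */
    do {
        wait_ci_edge();
    } while (read_data() == 0);

    /* From here on, timing is synchronous to Ci. */
    for (int i = 0; i < nbits; i++) {
        wait_ci_edge();                    /* middle of bit cell: "read" */
        shift = (shift << 1) | (uint32_t)read_data();
        wait_ci_edge();                    /* bit boundary: next "send"  */
    }
    return shift;
}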

In this example the 11-bit divide-by register is set to one,
and I am showing 10101... (binary) being transmitted.
In fact an entire token is the start of message.  So
besides the bit read/write and shifting, which is synchronized
by Ci, the word justification is synchronized by the SOM,
start of message, token when the shifted bit pattern
matches the value in the SOM register.  SOM contains 8
bits that operate as the node address, so individual nodes
or groups of nodes can be set up to allow sending to an
individual node or broadcasting to groups of nodes with
one message.

Maybe that is clearer. ;-)
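
For the word framing, a minimal sketch of the SOM matching might
look like this in software.  Again it is only a model: the 8-bit
width comes from the description above, but the shift direction
and which end gets compared are my assumptions, and som_register
and som_match() are made-up names, not F21 registers.

#include <stdint.h>

/* SOM register: this node's address, or a group/broadcast pattern. */
static uint8_t som_register = 0x2A;   /* arbitrary example address */

/* Feed received bits in one at a time.  Returns 1 when the last
 * 8 bits shifted in match the SOM register, meaning a message
 * addressed to this node (or its group) is starting and word
 * justification can begin with the next bit. */
int som_match(int bit)
{
    static uint8_t shift = 0;

    shift = (uint8_t)((shift << 1) | (bit & 1));
    return shift == som_register;
}

Setting the same SOM value into several nodes is what allows one
message to be broadcast to that group.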

On the high end of things:
Due to small differences in signal lengths, different nodes
could have a small amount of jitter present on the timing
synchronization signals.  The circuit on the I/O pads
at the present time limits the speed of Ci and the serial bit
rates.  Internally the unit can operate at gigabit speeds,
and with differential amps and signal levels on the I/O
and some small changes to the internal timing signals
we can push close to that.  At those rates you need to
transmit only patterns that have at least one bit
transition per five bits to prevent the possibility of
jitter errors.
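
Reading "at least one bit transition per five bits" as "no run of
five or more identical bits", a little check like the following
could be run on encoded patterns before sending them at those
rates.  It is only an illustration of the rule, not hardware:

#include <stdint.h>

/* Returns 1 if the nbits-long pattern (bit 0 sent first) never has
 * a run of five or more identical bits, 0 otherwise. */
int transitions_ok(uint32_t pattern, int nbits)
{
    int run = 1;

    for (int i = 1; i < nbits; i++) {
        int prev = (pattern >> (i - 1)) & 1;
        int cur  = (pattern >> i) & 1;

        if (cur == prev) {
            if (++run >= 5)
                return 0;     /* five identical bits: no transition */
        } else {
            run = 1;          /* a transition resets the run */
        }
    }
    return 1;
}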

It is also assumed that in most cases there will be some
form of checksum at the level of network code support
to detect and correct, or request retransmission, in the
presence of hardware errors.  Up to a few megabits per
second things are so slow that the CPU could read Ci
and bit bang multiple serial signals on the parallel port pins.
Above 10 Mbps it becomes impractical to bit bang, and
so there is hardware to do the shifting and timing of
serial bits with only DMA overhead.  (This does not count the
CPU overhead involved in the pre and post processing of
the serial data that is needed.  Pre and post processing
such as checksum calculation on packets by the CPU can 
take place in parallel with the serial data being sent/
received by the coprocessor.)
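
For the low-speed bit-bang case the idea is just a polling loop
like the sketch below.  The accessors ci_pin() and set_data_pin()
are hypothetical placeholders for reading Ci and driving one
parallel port pin, not real F21 register names, and the same loop
could be duplicated over several pins to get multiple serial
signals.

#include <stdint.h>

extern int  ci_pin(void);            /* read the current Ci level        */
extern void set_data_pin(int level); /* drive one parallel port data pin */

/* Block until Ci changes level. */
static void wait_ci_edge(void)
{
    int level = ci_pin();
    while (ci_pin() == level)
        ;
}

/* Clock out nbits bits of word, MSB first, one bit per Ci period
 * (divide-by one), then leave the line low. */
void bitbang_send(uint32_t word, int nbits)
{
    for (int i = nbits - 1; i >= 0; i--) {
        wait_ci_edge();                  /* drive the new bit on a "send" edge */
        set_data_pin((int)((word >> i) & 1));
        wait_ci_edge();                  /* the far end samples on this edge   */
    }
    wait_ci_edge();
    set_data_pin(0);                     /* idle the line */
}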

Jeff Fox

Dave Lowry wrote:
> 
> Thanks, Jeff, for your reply!  But, I'm still confused :-(
> 
> The model I have in my head resembles regular asynchronous serial
> reception.  That is, after the start bit edge, we count 1/2 bit time
> to get into the center of the bit and then do a sample, then count out
> full bit times and do samples in the center of the remaining bits.
> 
> I'm guessing that's the purpose of the counter, to get centered in a
> bit to get a clean sample.  Maybe I'm not understanding the encoding
> scheme.  Could you give an example of a waveform for a short bit
> sequence?
> 
> -Dave