I'm stuck at home with a severe outbreak of my comedy midwinter hayfever, so I've thought about this stuff a bit more
TL;DR version
AWOOOGA, AWOOOGA, NERD ALERT, NERD ALERT, AWOOOGA AWOOOGA
Technical version
Bit of a back-of-the-packet calculation, since my model is :-
a) Not cycle accurate (or even cycle aware)
b) In the deleted items folder due to a rethink
Give each PDM 16 inputs max with 12-bit analog resolution based on PIC capabilities; max 256 PDMs on a single network (4K total inputs)
For every input, each PDM needs to send a status packet roughly like:
Code:
| address         | data (16 bit)            |
| {xxx,(pdm num)} | {function(4), value(12)} |
Major priority groups are denoted using the upper 3 bits to achieve the quickest conflict resolution (assume 0 = highest priority, 1 = lower)
Code:
Error            :- {000}xx
Highest (config) :- {001}xx
State change     :- {010}xx
State            :- {100}xx
In round terms each packet is 7 bytes:-
3 data/address, 4 protocol
So each PDM sends 16 * 7 bytes = 112 bytes
112 bytes = 896 bits/PDM
Say the bus is 250 kbit/s
each PDM uses ~3.6 ms
Call it 5 ms to allow for a couple of higher-priority state change messages
You end up in a 256 PDM system with a native refresh rate of ~1300 ms, but cut it down to a more likely 64 PDM limit (or possibly a more useful 32 PDMs with 32 inputs each) and you get < 500 ms native refresh.
I've left out the possibility of all PDMs reporting back their full status, as it just doubles the refresh time anyway,
and there's no allowance for state change on outputs - the protocol is more than robust enough for the benign vehicle application, with a 15-bit CRC, and the slave can report any errors on output switching or during the normal refresh