Designing a wireless data link from the ground up can be pretty daunting, even if all the scary “radio” bits are hidden away inside a module. That’s why many manufacturers now offer complete “wireless modems”, where the customer’s RS232 stream (or frequently an inverted, logic level version of it) can be fed directly into the module interface, without worrying about all the coding, decoding, buffering, framing, synchronising and other things that are actually required to make a “radio” do something vaguely useful.
A lot of these devices are even “transparent”, in that they try to behave as much like a simple wired connection as possible (provided little restrictions like programmed baud rates and half-duplex turn-around time are abided by) so that the actual complexities of the air interface can be ignored by the engineers using them. Which is just wonderful. Until the device starts to be used in a real application (where other radios might be sharing the channel, or where spectrum occupancy, duty cycle or power consumption might matter). At this point, the way such a device operates, and the ways the user can optimise its behaviour, become far more important.
In order to send your data over a real data link, the modem device must buffer your data (so the radio on/off timings can be accommodated), add in checksums and framing sequences so the decoder has something to acquire, re-code the stream into a bit-level format that the noisy, AC coupled baseband path can handle (using scrambler-whitened phase coding, or something similar), and finally run the air interface at a data rate sufficiently greater than the basic user interface speed that all the added overhead can be fitted into the timings.
Hidden in all this frantic activity is the innocuous word “buffer”. The modem stores the user data as it arrives, byte by byte and then must, at some point, define how many (user) bytes are going to be associated with each parcel of preamble, synchronisation, and framing. This gets called a “packet” in the industry, and it has an overwhelmingly important influence over the behaviour of your modem.
In the very simplest case, a packet could be one byte, but the resulting design would be terribly inefficient, as it would be adding at least 3-4 bytes of overhead to each byte sent. (The simple, short-range wide-band TDL2A family devices approach this, with 3 byte packets and an air interface rate of 16 kbit/s for a user data rate of 9600 baud, but these designs are optimised for low data latency (14 ms), not efficient use of bandwidth. The very simple 1200 baud internal modems coded into some of the multichannel designs also fall into this class.)
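The relationship between packet length, framing overhead and the required air-interface rate is simple arithmetic, and can be sketched as below. (The 2-byte-per-packet overhead is inferred from the 16 kbit/s versus 9600 baud figures quoted above, not a published TDL2A specification.)

```python
# Air-interface bit rate needed to carry a continuous user stream,
# when each packet of payload_bytes carries overhead_bytes of
# preamble/framing/checksum. The 2-byte overhead used below is an
# inference from the quoted figures, not a datasheet value.

def air_rate_required(user_rate_bps, payload_bytes, overhead_bytes):
    return user_rate_bps * (payload_bytes + overhead_bytes) / payload_bytes

# 3-byte packets with 2 bytes of overhead reproduce the quoted figures:
print(air_rate_required(9600, 3, 2))    # 16000.0
# A longer, 16-byte packet amortises the same overhead far better:
print(air_rate_required(9600, 16, 2))   # 10800.0
```

The second figure shows why most practical modems prefer longer packets: the same per-packet overhead costs far less air time.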
Most radio modems adopt a longer packet, usually somewhere between 8 and 256 bytes. They combine this with a mechanism for sending shortened bursts where the user’s average data throughput is significantly lower than the link maximum. Radiometrix’s M48A device falls into this category, with a 16 byte maximum packet length.
It is at this point that a closer examination of how the modem operates is needed. The situation is very simple if data is streamed constantly at maximum baud rate. After the initial start-up sequence, packets will be transmitted end-to-end, with the actual packet length varying slightly as the transmit data buffer fills and empties.
If one byte is sent in isolation, the situation is at the opposite extreme, as the preamble/start-up sequence can begin immediately, followed by the transmission of a minimum length packet. The time taken between a single byte entering the transmitter and appearing at the serial output of the receiver is the single byte latency of the modem, and will vary greatly depending on the switching speed of the RF hardware and the complexity and overhead of the packet structure. Practically, it can be anywhere from about ten milliseconds to over a hundred.
Things become interesting when practical amounts of data begin to be sent (by which I mean infrequent bursts, of anything from a few bytes to a few dozen bytes). In these cases a trade-off will be seen. If the modem has a long start-up sequence (or an over-long processing delay before the radio transmitter is activated) then all of a user’s data burst will be sent in one packet, but the latency (or delay) will be excessive. If, on the other hand, the data burst is long compared to the modem start-up (or the modem processes are unusually fast) then an initial packet will be sent (in a truncated form) before all the user’s data has been loaded into the device, and a second (and possibly a third) packet will follow the first before all the data has been sent. This will result in unusually long transmit sequences, possibly with the transmitter turning on and off several times, and a very poor ‘first byte in to last byte out’ latency time.
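The trade-off can be put in numbers. As a rough sketch, assuming an 8N1 serial format (10 bits per byte); the 20 ms start-up figure is purely illustrative, not an M48A specification:

```python
# Does a user burst arrive in the buffer before the modem starts
# formatting its first packet? Assumes 8N1 framing (10 bits/byte).
# The start-up time passed in is an assumed figure for illustration.

def burst_load_ms(burst_bytes, baud, bits_per_byte=10):
    """Time to clock a burst into the modem at the serial port rate."""
    return burst_bytes * bits_per_byte * 1000 / baud

def fits_one_packet(burst_bytes, baud, startup_ms):
    """True if the whole burst is loaded before packet formatting
    starts, so it can leave in a single transmission."""
    return burst_load_ms(burst_bytes, baud) <= startup_ms

# 12 bytes at 9600 baud take 12.5 ms to load:
print(burst_load_ms(12, 9600))        # 12.5
# With a 20 ms start-up, that burst goes out in one packet...
print(fits_one_packet(12, 9600, 20))  # True
# ...but a 40-byte burst (41.7 ms to load) will fragment:
print(fits_one_packet(40, 9600, 20))  # False
```

In the fragmenting case the transmitter may key up several times for what the user thinks of as one message, which is exactly the long, ragged transmit sequence described above.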
In the ideal case, the overall start-up time of the modem would equal (or slightly exceed) the time taken to load the user’s data burst, so the last user data byte would arrive just before the modem commences formatting the packet. In this instance the data will be sent in a single, optimally short packet, minimising latency times, spectrum usage and transmitter power use.
Unfortunately, this “sweet spot” moves relative to data rate, packet length and inherent speed of the modem’s internal processes, so users would be very lucky to find that a given device was already optimised for “their” data.
Fortunately the designers of wireless modems are aware of these problems, and the devices generally include one or more mechanisms to allow the user to optimise the timings to suit their application:
1. Handshaking signals. An obvious hardware solution is to provide an additional logic input to the device, which inhibits the modem’s start-up and packetising functions while still allowing data to be loaded into the transmit buffer. The user asserts this input once all the data in a burst has been loaded, permitting the modem to complete its transmission operation. The M48A provides several of these “flow control” signals, most usefully a “buffer almost full” output (tx_flow) and a TX_INHIBIT input.
This approach gives optimal control over the burst transmission timing (allowing more complex synchronising and timing schemes to be used) and avoids complex re-programming of the modem device, but calls for extra user hardware.
2. Programmable user data rate. Many modem devices allow the user to modify some of the operating parameters. (The M48 uses a break sequence, followed by a handshaking byte exchange, to enter a “setup” mode.)
The user port data rate can then be reprogrammed to a speed much higher than the device’s rated average throughput. If the user data is loaded into the modem buffer very rapidly (the M48 supports data rates up to 76800 baud) then the entire user burst will always be loaded before the first packet transmission begins.
This method requires no extra hardware, but the user’s host device needs a fast enough data rate capability, and care must be taken to limit the burst size and aggregate data throughput, to avoid data loss through a modem buffer overflow. (For example, the transmit buffer implemented in the M48 chip is only 128 bytes, and the average over-the-air data throughput is 4800 baud.)
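The overflow constraint is easy to sanity-check on paper. A minimal sketch, using the M48 figures quoted above (128-byte buffer, 4800 baud average over-air throughput) and ignoring framing overhead, so the drain time is a best case:

```python
# Two back-of-envelope checks for the "fast port, slow air" scheme.
# Figures from the text: 128-byte transmit buffer, ~4800 baud
# average over-air throughput. 8N1 framing assumed (10 bits/byte).

BUFFER_BYTES = 128
AIR_BAUD = 4800

def burst_fits(burst_bytes):
    """A single burst must fit in the transmit buffer."""
    return burst_bytes <= BUFFER_BYTES

def min_gap_ms(burst_bytes, bits_per_byte=10):
    """Minimum idle time between bursts so the buffer can drain at
    the average over-air rate (framing overhead ignored)."""
    return burst_bytes * bits_per_byte * 1000 / AIR_BAUD

print(burst_fits(100))           # True: fits in the 128-byte buffer
print(burst_fits(200))           # False: would overflow
print(round(min_gap_ms(100)))    # ~208 ms to drain a 100-byte burst
```

Even though the port can accept data at 76800 baud, the average throughput is still bounded by the air interface, which is what the gap calculation captures.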
3. Programmable timing. Some wireless modems allow the user to change some of the fundamental timings of the device via a configuration interface as described above.
(Users of the M48 must note that these parameters are two digit hexadecimal numbers, and that the preamble timing parameter is in units of 620 µs, not milliseconds as with the other timings).
Typical timing parameters provided are:
Transmit start delay: (DELAY) This is the provision of a time delay between the arrival of the first byte in the buffer and the start of the modem’s start-up process. This would be zero for single bytes or for continuous streamed data (to minimise latency), but for short bursts it should be set approximately equal to the length of the data sequence (to ensure all bytes are in the buffer before the modem begins formatting the data packet).
Preamble length: (PREAM) The amount of time allowed by the modem for the RF link hardware to stabilise, before real data is passed over the path. This parameter is critical to correct operation: set it too short and the link will fail entirely; too long and the transmit bursts will be lengthened unnecessarily.
When setting these timers, it will be necessary to carefully examine the modem’s actual sequence of operations. Some designs format the packet after the preamble sequence (which allows the preamble time to be counted in with the transmit delay when optimising the timings), while others execute this function before the preamble starts (in which case the transmit delay needs to be long enough on its own).
There is also a third parameter (transmit off timer, TXOFF). This adds an extra delay to the end of a packet before a new transmission is allowed to commence. It is only really useful in bidirectional systems running sporadic data streams, where it may be necessary to inhibit transmission until a distant node has “replied” to a data burst, or in systems employing store and forward repeaters, where it gives time for the repeater node to go through its own packet transmit cycle before the next packet is sent (without imposing an increased initial latency, which would occur if a longer transmit start delay was used for the same purpose).
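Turning these rules into register values can be sketched as below. The encodings assumed here (two-digit hexadecimal, DELAY in 1 ms units, PREAM in 620 µs units, as per the note above) should be checked against the actual M48 documentation before use, and 8N1 framing (10 bits per byte) is assumed:

```python
# Sketch of deriving M48-style timer settings from the rules above.
# ASSUMPTIONS: DELAY counts in 1 ms units, PREAM in 620 us units,
# both as two-digit hex (max 0xFF); 8N1 serial framing. Verify
# against the real datasheet before relying on these encodings.

def delay_setting(burst_bytes, baud, bits_per_byte=10):
    """DELAY ~ time to load the whole burst into the buffer, in ms,
    returned as a two-digit hex string."""
    ms = burst_bytes * bits_per_byte * 1000 / baud
    return format(min(255, round(ms)), "02X")

def pream_setting(preamble_ms):
    """PREAM in 620 us units, as a two-digit hex string."""
    return format(min(255, round(preamble_ms * 1000 / 620)), "02X")

# A 32-byte burst at 9600 baud needs ~33 ms of transmit delay:
print(delay_setting(32, 9600))   # '21' (0x21 = 33)
# A 5 ms preamble is 8 units of 620 us:
print(pream_setting(5))          # '08'
```

Note the clipping to 0xFF: with only two hex digits available, very long bursts or very slow port rates simply cannot be covered by DELAY alone, which is another argument for the handshaking or fast-port approaches above.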
Taking the (small amount of) extra time to set the timing parameters to a best match for the user’s data format and throughput will greatly improve the behaviour of the link, and will reap real benefits in terms of reduced burst lengths, and hence improved power consumption and spectrum usage.
By Myk Dormer for Radiometrix Ltd
First published in Electronics World