Why Stop At One?

By Myk Dormer, Senior RF Design Engineer

First published in 'Electronics World' magazine


To start this article I must declare a preference: I like simple micro-controllers. I like low pin-count, unsophisticated low power devices; processors with no more than a few thousand words of code-space, which you can program in assembler and hold the whole algorithm in your head at once, without your brains leaking out.

(I appreciate that there is an absolute need for highly complex, high performance devices with hundreds of pins and GHz clocks. After all, I’m writing this article on a desktop PC, and I’ll soon be despatching it as an email attachment, but in the niche-area of low power radio design, the simpler part is usually sufficient.)

In the design of a wireless module, the firmware is only required to perform some very limited tasks (frequency synthesizer PLL programming, transmitter power-on ramp generation, maybe some data formatting). Things get more “interesting”, however, when the job extends into the ‘user application’ arena.

At this point things can become very much more complex, very fast. Low power radio applications cover a vast array of tasks, and the ‘user processor’ can be called on to perform many different operations: data stream coding, decoding and buffering (effectively ‘modem’ functions); power switching; analogue or digital input handling, from a variety of possible sensors and transducers; motor control tasks; battery maintenance tasks … the list goes on.

To take a simple example, consider a radio controlled ‘toy’ tank chassis: in the simplest case this is just the remote control of two DC motors. To give smooth handling and realistic manoeuvring, the motor control will need to be proportional. At the transmitter, input from a joystick will need to be interpreted.

Conventionally, one might consider lumping all the control functions together in a single processor. At the transmit end this is reasonable (two A/D channels and a simple data burst formatter are no great strain on even a simple processor). At the receiver, however, things are more problematic: several processor intensive tasks are required simultaneously. Efficient decoding of a baseband data stream requires considerable CPU effort, especially if a bi-phase, rather than edge detecting, decoder is used, while the PWM control of a DC motor is another processor effort-hog.
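
To see where those cycles go, below is a minimal sketch of the kind of edge-timing state machine such a bi-phase (Manchester-style) decoder needs. The timing constants, the calling convention and the bit polarity are assumptions chosen for illustration rather than any particular part's peripherals, and locking on to the first mid-bit edge is not shown.

    /* Sketch only: the core of an edge-timed bi-phase (Manchester) decoder.
       The half-bit period, the tolerance window and the "level after the
       mid-bit edge equals the bit value" convention are all assumptions. */

    #define HALF_BIT  500u               /* nominal half-bit period, timer ticks */
    #define WINDOW    125u               /* accept roughly +/-25% timing error   */

    static int near(unsigned int t, unsigned int target)
    {
        return (t >= target - WINDOW) && (t <= target + WINDOW);
    }

    /* Call once per edge on the receiver data output, with the time since
       the previous edge and the line level after this edge.
       Returns 0 or 1 when a bit completes, -1 otherwise. */
    int biphase_edge(unsigned int interval, int level)
    {
        static int boundary_seen = 0;    /* last edge was a bit-boundary edge    */

        if (near(interval, HALF_BIT)) {
            if (boundary_seen) {         /* second short gap: mid-bit transition */
                boundary_seen = 0;
                return level;            /* its direction gives the bit value    */
            }
            boundary_seen = 1;           /* first short gap: boundary edge, wait */
            return -1;
        }
        if (near(interval, 2u * HALF_BIT) && !boundary_seen)
            return level;                /* full-bit gap lands on a mid-bit edge */

        boundary_seen = 0;               /* timing violation: drop bit sync      */
        return -1;
    }

Serviced edge by edge at even a few kilobits per second, a routine like this leaves little slack for a software PWM loop running alongside it.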

To realise these functions in a single processor will either require specific peripherals (some controllers include hardware motor control or data interface hardware), which limits the choice of processor type, or will require considerably more processing power to fulfil the decoder and PWM control tasks in parallel, which in turn will push the clock speed up.

Sharing processor resources between multiple tasks is, in itself, a potentially difficult job. It requires complex coding, multiple interrupts, and possibly the use of an RTOS (real time operating system) to support the multiple concurrent tasks. By then the firmware will be complex enough to need coding in a high level language, which in turn requires yet more processing power and memory space.

The alternative is to identify the individual tasks (decoding the baseband data output from the radio receiver, PWM control of the left track motor, PWM control of the right track motor) and dedicate a very simple processor to each one. The result is a three processor design, but one where each of the individual firmware functions is much, much simpler.
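
To make that concrete, here is a sketch of roughly what one of the dedicated motor processors might run in its entirety: an 8-bit software PWM loop for one track motor, with the demanded speed arriving over the bus as a single byte. The register and pin names are placeholders standing in for whatever the chosen part actually provides.

    #include <stdint.h>

    /* On a real part these would be the UART status/data registers and a
       port pin; here they are stand-ins so the sketch compiles. */
    static volatile uint8_t uart_rx_ready;    /* set when a byte arrives        */
    static volatile uint8_t uart_rx_data;     /* the received byte              */
    static volatile uint8_t motor_pin;        /* 1 = drive transistor on        */

    int main(void)
    {
        uint8_t duty  = 0;                    /* demanded speed: 0..255         */
        uint8_t phase = 0;                    /* free-running PWM phase counter */

        for (;;) {
            if (uart_rx_ready) {              /* new speed demand from the bus  */
                duty = uart_rx_data;
                uart_rx_ready = 0;
            }
            motor_pin = (phase < duty);       /* 8-bit software PWM             */
            phase++;                          /* wraps after 256: one period    */
        }
    }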

This is admittedly a very simple example, which a competent software engineer could actually code into a single processor without too much sweat, but it serves to illustrate the problem. With the addition of multiple control functions, the software complexity of a control processor can very rapidly spiral out of control, and a cost and power critical task can start to look as if it needs an industrial PC.

The alternative to a complex single master CPU is a multiple processor system. This is not a new idea, and is implemented in many industrial control applications already (most particularly in the automotive and aviation industries).

What I am specifically proposing is to divide the task up between a larger number of very simple processors, at a much lower level than is usually done, until no single processor is required to handle more than one job, or needs more than a few hundred words of assembly code.

As the individual processor tasks are simpler, the data flow to any particular device ought to be proportionally lower, allowing a simple low speed serial inter-processor bus (I favour simple asynchronous serial protocols, but synchronous methods like I2C are just as applicable), with dedicated higher speed data links to especially data-hungry sub-functions if required.
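
By way of illustration only, one possible shape for the traffic on such a bus is sketched below; the four-byte frame layout and the uart_putc() stub are inventions for the example, not part of any standard.

    #include <stdint.h>

    /* An invented four-byte frame for the low speed bus: who it is for,
       what to do, one argument and a simple checksum. */
    typedef struct {
        uint8_t addr;      /* target sub-processor, e.g. 0x01 = left motor   */
        uint8_t cmd;       /* e.g. 0x10 = set speed, 0x11 = brake            */
        uint8_t value;     /* command argument, e.g. PWM duty 0..255         */
        uint8_t check;     /* addr ^ cmd ^ value, verified by the receiver   */
    } bus_frame_t;

    static void uart_putc(uint8_t c)   /* stub: replace with the chosen      */
    {                                  /* part's own UART transmit routine   */
        (void)c;
    }

    void bus_send(uint8_t addr, uint8_t cmd, uint8_t value)
    {
        bus_frame_t f = { addr, cmd, value, (uint8_t)(addr ^ cmd ^ value) };

        uart_putc(f.addr);
        uart_putc(f.cmd);
        uart_putc(f.value);
        uart_putc(f.check);
    }

At, say, 9600 baud that is a little over four milliseconds per command, ample for speed demands refreshed a few tens of times a second.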

The processors indicated for these distributed tasks are parts with between eight and twenty pins, and unit costs of around £1 or less. Beyond the basic CPU and memory, only a hardware communications device (a UART, SPI or similar) is really vital, and that is only to relieve the firmware of the need to handle concurrent communication and primary function tasks. Typical control functions such as PWM motor control or pulse coded servo handling can be easily coded in simple assembler routines, keeping one function per processor to minimise software complexity.
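
As a flavour of how little code such a function needs, here is a pulse coded servo routine sketched in C for readability (it drops into a few lines of assembler just as easily); the output pin variable and the delay helper are stand-ins for whatever the chosen part provides.

    #include <stdint.h>

    static volatile uint8_t servo_pin;   /* stand-in for a real output pin      */

    static void delay_us(uint16_t us)    /* stand-in: on real hardware this     */
    {                                    /* would be a calibrated delay loop    */
        while (us--) { }                 /* or a hardware timer wait            */
    }

    /* Drive a standard RC servo: a 1 ms to 2 ms high pulse sets the position,
       repeated by the caller roughly every 20 ms. */
    void servo_pulse(uint8_t position)   /* 0..255 maps to about 1..2 ms        */
    {
        servo_pin = 1;
        delay_us(1000u + (uint16_t)position * 4u);   /* ~4 us per step          */
        servo_pin = 0;
    }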

This approach goes against the current trends towards high level software applications, running on increasingly powerful platforms. By comparison this method can seem primitive, but it has some significant advantages:

  1. By running each control function on its own processor (once the inter-processor communication bus has been defined) it is easy to develop and test the functions in isolation. In a large project, the tasks can be efficiently split up between team members.
  2. Each individual task is, in software terms, very simple. Extensive debugging with expensive tools should be unnecessary, and the likelihood of hidden bugs cropping up is reduced.
  3. Functions are re-usable in future projects. A developed control task, and its processor, can be treated as a component for new designs. Expanding an existing design is also simpler, as it’s easier to add another device to the bus than to modify an already complex ‘master processor’ design.
  4. Each individual processor is a low power, low speed device. RF interference issues are minimised and power consumption is kept low (processors can even be switched completely off until needed).
  5. In a physically large project, the interconnection between different elements is reduced to power supplies and the low speed bus signal (much easier than trying to route multiple transducer inputs and control output cable skeins to and from a main processor card).

This method is not a panacea, and it has obvious limitations as the volume of control data increases or response time requirements become more critical, but it is well worth considering as a simpler, less expensive, greener alternative to the obvious “wire everything to an industrial PC card and write the application software in C++” solution.