Machine Design

Digital Signal Processors "Think" Analog But Work Digitally

DSPs are purposely built to make short work of complex calculations.



A basic block diagram of a DSP audio encoding, storage, and playback system. An audio source, such as a microphone, sends analog audio signals to the analog-to-digital converter (ADC). The ADC converts the analog signal into discrete numeric values, which are sent to the DSP. The DSP analyzes the digitized signal and encodes those values using a compression algorithm similar to techniques used to compress digital images. The compressed data is then stored in memory. Upon playback, the data is retrieved from memory and a decompression routine restores the data to its original form. A digital-to-analog converter (DAC) changes the digital values back to an analog voltage that is applied to the amplifier and speaker. The algorithm used in the DSP is called a codec, for compression-decompression. The most common codecs today are the MP3 and WMA codecs, though there are many others.

Finite-impulse response filters act on the input signal only as long as the signal lasts, hence the finite name. The digitized signal, SIN, goes to a one-sample delay line, a simple shift register. It also goes to an accumulator through the first transfer function, h0. As the signal passes through the transfer function, it is multiplied by transfer value h0 and added to the accumulator. On the next clock cycle, the sample passes into the first stage of the shift register. The original sample is then acted upon by the next transfer function, h1, while the new sample is multiplied by the first transfer function, h0. The result of each transfer function is added to the accumulator using a multiply-accumulate (MAC) function. The resulting output signal, SOUT, includes the accumulated values of all transfer functions. This process, known as convolution, results in an output signal that is the sum of the outputs of all the transfer functions. While this diagram details only a five-stage shift register, real filters may consist of over 50 such stages.
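The shift-and-accumulate structure in the diagram can be sketched in C as follows. This is a minimal illustration, not production DSP code; the five-tap length matches the diagram, but the coefficient values passed in are arbitrary placeholders.

```c
#include <stddef.h>

#define TAPS 5

static double delay[TAPS];               /* one-sample delay line (shift register) */

/* Process one input sample through a 5-tap FIR filter.
   h[] holds the transfer values h0..h4 from the diagram. */
double fir_step(double s_in, const double h[TAPS])
{
    /* Shift the delay line; the oldest sample falls off the end. */
    for (size_t k = TAPS - 1; k > 0; --k)
        delay[k] = delay[k - 1];
    delay[0] = s_in;

    /* Convolution: each tap's product is summed by the MAC loop. */
    double acc = 0.0;
    for (size_t k = 0; k < TAPS; ++k)
        acc += h[k] * delay[k];          /* multiply-accumulate */
    return acc;
}
```

Feeding a single unit impulse through the filter returns the tap values one per clock, which is exactly the "finite" impulse response the text describes.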

The infinite-impulse response (IIR) filter has an input side similar to the FIR filter. The difference from an FIR is that the output of the filter, SOUT, is applied back through another delay line and transfer functions to influence the input. Because the output signal now influences itself, filtering never truly ends until the output signal drops below a value where the feedback-transfer functions no longer affect it. The IIR filter is more difficult to design. Improper values in the feedback-transfer functions make the filter unstable and prone to oscillation, as happens with its inductive-capacitive equivalent circuit.
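The feedback path can be sketched with a minimal first-order IIR stage. The coefficient names b0 and a1 are illustrative; as the text warns, the feedback value must satisfy |a1| < 1 or the filter becomes unstable.

```c
/* One step of a first-order IIR filter:
   y[n] = b0 * x[n] + a1 * y[n-1]
   The previous output is fed back to influence the next one. */
double iir_step(double s_in, double b0, double a1)
{
    static double y_prev = 0.0;          /* fed-back output sample */
    double y = b0 * s_in + a1 * y_prev;  /* input path plus feedback path */
    y_prev = y;
    return y;
}
```

After the input stops, the output keeps decaying geometrically rather than dropping to zero, which is why the response is called infinite.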

You'll find them in your CD player, radio, and cell phone. Motor controllers depend on them for efficient operation. They adjust the air/fuel ratio in your car and trigger air bags in a collision.

These are just a few of the applications of digital signal processors, microprocessors optimized for high-speed number crunching. What sets DSPs apart from other microprocessors is the type of signal they handle. Typical microprocessors require digitized data that the processor operates on in chunks. Modern DSP systems, in contrast, typically take in analog signals and generate analog signals as output. But don't be fooled; moving from input to output is a completely digital process.

The block diagram of a basic DSP is quite simple. An analog signal is applied to an analog-to-digital converter (ADC). The ADC samples, or quantizes, the signal into discrete numeric values that represent the signal. This digitized signal is then applied to the DSP core, where it is acted upon according to the programming algorithm stored in the memory of the DSP. A typical programming algorithm might be a digital filter to remove all frequencies above 3.5 kHz. The modified discrete values are sent to a digital-to-analog converter (DAC), changing the quantized data back to an analog form. The entire algorithm must complete its calculations of multiply/accumulate, add, subtract, or bit-shift within the time between samples of the ADC.
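The signal chain can be modeled end to end in a few lines of C. This is a toy sketch: a 12-bit converter resolution is assumed, the adc() and dac() functions stand in for real hardware, and the "algorithm" is a unity-gain placeholder where a filter would go.

```c
#include <stdint.h>

/* Quantize an analog voltage in the range -1..+1 V to a 12-bit code. */
static int16_t adc(double v)
{
    if (v > 1.0)  v = 1.0;               /* clip to the converter's range */
    if (v < -1.0) v = -1.0;
    return (int16_t)(v * 2047.0 + (v >= 0 ? 0.5 : -0.5));  /* round to nearest */
}

/* Convert a 12-bit code back to an analog voltage. */
static double dac(int16_t code)
{
    return (double)code / 2047.0;
}

/* Placeholder for the stored program; a real DSP would run a filter here.
   This must finish before the ADC delivers its next sample. */
int16_t dsp_algorithm(int16_t sample)
{
    return sample;
}
```

A sample round-trips through adc(), dsp_algorithm(), and dac() with only the small quantization error a 12-bit converter introduces.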

Why go through all this trouble? Wouldn't a simple discrete filter serve the same purpose? The answer to that is, "Yes, and no." Everything a DSP does has a component-level equivalent. But the use of digital techniques produces a process many times more efficient and effective. An electronic engineer would find designing a 50-pole low-pass filter to fit onto a 1/2-in.² PC board impractical, if not impossible. Yet a DSP performs that function with ease. The key is the speed at which the DSP operates.

DSP designs are geared for fast number crunching. A typical DSP system has four major internal buses: separate address and data buses exist for both program instructions and data. Typically, two address generators fetch data while another sequencer controls program execution. The arithmetic section contains an arithmetic-logic unit (ALU), a multiply/accumulator (MAC), and a bit shifter.

Major operations of DSPs involve multiplying and adding numbers, and MACs specialize in providing the means to do this. Multiplying two 16-bit values gives a 32-bit answer. That result is usually added to other 32-bit results, which might create an overflow condition if the holding register is only 32 bits long. To prevent overflows, many 16-bit DSPs contain larger MAC registers — some as large as 40 bits.
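The overflow argument can be checked numerically. In this sketch, a C int64_t stands in for the extra-wide (e.g. 40-bit) MAC register the text mentions; the point is that accumulating just a few full-scale 32-bit products already exceeds what a 32-bit register can hold.

```c
#include <stdint.h>

/* One multiply-accumulate step: a 16 x 16 -> 32-bit product is added
   into a wide accumulator that absorbs the overflow a 32-bit register
   would suffer. */
int64_t mac(int64_t acc, int16_t a, int16_t b)
{
    int32_t product = (int32_t)a * (int32_t)b;   /* fits in 32 bits */
    return acc + product;                        /* wide register holds the sum */
}
```

Accumulating 32767 × 32767 (about 1.07 billion) three times yields roughly 3.2 billion, well past the 2,147,483,647 ceiling of a signed 32-bit register.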

Special registers within the DSP hold beginning and ending addresses for buffer areas in DSP memory. Because the address does not have to be computed each time, accessing sequential data from memory buffers is faster. Circular addressing automatically wraps the buffer pointer to the beginning of the buffer after the last address is accessed. This happens without stealing time away from the processor's main calculation function.
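Circular addressing behaves like the following sketch, except that the hardware performs the wrap for free while this C version spends an instruction on it. The eight-entry buffer length is an arbitrary choice for illustration.

```c
#include <stddef.h>

#define BUF_LEN 8

typedef struct {
    double data[BUF_LEN];
    size_t pos;                           /* current buffer pointer */
} circ_buf;

/* Write one sample and advance the pointer; past the last address
   the pointer wraps back to the beginning of the buffer. */
void circ_write(circ_buf *b, double sample)
{
    b->data[b->pos] = sample;
    b->pos = (b->pos + 1) % BUF_LEN;      /* the circular wrap */
}
```

After more samples arrive than the buffer holds, the newest samples simply overwrite the oldest — exactly the sliding window a filter's delay line needs.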

Like hardware, DSP software is also geared for speed. Simple commands carry out complex processing functions. For example, once all the buffer registers are loaded, a single command fetches both signal data and multiplication factors and multiplies the two together. It then adds the result to the previous calculation and stores the total in the MAC. Meanwhile, the data address generators automatically increment to the next position.

Most DSPs incorporate a repeat function that affects other operations such as multiply/accumulate, block moves, I/O transfers, and table read/writes. When this repeat function is used ahead of these other commands, the commands become pipelined, executing over and over for the number of times specified in the repeat command register. During this time, the DSP does not respond to any outside interrupts until the repeat command finishes. Many repeated commands then take only one clock cycle per execution. A single table-read instruction, as an example, might take three or more clock cycles to execute. But if tied to a repeat command, a new table position is read every clock cycle.

The use of pipelined architecture is the key. Pipelining breaks the calculation process into individual hardware steps. For example, the addition of two numbers might take three steps. The first step merely fetches both values, while the second step adds the two numbers together. The final step places the sum of the addition in memory. If each step takes one clock cycle, then three clock cycles are required to complete each sample's processing. But, in pipeline mode, the next sample value is fetched while the first sample goes through addition. Then, on the third clock cycle, as the first sample's sum is stored, the second sample undergoes addition, while a third sample is retrieved from memory. Pipelined architecture thus provides a processed sample virtually every clock cycle.

All of this speed allows the DSP to operate on signal data in real-time mode, delayed only by the processing time of the DSP itself. The DSP emulates any analog circuit using that circuit's mathematical model. One such circuit is a filter that removes unwanted elements, such as whistles, clicks, scratches, or noise, from a signal.

Designs today demand ever more complex filters. Increasing complexity also raises component sensitivity to temperature, manufacturing tolerances, and component aging over time. There is a practical limit to how complex an analog filter may become. In the digital world, delay elements and multipliers are stored as highly accurate numeric values. Digital values won't change or drift over time or with temperature variations as do their analog counterparts. A 50-pole filter design is quite possible and readily done in the digital realm where, obviously, it would be totally impractical using analog components.

Two of the most basic filter types are the finite-impulse response (FIR) and the infinite-impulse response (IIR) filter. Finite-impulse response filters use only input signals to determine output. That is, the output is a product of the input signal and the filter-transfer function. Once the input signal stops, the action of the filter stops as well, giving it the finite tag.

When an IIR filter is used, not only is the input signal applied to the filter, but some of the output signal is fed back as well. This tightens filter bandwidth, but opens the door to filter instability and nonlinearity.

A typical DSP architecture includes two data address generators and an independent program sequencer, which keeps memory addresses and program steps in hardware. The four address and data buses separate data from program instructions, allowing data to be processed on one pair while the next instruction is acquired from memory. The entire structure is designed to process data extremely fast, keeping many values of the operation in hardware registers where they remain readily available for processing without requiring memory fetches.

A short history of DSPs

While the DSP is a fairly recent device, its operating principles date back to the 16th century. It was then that researchers started applying mathematics to real-world situations and began developing tools that help simulate real-world events as mathematical models. Performing the calculations, though, took more than a lifetime. John Napier passed away before completing the calculations for his book on logarithms. His friend and colleague, Henry Briggs, completed the work and published the book in London.

Faster math was just around the corner. Newton's calculus, Simpson's rule, and the Fourier and Laplace transforms all revolutionized the science of mathematically modeling real-world dynamics. While all of these techniques provided more efficient calculating methods, many calculations were still quite onerous, taking up to several weeks of intense number crunching. It took the computer to make these faster methods shine. Now the DSP effectively integrates these centuries-old methods into real-time applications, performing in a matter of microseconds calculations that once required days.
