Digital signal processors are becoming widely used in products from cellular phones to industrial digital motor controls, and there appears to be no end in sight. The recent explosion of high-speed communications and the advent of multimedia have created a need for faster, more powerful DSPs capable of handling the enormous amount of voice, data, and video information zipping around the planet, virtually at the speed of light.
DSPs have come a long way from their earliest implementations on mainframe computers and single-board systems, to the point where what formerly took up an entire circuit board can be physically and economically squeezed into a single IC package. At every stage, however, the distinguishing feature of a DSP has been its power to perform complex mathematical calculations at high speed. Because a DSP executes instructions quickly, it can keep pace with its inputs and supply output data without latency, making it essentially a real-time processor, which is critical in machine control and in voice and image applications.
A DSP is a lot like a microprocessor or microcontroller (MCU), but there are some fundamental differences. Chief among these is the core architecture and the way the address and data buses are structured. DSP cores typically use a Harvard architecture, with separate buses for program and data. A traditional microprocessor or MCU uses the familiar Von Neumann architecture, in which instructions and data share a single bus. "The big difference between sharing and not sharing buses is how fast you can move data into and out of the part," says Rich Hoefle, applications engineering manager for Motorola's DSP Standard Products Div. A separate bus structure provides an unrestricted flow of data into and out of the chip because data transfers do not compete with instruction fetches and other internal processor tasks. The less obstruction to data flow, the more data can be processed and the more efficiently the DSP runs.
The efficiency with which a DSP executes mathematically intensive tasks is the other difference between it and an MCU. Central to every DSP core is a dedicated hardware multiplier called the multiply-and-accumulate, or MAC, unit. It takes two numbers, multiplies them, then adds the product to the previous result. A DSP's power lies in the speed with which it can execute this instruction: the MAC unit needs only one clock cycle. A traditional microprocessor, on the other hand, would require several clock cycles to execute the same operation, breaking the multiply and the accumulate into separate instructions and using up precious clock cycles. "It may take a few microseconds to do a multiply in an MCU, whereas in a DSP it may take 20 nanoseconds or less depending on the clock rate," explains Aengus Murray of Analog Devices' DSP division. Murray adds, "While an MCU may not be able to do fast calculations, it is efficient at processing and moving data around. An MCU is sufficient for PC applications and manipulating tables. But for fast signal processing the DSP has the edge."
What is happening now in the DSP world is the cross-breeding of DSPs and MCUs. The hybrids that result contain features of both and are being used in applications that were formerly the domain of one or the other. "Hybrids represent one big growth area in the industry," says Motorola's Hoefle. "The hybrid gives you the best of both worlds. It has the benefits of an MCU, as well as the number-crunching capability and the bus structure of a DSP." For instance, designers of 8- and 16-bit MCUs are considering adding hardware multipliers to their MCU cores. While unable to match the processing power of a straight DSP, such hybrids could find acceptance in applications where less processing power will do. High-end applications that require fast signal processing will still use traditional DSPs.
On the DSP end, similar changes are under way. "What we're seeing, particularly in the motor-control arena, is an integration of peripherals around the DSP core," Murray says. Common peripherals such as analog-to-digital converters and drive circuitry for the power stage have been integrated into a single chip. Doing so eliminates the need to put these peripherals off chip, saving cost and board space. Murray notes that the trend among DSP designers today is to place everything that operates at 5V or less onto the chip. Five years ago, it was common to see a PC board with a DSP chip, external memory, ADC, and ASICs for the PWM calculation. Now, all of this fits on one chip, and the trend is clearly toward a system-on-a-chip (SOC) approach to design. Also, the addition of watchdog timers, reset circuitry, and flash memory has made DSPs an attractive choice for designers familiar with MCUs.
Cost is another factor behind the growing interest in DSPs. As the cost of silicon drops, DSPs are beginning to move into applications that were once MCU territory only. Not long ago, the perception was that DSPs were expensive devices used only in high-end systems; designing a washing machine, refrigerator, or industrial drive with a DSP was simply too expensive. Not any more. According to Murray, an engineer designing with a 16-bit MCU now has to ask: what is my competitor doing with a 16-bit DSP, and what features does that add that I can't match? And with DSP prices going down, a DSP might deliver more processing horsepower than an MCU.
Perhaps the biggest growth area for DSPs displacing MCUs is in motor controls. Multiphase motors found in products ranging from washing machines and air conditioners to industrial motor drives, fans, and pumps are starting to take advantage of the computational power of DSPs. U.S. Dept. of Energy studies have shown that digital motor control can reduce energy consumption by as much as 50%. And in an age of environmental and energy concerns, such news has spawned an increasing demand for energy-efficient motors. As a result, OEMs are steadily introducing DSPs into motor controls. "An engineer can design a digital motor controller with a DSP that costs as little as $2," says Leon Adams, DSP manager at Texas Instruments. Many of the fundamental algorithms used in motor control, such as pulse-width modulation (PWM) and PID control, can easily be implemented in a DSP. In addition, much of the hardware required for sensing can be eliminated. For instance, by running a predictive algorithm, the DSP's mathematical processing power can replace external sensing of motor position. "Through the current sensing that you'll have from your power supply and the predictive algorithm, you can figure out where the motor is in its state and its spin," says Adams.
So while DSPs are beginning to make inroads into motor controls, they remain essential to the ever-expanding fields of communications and multimedia. A DSP's ability to efficiently crunch numbers makes it the ideal choice for implementing many of the algorithms used in communications, especially algorithms for filtering noise from signals. Implementing a finite impulse response (FIR) or infinite impulse response (IIR) filter on a DSP is straightforward. Likewise, the fast Fourier transform (FFT), which analyzes the frequency content of a signal or system, is one of the most common algorithms programmed into a DSP.
"There has been a burst of activity in networking," notes Hoefle. The onslaught of pagers, cellular phones, and two-way communication devices has increased the demand for faster DSPs. Voice-over-Internet applications also require intensive real-time data processing. And with the addition of data and graphics capability in portable communication devices such as PDAs, DSPs will have to crunch more data in less time than ever before. Hoefle adds that Motorola and Lucent Technologies have formed a joint venture called StarCore, which will produce DSPs for multi-channel communications systems, including wireless base stations, remote access servers, and digital subscriber lines (DSL). These DSPs will run at a blazing 300 MHz with four MAC units and four ALUs.
One of the biggest challenges for the DSP industry has been overcoming the perception that integrating a DSP into a system is hard work or requires extensive knowledge of DSP theory. In the early days of DSP, this was often the case. "An engineer had to have a clear idea of the mathematics involved in DSP theory, then be able to translate those ideas into an algorithm, and finally program the chip in assembly language," explains Hoefle. Now, with the trend toward writing code in high-level languages such as C and C++, programming a DSP is not what it used to be. In addition, many companies offer software packages that aid in designing and simulating DSP systems; after simulating a DSP design, such software can generate real-time C code from the simulation.
Oddly enough, it is at this point that a bottleneck of sorts arises. While programming in C is favored over assembly because it is faster and more straightforward, the problem is that the DSP core architecture is not optimized to run compiled C code, so much of the advantage of the DSP's fast processing is effectively lost. "For example, a standard C compiler will take a single multiply and accumulate instruction and break it up into two or three instructions," explains Murray. So what formerly took one instruction now takes two or three. The challenge is to develop C compilers that generate code that takes advantage of the DSP core's efficient processing, providing the best of both worlds.
Some DSP manufacturers have redesigned their DSP cores to allow the devices to be programmed in high-level languages such as C. For instance, the StarCore DSP core has been designed to run C code efficiently. Likewise, at Texas Instruments and Analog Devices, development tools are becoming easier to use and offer the choice of C or assembly.
Ease of use and portability will be critical to integrating DSPs into more products as the scope of DSP applications continues to expand. As Adams points out, "The convergence of computer, consumer, and communication products, facilitated by the Internet and wireless communications, has highlighted the importance of programmable-based solutions. Flash-memory technology, which allows you to add features quickly and change features once the product is in the field, will be essential." Also, the ability to reuse code will save designers from reinventing the wheel each time a new design is required.
Mips AND Mflops DEMYSTIFIED
A DSP's job is to perform mathematical calculations at high speeds; in other words, to crunch a lot of numbers. The standard indication of a DSP's number-crunching capability is its rating in terms of the number of instructions it executes per second.
"DSPs fall into two categories, fixed-point and floating-point," explains Gerry Maquire, product line director of Analog Devices' DSP division. A fixed-point DSP scales all signals between the values of 0 and 1 using 16-bit notation. For instance, with 16 bits, a value between 0 and 1 is represented as 1 x 2^-n, where 0 < n < 16. A floating-point DSP, on the other hand, deals with numbers in much the same way as scientific notation: a number is represented as a binary mantissa multiplied by a binary base raised to some power. For instance, the number 1.5 in floating-point notation would be 11 (binary) x 2^-1, or in decimal 3 x 0.5, or 1.5. Floating-point notation allows a greater dynamic range for representing numbers.
Fixed and floating-point DSPs each have their own benchmark. For fixed-point DSPs, the yardstick is millions of instructions per second, or Mips. "This translates into how many fixed-point multiply and accumulate (MAC) instructions the processor can execute in one second," explains Maquire. A floating-point DSP is rated in terms of millions of floating-point operations per second, or Mflops.
A typical rating today may be around 160 Mips. A complication arises when parallel instruction execution enters the picture. Maquire continues, "If the DSP can simultaneously execute a MAC, two data fetches, two pointer updates, and a serial port operation, then you're doing more than one instruction. In this case, you're doing six, so your Mips will increase by a factor of six." Likewise, if a DSP runs at 300 MHz and can execute eight instructions in one clock cycle, you now have a 2,400-Mips machine.
To avoid confusion, another benchmark has been gaining ground, the MMACS or millions of multiply and accumulates per second rating. This benchmark zeroes in on the fundamental operation of a DSP, which is the multiply and accumulate function.
The bottom line when specifying a DSP is to look closely not only at the Mips rating, but also at the processing speed the application calls for, as well as the algorithms the DSP will need to execute.
USEFUL WEB SITES
For more information on DSPs including general questions, product family information, and development tools, check out these web sites:
www.bdti.com/faq/dsp_faq.htm A site that will answer all of your questions about DSPs. Also, links to other DSP sites.
www.eg3.com/index.htm Focuses on embedded DSP applications including industrial computing.
http://electronics.about.com/industry/electronics/msub_DSP.htm DSP articles, industry events, leading suppliers, publications, and benchmark scores for some of the top DSPs on the market.
http://www-dsp.rice.edu Rice University's DSP home page contains papers on DSP research, wavelet processing, and filter design.
www.dspworld.com/icspat Information on the International Conference on Signal Processing Applications and Technology.
www.analog.com/dsp Products, applications, and solutions.
www.motorola.com/SPS/DSP DSP products, documentation, and so forth.
www.ti.com/sc/docs/products/dsp/index.htm Platform and product overviews and DSP news.
www.mathworks.com/products/applications/dsp/index.shtml DSP development tools including simulation, prototyping, and demos.