By David H. Stannard
Advanced DFT Projects
Mentor Graphics Corp.
Edited by Leland Teschler
It should come as no surprise that Moore's Law of regularly doubling chip capacity is having an impact on automatic test equipment (ATE) for ICs. ATE, of course, applies patterns of signals and checks the response to make sure newly minted ICs are operating properly. But as the number of devices on a chip grows, so, too, does the number of tests needed to verify its operation. The more testing needed, the bigger and more expensive the equipment that provides it.
The increasing cost of testing often forces chipmakers into trade-offs between completely checking out chips and making bigger investments in ATE. Suppose, for example, that 10,000 test patterns are what it takes to completely exercise a new chip, but the memory on existing production-line pin testers can handle only half that. Without writing a check to upgrade equipment, the chipmaker is forced to either leave parts of the chip untested or test it in two passes. Users of the chip won't put up with the first option, and the second option generally takes too long to be feasible.
Designers are trying to put more testing facilities on the chip itself to address such concerns. Design-for-test (DFT) techniques aim to reduce the cost of ATE by letting this equipment handle several generations of ICs. This takes place by building test circuits into ICs and automatically generating tests through automatic test-pattern generation (ATPG). Recent enhancements of ATPG include embedded deterministic testing (EDT), a technique that incorporates testing circuitry on the chip in a way that can both eliminate the need to upgrade ATE memory and cut testing time. EDT, like regular ATPG, is based on scan path techniques. Its test patterns, moreover, are deterministic. It also produces highly compressed test patterns thanks to new algorithms that employ on-chip circuitry.
A quick review of the basic design-for-test techniques gives the kind of background needed to better understand how EDT works. The first digital ICs carried little or no testing circuitry on the chip itself. Engineers devised sets of input patterns that, when applied to the relatively simple logic on the chip, would make defects show up as deviations from the expected output patterns. Engineers graded the effectiveness of their test patterns by running simulations to check how many faults the patterns would detect (known as fault grading).
The most widely used fault model continues to be the stuck-at fault model. It uses the fact that physical defects such as metallization shorts, silicon contamination, and similar problems often manifest themselves as a terminal on a gate that's stuck at either 0 or 1.
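The stuck-at idea can be made concrete with a toy simulation. The netlist, node names, and patterns below are invented for illustration; real fault simulators work on full gate-level netlists, but the pass/fail logic is the same: a pattern detects a fault when the faulty circuit's output differs from the good circuit's output.

```python
# Toy stuck-at fault simulation: y = (a AND b) OR c, with one internal
# node n1. A "fault" forces one named node to a fixed 0 or 1.

def circuit(a, b, c, fault=None):
    """Evaluate the circuit; `fault` is a (node_name, stuck_value) pair."""
    def node(name, value):
        if fault and fault[0] == name:
            return fault[1]          # node is stuck at 0 or 1
        return value
    a, b, c = node("a", a), node("b", b), node("c", c)
    n1 = node("n1", a & b)           # internal AND gate output
    return node("y", n1 | c)

def detects(pattern, fault):
    """A pattern detects a fault if faulty and fault-free outputs differ."""
    a, b, c = pattern
    return circuit(a, b, c) != circuit(a, b, c, fault)

# a=1, b=1, c=0 exposes n1 stuck-at-0: good y=1, faulty y=0.
print(detects((1, 1, 0), ("n1", 0)))   # True
print(detects((0, 0, 0), ("n1", 0)))   # False: outputs match, fault hides
```

Fault grading, in this miniature setting, would simply count how many faults in a fault list are detected by at least one pattern in the test set.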
Eventually, chips became too large for the manual creation of test patterns to be practical. This ushered in scan technology and structured approaches to DFT.
The idea behind scan techniques is to make it easier to test circuits by transforming them from complex sequential circuits into simpler combinational ones. The approach is to convert the existing memory elements, such as D flip-flops or latches, into scan-memory elements. During testing these scan-memory elements are configured as shift registers, serving as an easy means for controlling and observing internal nodes of a circuit. The chip real estate incurred for the conversion of memory elements into scan-memory elements is generally considered well worth paying for the resulting improvement in testability.
There are several steps involved in applying a typical scan-based test pattern. First, the IC is placed in the scan or shift mode and the test pattern is shifted in from an ATE. Then the IC is placed in the capture mode and the response is captured. Finally, the IC is placed back in the shift mode and the response is shifted out and compared with the golden response stored in the ATE.
It is important to note that the number of memory elements in the longest scan chain determines the number of clock cycles required to apply a pattern. For this reason, the scan chains are kept as balanced as possible so no chain is much longer than any other. To cut test time further, engineers merge the scan-out of the response for one test pattern with the scan-in of the next test pattern.
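The shift/capture/shift sequence described above can be sketched as a simple shift-register model. The chain length, pattern, and stand-in "capture" function below are all hypothetical; the point is the cycle count, which is set by the chain length.

```python
# Sketch of applying one scan pattern to a single chain, modeled as a
# list of flip-flop values. `capture` stands in for the combinational
# logic response loaded during the capture cycle.

def apply_pattern(chain, pattern, capture):
    """Shift the pattern in, capture, then shift the response out.
    (In practice the next pattern shifts in on the same cycles that
    unload this response, as the article notes.)"""
    cycles = 0
    for bit in pattern:              # shift mode: one cycle per flip-flop
        chain.insert(0, bit)
        chain.pop()
        cycles += 1
    chain[:] = capture(chain)        # capture mode: one cycle
    cycles += 1
    response = []
    for _ in range(len(chain)):      # shift mode again to unload response
        response.append(chain.pop())
        chain.insert(0, 0)
        cycles += 1
    return response, cycles

chain = [0, 0, 0, 0]
resp, cycles = apply_pattern(chain, [1, 0, 1, 1], lambda c: [b ^ 1 for b in c])
print(cycles)   # 9: four cycles in, one capture, four cycles out
```

With thousands of flip-flops per chain and thousands of patterns, these per-pattern shift cycles are exactly what dominates test time and tester memory.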
A point to note is that the input patterns and the expected output patterns used for comparison must be stored on the ATE memory. The bigger the chip, the more patterns needed. The more patterns needed, the bigger the tester memory.
One problem with the stuck-at fault model is that it is static. It does not verify that the circuit is free of speed-related defects such as delays caused by signal paths that are too long. Special tests check for such difficulties by initiating a transition at the beginning of a path and watching for the response at the end during a defined time window.
Built-in self-test (BIST) is a technique often used for testing large logic blocks. Test patterns get generated on-chip by a BIST controller (basically a pseudo-random pattern generator). Test outputs get captured on-chip by what are called signature registers. These compress the test results into a single signature pattern that gets compared to a golden signature to gauge the health of the tested circuitry.
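A minimal sketch of the BIST arrangement follows. The LFSR polynomial, register widths, and the stand-in "circuit under test" are invented for illustration and not tied to any particular BIST controller; the structure, though, mirrors the text: a pseudo-random generator feeds the logic and a signature register compresses the responses for comparison against a golden signature.

```python
# Illustrative BIST flow: LFSR pattern generator plus a shift-and-XOR
# signature register (a simplified compactor).

def lfsr(seed, taps, width, count):
    """Yield `count` pseudo-random `width`-bit patterns (Fibonacci LFSR)."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def signature(responses, width=8):
    """Fold a response stream into one signature via rotate-and-XOR."""
    sig = 0
    for r in responses:
        sig = (((sig << 1) | (sig >> (width - 1))) & ((1 << width) - 1)) ^ r
    return sig

# Stand-in circuit under test: any deterministic function of the pattern.
cut = lambda p: (p * 3 + 1) & 0xFF

golden = signature(cut(p) for p in lfsr(0x5A, (7, 5, 4, 3), 8, 100))
observed = signature(cut(p) for p in lfsr(0x5A, (7, 5, 4, 3), 8, 100))
print(observed == golden)   # True: a healthy circuit reproduces the signature
```

Note the compression is lossy: a defective circuit could, with small probability, produce the golden signature anyway (aliasing), which is one reason BIST coverage claims need care.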
What's notable about BIST is that it generates test patterns and evaluates test results on the chip itself. This eliminates the need for ATE memory that accomplishes the same thing. A problem, though, is that BIST patterns aren't deterministic; they may miss some faults, and their circuit coverage is typically lower than that achieved with ATPG.
Furthermore, there is a debate over the ability of BIST to effectively accommodate additional fault models such as at-speed versions.
One way to boost BIST test coverage is to insert test points into the logic. However, test-point insertion is fraught with difficulties because it involves modifying the circuit. It also complicates the effort required for getting the desired circuit timing.
One recently developed approach reduces the volume of test data and production test time by implementing what are called reconfigurable scan chains on the IC.
Typically, chip designers create multiple scan chains that all shift in parallel, thus reducing the time needed to shift values in and out of the chip circuits. It is also becoming common practice to add a multiplexer in each scan chain. The multiplexer serves to connect two or more scan chains together, if need be, to form one or more longer scan chains.
Among the advantages of the resulting reconfigurable scan chains is that they can accommodate physical limitations of the ATE. For example, say a chip contained 100 scan chains. A production line tester would need 200 pins, 100 each for inputs and outputs, to test all chains simultaneously. If a machine is not available with this many pins (quite possible on the typical test floor), the multiplexers can be told to reconfigure into fewer but longer scan chains to fit the pin-count of available equipment. This technique is applied both at the wafer and package level.
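The pin-count arithmetic behind reconfigurable scan chains is straightforward, and a small helper (hypothetical, written for this article's 100-chain example) makes the trade-off explicit: halving the channel count via the multiplexers doubles the chain length, and hence the shift cycles per pattern.

```python
# Hypothetical model of reconfiguring scan chains to fit a tester's pins.
import math

def reconfigure(num_chains, chain_len, tester_pins):
    """Each scan channel needs one scan-in and one scan-out pin. Merge
    chains (via the multiplexers described above) until the channel
    count fits the pin budget; return (channels, new chain length)."""
    channels = tester_pins // 2
    merge_factor = math.ceil(num_chains / channels)
    return channels, merge_factor * chain_len

# 100 chains of 500 flops each on a 64-pin tester:
channels, new_len = reconfigure(100, 500, 64)
print(channels, new_len)   # 32 channels; chains grow to 2000 flops each
```

The reconfiguration salvages the test on a smaller machine, but the longer chains mean proportionally more shift cycles, so it trades test time for tester availability.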
A related technique called dynamic scan chains basically divides a scan chain into smaller segments and provides a mechanism to bypass some segments as a way of reducing test time and the number of test patterns. This method makes use of the fact that tests for most faults require relatively few scan values. The rest of the values on the pins can be "don't cares" (typically written as X).
In dynamic scan chaining, multiplexers are built into the scan chains at various points in a way that permits bypassing segments of the chain having Xs in the test patterns. The resulting smaller scan segments don't take as long to shift and use fewer test patterns as well. The trade-off, of course, is more chip area devoted to test circuits. In addition, the segments to be bypassed, and hence the placement of multiplexers, depends on the test patterns. Thus even minor modifications to the circuit design that cause the test patterns to change may force the multiplexers to be repositioned.
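The bypass decision can be illustrated with a toy model. The segment sizes and pattern below are invented; the rule is the one the text describes: a segment whose destined pattern bits are all don't-cares (X) is skipped, shrinking the shift count for that pattern.

```python
# Sketch of dynamic scan chaining: segments filled entirely with
# don't-care (X) bits are bypassed via their multiplexers.

def shift_cycles(segments, pattern):
    """`segments` lists segment lengths along one chain; `pattern` is a
    string over '0'/'1'/'X'. Returns shift cycles after bypassing."""
    cycles, pos = 0, 0
    for length in segments:
        bits = pattern[pos:pos + length]
        pos += length
        if set(bits) != {"X"}:       # segment carries at least one cared bit
            cycles += length
    return cycles

# A 12-flop chain in three 4-flop segments; only the middle segment has
# specified bits, so the first and last segments are bypassed.
print(shift_cycles([4, 4, 4], "XXXX10X1XXXX"))   # 4 cycles instead of 12
```

This also shows why the multiplexer placement is pattern-dependent, as noted above: move the specified bits to a different segment and a different set of bypasses becomes profitable.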
A new technology called embedded deterministic test (EDT) is another way of reducing test time (by up to a factor of 10) and data volumes. It effectively extends the memory of installed ATE without necessitating a physical upgrade. EDT does this by decoupling the chip-to-ATE interface from the number of scan chains.
In the ordinary scan ATPG technique, the number of scan channels between the ATE and the chip essentially equals the number of scan chains on the chip. With EDT, however, there can be up to 10 times more scan chains inserted in the chip than used by the ATE to communicate with the chip. Thus there is a factor of 10 fewer clock cycles needed to apply a pattern compared to the regular scan technique. This feature, in combination with the on-chip EDT logic, significantly reduces test data volume and test application time.
Physically, EDT takes the form of special blocks of logic on the chip. The Decompressor block sits between the incoming test signals from the ATE and the inputs to the scan chains. The Selective Compactor block sits between the outputs of the scan chains and the outputs read by the ATE. As its name implies, the Decompressor expands the test patterns coming from the ATE into the larger patterns that are actually applied to the scan chain inputs. The Selective Compactor, in turn, compresses the resulting scan chain outputs into a much smaller number of output signals that are then presented to the ATE for examination.
Note that the Selective Compactor also tolerates unknown (X) states and handles them gracefully. Say, for example, a given chip has 160 internal scan chains and 16 channels are used to communicate between the ATE and the chip. To the ATE, the chip would appear to have only 16 short scan chains. For each clock cycle, 16 bits would be applied to the Decompressor inputs. The Decompressor outputs, in turn, would load 160 scan chains every clock cycle. The reverse process would take place at the compactor.
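The test-time arithmetic for this 160-chain, 16-channel example works out as follows. The flip-flop and pattern counts below are invented round numbers; the simplified model assumes the flip-flops divide evenly into balanced chains in both configurations.

```python
# Back-of-the-envelope cycle counts for scan with and without EDT.

def edt_savings(internal_chains, ate_channels, flops, patterns):
    """Compare total shift cycles: plain scan is limited to one chain per
    ATE channel; EDT spreads the same flops over many short chains."""
    plain_cycles_per_pattern = flops // ate_channels
    edt_cycles_per_pattern = flops // internal_chains
    return (plain_cycles_per_pattern * patterns,
            edt_cycles_per_pattern * patterns)

plain, edt = edt_savings(160, 16, 80_000, 10_000)
print(plain // edt)   # 10: the reduction tracks the chains-to-channels ratio
```

The factor-of-10 reduction in shift cycles matches the 160:16 ratio of internal chains to ATE channels cited in the text, which is the sense in which EDT decouples the chip-to-ATE interface from the number of scan chains.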
Another complementary part of the EDT technology is a special technique that generates highly compressed deterministic patterns specifically for the on-chip EDT logic. This method transforms a sparsely specified internal scan test pattern into a highly compressed external pattern. The patterns that are stored on the tester are the patterns that are to be applied to the Decompressor and the response to be observed on the outputs of the Compactor.
EDT technology is based on standard scan/ATPG methodology. All of the EDT logic resides only in the scan paths. Functional paths are left untouched. The underlying deterministic pattern generation is capable of handling multiple fault models, static as well as speed-based, and can generate high-quality tests without any intrusive test-point insertion. In addition, the pattern generation technique performs additional analysis to ensure that the Selective Compactor doesn't mask or alias faults.