7 Questions Companies Should Ask Before Using a New Technology

July 16, 2020
Here’s how to use Design Technology Readiness Levels to see if a technology is mature enough to use in a new product.

Products are, at their core, a marriage of technologies. Some technologies provide functions and features while others let you make the hardware, assemble the system and write software. To deliver a good product, all technologies in it must be sufficiently mature.

One way to ensure this is to determine each technology’s Design Technology Readiness Level (DTRL). This tool uses seven questions answered by the people in the company responsible for designing and building the new product. It lets everyone involved with the product understand and discuss the problems at hand and how they might be solved.

Intro to DTRL

NASA developed the Technology Readiness Level (TRL) scale back in the 1970s as a consistent measure of a technology’s maturity. It is now widely used by the DoD and many large organizations to communicate a technology’s level of maturity (development). TRL is good at measuring a technology’s maturity or usability for a project, but it’s not well-suited for product design.

I updated the TRL assessment process into the DTRL method when I worked with a consulting customer who had trouble communicating with engineers and the other business units. We also wanted to use it to improve all communications within the design team—always a desirable goal. The results also gave us a way to identify and communicate risks to the whole team, regardless of their backgrounds or expertise, during product development.

For DTRL, “technology” refers to any science, technique or process that may be used in a product or to make it. Technologies can be functional, manufacturing-related or encompass any other critical topic requiring time and effort to develop before a product goes into production.

DTRL should only be applied to critical or new technologies in a product. It is not helpful for mature technologies such as sheet metal forming, metal finishing and manufacturing PC boards. The exception to that is if it relates to a type of forming, finishing or manufacturing that is new to the organization or its vendors, or else requires tighter tolerances or other substantial changes.

If a product depends on several new or critical technologies, each should be assessed. This method is also useful when choosing between several technologies as alternatives for a single feature. In such cases, the DTRL assessments help uncover information that can help the team decide which to choose.

Finally, this scoring system can be used to assess the product’s development challenges before a project begins. It can then be updated periodically as the project progresses.

The assessment is based on the answers to seven questions. All parties responsible for development and manufacturing should answer them. The results then serve as a basis for discussions on what changes to make. The survey can then be repeated until a consensus is reached.

The process should be as follows:

1. The team identifies technologies that might be used in the product to determine which are immature, risky or new.

2. Responsible stakeholders score each technology they are accountable for or have knowledge of. This includes not only team members, but others in the organization and vendors with a stake in or knowledge about the technology.

3. During a one-hour meeting, consensus is developed for each measure. If consensus cannot be reached relatively quickly, there may be:

  • Uncertainties that should become evident during the assessment.
  • An inconsistent definition of the “technology” being evaluated.
  • A need to break the technology into several sub-technologies. (This process makes these challenges evident.)
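To make the bookkeeping concrete, here is a minimal sketch in Python of how the individual scores and reasoning might be recorded before the consensus meeting. The field names, the dictionary layout and the spread check are illustrative assumptions, not part of the published DTRL method.

from dataclasses import dataclass, field

# The seven DTRL measures, abbreviated. Labels are paraphrased for illustration.
QUESTIONS = [
    "organizational maturity",     # Question 1
    "vendor/consultant maturity",  # Question 2
    "validation",                  # Question 3
    "interfaces",                  # Question 4
    "manufacturability",           # Question 5
    "specifications",              # Question 6
    "confidence",                  # Question 7
]

@dataclass
class Assessment:
    """One stakeholder's scores and reasoning for one technology."""
    stakeholder: str
    technology: str
    scores: dict = field(default_factory=dict)      # question -> numeric score
    rationale: dict = field(default_factory=dict)   # question -> assumptions, references, notes

def score_spread(assessments, question):
    """Spread between the highest and lowest score on one question.
    A wide spread flags a topic the consensus meeting should dig into."""
    values = [a.scores[question] for a in assessments if question in a.scores]
    return max(values) - min(values) if values else 0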

The Seven Questions

Here are those seven questions and an explanation of each:

Question 1: What is the technology maturity in the organization?

This first question measures the company’s experience with the technology. It is based on a nine-point scale (to be consistent with the original NASA scale).

For this question, it is often best to start at the bottom, assuming a score of 0, and work up until you find a statement that describes the company’s status regarding the technology. A score of 9 addresses the group or design team directly involved, and levels 8 and below are focused on the entire company. Each measure has two wordings. The first is more general, while the second is geared toward the level of prototype or product testing.

If there’s a problem assessing a technology, consider breaking it into sub-parts or redefining the “technology” being assessed.

The goal here is not only to score the technology, but also to capture each respondent’s reasoning behind his or her answers. That reasoning should include assumptions, specific knowledge of the technology and references to prior projects, products and literature. These will be important when trying to build consensus within the team.

Question 2: What is the technology maturity of vendors and consultants?

An organization’s knowledge about a technology can be extended by using vendors, consultants and other partners. If technology maturity in the organization (according to the first question) is less than 7, the partners’ capabilities can come into play and increase the effective maturity. Note that even though only four values are shown, any number from 1 to 10 is possible.

This outsider’s maturity is not as valuable as direct organizational experience and should be discounted. For example: If the technology maturity in the company is 4 (as per Question 1) and a vendor is at level 8, the updated DTRL might be 5 or even higher, depending on the relationship with the vendor.
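The article leaves the exact discount to the team’s judgment of the vendor relationship, but the idea can be sketched as a simple blending rule. In the hypothetical Python function below, the 0.25 weight on the partner’s advantage is an assumed placeholder, not a published rule.

def effective_maturity(org_level, partner_level, partner_weight=0.25):
    """Blend in-house maturity (Question 1) with a partner's maturity (Question 2).

    Outside experience is less valuable than direct organizational experience,
    so only a fraction of the partner's advantage is credited. The 0.25 weight
    is an assumption for illustration.
    """
    if partner_level <= org_level:
        return org_level                     # a weaker partner adds nothing
    boost = partner_weight * (partner_level - org_level)
    return min(9, round(org_level + boost))  # stay on the nine-point scale

# The example from the text: in-house maturity of 4, vendor at level 8.
# A 0.25 weight yields 5; a stronger vendor relationship (a larger weight)
# would push the result higher.
print(effective_maturity(4, 8))  # -> 5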

Question 3: Do we know how we will validate the technology’s maturity level? Will all critical variables be tested over all realistic conditions (validation)?

A technology can only be mature if all the important parameters that characterize it are known, and if it is known how to validate that those parameters meet the tests showing the technology will work in the application.

The goal is not only to get a “validation” score, but also the reasons for that score. Scores below 10 lower the DTRL, as will be discussed later.

Question 4: Are the interfaces with adjacent sub-systems known and stable (interfaces)?

No technology works by itself. It must work with other parts, assemblies, inputs and sub-systems. How much is known about these interfaces affects the knowledge of the technology. When scoring or discussing the interfaces, think in terms of form and function. How well does it physically fit in the product along with the other parts and assemblies? From a functional standpoint, how well understood are the flows of information, energy, materials and controls across the interface?

Question 5: Are the expected manufacturing methods and tolerances needed to make the technology work similar to what the company already has (manufacturability)?

A technology also has to be carried out by hardware or software, and it must be manufactured or written. This measure assesses both in-house (make) and vendor (buy) manufacturing capabilities. And as with all the other measures, those making the assessments should note their assumptions and sources of information.

If only well-known manufacturing methods are used, this measure does not affect the maturity assessment. If the methods are only dimly understood, however, it will likely reduce the overall assessment. If the technology being assessed is a manufacturing process, ignore this measure.

Question 6: Are the design specifications sufficiently complete, stable and up-to-date (specifications)? 

It is important to assess how the technology will be used, as reflected by its specifications. This assessment does not imply a specific source for the specifications. They may be imposed on the design team or developed by it. Either way, some minimal sense of specification maturity is needed to ensure a technology will be suitable for the application. The design team needs to determine this minimum level.

Questions 3, 4, 5 and 6 address the VIMS (Validation, Interfaces, Manufacturing and Specifications). For scores less than 10, the DTRL can be reduced, as will be shown.
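The article does not publish the exact reduction, so the sketch below assumes a simple rule: every point a VIMS measure falls short of 10 subtracts a fixed fraction from the DTRL. The 0.25-point penalty per point of shortfall is a placeholder for illustration.

def apply_vims(base_dtrl, vims_scores, penalty_per_point=0.25):
    """Reduce the base DTRL (from Questions 1 and 2) for weak VIMS scores.

    Each VIMS measure (validation, interfaces, manufacturability,
    specifications) is scored against a maximum of 10; every point of
    shortfall subtracts penalty_per_point from the DTRL. The penalty
    value is an assumption for illustration.
    """
    shortfall = sum(10 - s for s in vims_scores.values())
    return max(1.0, base_dtrl - penalty_per_point * shortfall)

# Example: a base DTRL of 6, with validation and interfaces lagging.
print(apply_vims(6, {"validation": 7, "interfaces": 8,
                     "manufacturability": 10, "specifications": 10}))
# -> 6 - 0.25 * (3 + 2) = 4.75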

Question 7: Is there high confidence in the answers to the first six questions?

Independent of the six measures above, but of equal or possibly greater importance, is the confidence (i.e., certainty) in the answers given.

Confidence is used as a variable independent of the other six measures and is an assessment of the knowledge about all of them. Teams can assess the confidence for each measure, but that would yield too much detail and is probably not worth the time and effort.

To get a good handle on this, each team member should ask themselves:

  • Do you know how sensitive the technology is to outside conditions?
  • Do you know the technology’s failure modes and effects?
  • Do you know how to control the technology throughout the product’s life?
  • If asked 100 questions about the technology, could you answer most correctly?

If a team member answers “yes” to all of these, they should consider themselves an expert and select a high confidence level. If weak in some areas, then lower the confidence assessment.
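One way to turn the four self-check questions into a confidence level is the rough three-band mapping sketched below. The bands themselves are an assumption; the article only says to choose a high confidence level when all answers are yes and to lower it when weak in some areas.

def confidence_level(checks):
    """Map the four yes/no self-check answers to a rough confidence band.
    The thresholds are illustrative assumptions, not part of the DTRL method."""
    yes_count = sum(checks.values())
    if yes_count == len(checks):
        return "high"     # answered yes to everything: treat yourself as an expert
    if yes_count >= len(checks) // 2:
        return "medium"
    return "low"

checks = {
    "sensitivity to outside conditions known": True,
    "failure modes and effects known": True,
    "control over the product's life known": False,
    "could answer most of 100 questions correctly": True,
}
print(confidence_level(checks))  # -> "medium"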

DTRL in Action

Here’s a hypothetical example of a company using DTRL.

Barbara and Belinda are members of a design team at Neutranzics, a company that manufactures consumer products. For its new project, it is considering using a radio frequency transmitter.

Barbara is the product owner who has been with the company for seven years and led many products to release. Belinda is a recently graduated electrical engineer who has taken several courses in RF applications.

They independently assess the RF transmitter to determine its DTRL. The comparison of their results is shown above. Comments for each measure explain their reason for the assessments.

There are options for how the Neutranzics design team uses this information. At the very least, the results should lead to discussion about validity testing and interfaces, areas they disagree on.

Some managers want a single number for the score. The simplest way to do that is to average the scores from Questions 5 and 7. But it’s best if companies look at all the scores and work on areas that scored poorly. Averaging results hides the areas that need work.

In this example, Barbara and Belinda are in close agreement. Barbara has concerns with validity and interfaces. If discussions do not resolve these concerns, then from Barbara’s viewpoint, her score may be lower.

The exact score is not as important as the fact that answering the seven questions has given Barbara and Belinda’s team a glimpse into the RF transmitter’s maturity for their new product. Further, this glimpse can be widened by talking with others outside the team, especially those who have dealt with interfaces and validity testing for the RF transmitter.

Based on what the DTRL has uncovered, the risks in using the RF transmitter in the product are not high. The specifications are fairly well developed and manufacturing is not an issue.

This exercise may not have developed any new analytical results, but it has made important aspects of using the RF transmitter evident. If the team had been exploring an alternative technology for use in the product, an option B, the assessment of both technologies would provide a clear basis on which to compare and contrast them.

David G. Ullman is a retired design professor, an ASME Life Fellow and an author. His text, The Mechanical Design Process (6th edition), explains best practices for getting from need to product. For more information on DTRL, go to www.mechdesignprocess.com/dtrl. Feel free to contact him with comments or questions at [email protected].
