Artist's rendition of digital twins. (Image: Thinkstock)

How the Digital-Twin Concept Is Shape-Shifting

June 21, 2021
The architect behind the concept of a digital twin describes success factors in dynamic decision making.

At a Glance:

  • Dr. Michael Grieves, chief scientist for Advanced Manufacturing, Florida Institute of Technology, explains the premise behind digital twins.
  • Digital twin use cases are in their early stages. More research is needed before claims can be made of a cohesive digital twin that exists across all disciplines and the entire product lifecycle.
  • The benefit of building a digital twin is that the effective use of information becomes a substitute for wasting physical resources, such as time, energy and material.

When Dr. Michael Grieves first presented his visionary research on a conceptual digital model underlying product lifecycle management at a conference in 2002, he had an inkling that it could drive precision manufacturing.

“We had this underlying premise that we now have two things; one is the physical product, and the other is the digital representation of that product,” said Grieves, chief scientist for Advanced Manufacturing, Florida Institute of Technology. “The key was then making sure we could connect the two, feed data from the real world into the virtual world, and then use that information from the virtual world in the real world.”

A digital twin is a virtual representation of a piece of equipment across its lifecycle. Although the purpose of a digital twin differs from one application to another, data from sensors are typically used to map and analyze how the object responds to the physical world. Statistical and mechanistic modeling can be used to simulate, monitor, diagnose, predict and recalibrate outcomes of a jet engine, a physical plant, a city or even a living heart.
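In software terms, the core loop is simple: sensor readings from the physical asset update a virtual state, and analysis of that state feeds decisions back to the real world. The Python sketch below illustrates the pattern; the class name, sensor fields and thresholds are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class EngineTwin:
    # Minimal digital-twin sketch: a virtual state mirror for one engine.
    # Field names and limit values are illustrative assumptions.
    serial: str
    temp_c: float = 0.0
    vibration_mm_s: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # Data from the real world flows into the virtual representation.
        self.temp_c = reading["temp_c"]
        self.vibration_mm_s = reading["vibration_mm_s"]
        self.history.append(reading)

    def needs_inspection(self) -> bool:
        # Analysis in the virtual world feeds a decision back to the real world.
        return self.vibration_mm_s > 5.0 or self.temp_c > 700.0

twin = EngineTwin(serial="ENG-001")
twin.ingest({"temp_c": 650.0, "vibration_mm_s": 3.2})
print(twin.needs_inspection())  # False: readings inside the assumed limits
```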

The payoff, according to Grieves, was that the effective use of information became a substitute for wasting physical resources, such as time, energy and material. It generated efficiency across the product lifecycle, from design and manufacturing to operation and disposal.

Grieves cannot lay claim to naming his innovation, but he certainly can take credit for the technologies it inspired and the value it continues to deliver to the improvement of manufacturing processes and operations. (Bragging rights for coining the term “digital twin” go to NASA principal technologist John Vickers, who referred to it in his 2010 Roadmap Report.)

“I was a little ahead of my time,” said Grieves. “But I was confident that computing capability would continue to advance at its exponential rate and, eventually, it would catch up. I think we’re starting to see that right now.”

In the following Q&A, Machine Design’s senior editor Rehana Begg spoke with Grieves about the evolution of his revolutionary work. This version has been edited for clarity.

Machine Design: Today there are many versions of what constitutes a digital twin. What requirements should be met before we can actually label it a digital twin? Which elements should be present?

Michael Grieves: So, that’s an interesting question. I think you have to flip that to say, “Why do I want to have a digital twin?” And why you want to have a digital twin is that it creates value.

I drive everything from use cases… There are certain things that I look for: One is singularity of information. I don’t want to have multiple versions because they almost automatically go to being inconsistent.

Another is cohesion, which is the ability of all the parts and pieces to be consistent. In the physical world, if I increase the length of a beam, I increase its weight. In the virtual world, if I don’t have cohesion, I can increase its length and not increase its weight, only to find out I’ve got a problem. So those are the key characteristics.
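Cohesion can be enforced in a model by deriving dependent quantities rather than storing them independently. A minimal sketch of the beam example, with assumed material and cross-section values:

```python
class Beam:
    # Cohesion sketch: weight is derived from length, never stored
    # separately, so the virtual model cannot drift into a physically
    # impossible state. Density and cross-section are assumed values.

    DENSITY_KG_M3 = 7850.0      # steel, assumed
    CROSS_SECTION_M2 = 0.01     # assumed

    def __init__(self, length_m: float):
        self.length_m = length_m

    @property
    def weight_kg(self) -> float:
        # Increasing length automatically increases weight, as in the physical world.
        return self.length_m * self.CROSS_SECTION_M2 * self.DENSITY_KG_M3

beam = Beam(length_m=2.0)
print(beam.weight_kg)   # 157.0
beam.length_m = 3.0
print(beam.weight_kg)   # 235.5 -- weight stays consistent with length
```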

But again, you want to drive it from the use case. What use of that information am I going to have? And unfortunately, sometimes on the data side we get caught up with massive amounts of data and no information. We need to be conscious of the fact that we need to be able to use that information and create value in the organization, by either reducing costs or increasing functionality and capability. Or else we ought not to be doing that sort of thing.

MD: Can you provide a real-world example that showcases some of those success factors and where you think it has had real value?

MG: In my book (Product Lifecycle Management: Driving the Next Generation of Lean Thinking), I talked about engine manufacturers—the ability to use information from the sensoring of, for example, jet engines to predict a particular problem. One of the things I’m very fond of saying is that I really don’t want to have a problem with a jet engine at 30,000 feet. I don’t want to have a problem with it even when I’m sitting and waiting to get on the plane because that means I’m going to be in a long delay.

What I’d really like to be able to do is use that information and predict the fact that there is going to be a problem. Then, the next time it’s at the maintenance hub, that part gets replaced, so I never have the problem. So, I’m really thinking the digital twin can be a crystal ball to predict not only performance but also problems with my particular product, so I can get out in front of it.

The idea around Industry 4.0 is that when there’s a problem, we want to decrease the amount of time to remediate that problem. And, from my perspective, I don’t want to have a problem; I want to predict it before it occurs. Can we do that perfectly? Probably not. But if we can do it in a substantial manner, we can save ourselves a whole lot of effort. And remember, I’m talking about information as a replacement for wasted resources. When your [equipment] is down, that’s about as wasteful as it can get, and if there’s a catastrophic failure, with loss of human life, there is no price that you can put on a waste of that resource.

So, I think it’s an opportunity to be able to use that information. And for some of the industries out there, the ability to create a digital twin and understand how it operates is going to be a differentiator in terms of whether they can afford to do it. In the nuclear power industry, for example, you can’t afford to build stuff anymore. NASA has a huge problem in terms of the cost of building new rocket ships, and so we’ve got to move into the virtual world and work out the problems there. Bits are cheaper than atoms; they continue to get cheaper and atoms continue to get more expensive. And if I’m going to make mistakes, I certainly want to do it in the virtual world, not in the physical world.
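The predictive use of sensor data Grieves describes can be as simple as extrapolating a trend toward a failure limit and scheduling the replacement before the crossing. A deliberately simple sketch, with all figures assumed:

```python
import numpy as np

def predict_limit_crossing(times_h, readings, limit):
    # Fit a linear trend to sensor history and estimate when it crosses a
    # failure limit -- a simple stand-in for the statistical models a
    # production digital twin would use. All numbers here are assumed.
    slope, intercept = np.polyfit(times_h, readings, 1)
    if slope <= 0:
        return None  # no upward trend, no predicted crossing
    return (limit - intercept) / slope  # hour at which the limit is reached

hours = [0, 100, 200, 300]
vibration = [2.0, 2.4, 2.9, 3.3]   # mm/s, drifting upward
eta = predict_limit_crossing(hours, vibration, limit=5.0)
print(f"Schedule replacement before hour {eta:.0f}")  # roughly hour 680
```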

MD: This brings us to the role of the engineer and decision-making. How are digital twins influencing decision-making in the design phase? And how does it then follow on through into the production cycle?

MG: It really opens up a whole vista of opportunities for the engineer because, from an engineering perspective, you really can only look at the usual suspects—doing things with physical prototypes and things like that. If I can model and simulate, and if I get my physics right, I can look at a far wider range of scenarios than I could afford to look at when I’m dealing with physical prototypes.

Let’s take crash testing, for example. I can only afford to crash test a certain amount because it’s very expensive to do crash testing on any kind of commercial vehicle or even a passenger vehicle. If I’m doing it digitally, I can crash test to my heart’s desire… So it’s the ability of having this wide range of capability to look at areas where the engineer wouldn’t have looked and being able to assess that.
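Once crash tests happen in bits rather than atoms, the cost per scenario collapses and a large parameter sweep becomes trivial. In the toy sketch below, a simple formula stands in for a real finite-element solver; the formula and parameter ranges are assumptions, purely for illustration:

```python
import random

def simulated_peak_g(speed_kmh: float, overlap: float) -> float:
    # Toy stand-in for a crash simulation, returning peak deceleration in g.
    # A real digital twin would invoke a finite-element solver here; this
    # formula is an illustrative assumption, not crash physics.
    return 0.02 * speed_kmh ** 2 * (0.5 + overlap)

random.seed(0)
# Sweep thousands of scenarios -- affordable in bits, prohibitive in atoms.
results = [simulated_peak_g(random.uniform(30, 80), random.uniform(0.2, 1.0))
           for _ in range(10_000)]
print(f"Worst case across 10,000 virtual crashes: {max(results):.1f} g")
```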

The other aspect of the digital twin I often see: multiple versions of the product having the same problem. Why? Because nobody told the engineer at the beginning that the assumptions made on [that asset] don’t work. And when we get into the operational phase, that information never gets fed back into the engineering phase. And so, the next version has the same problem as the previous versions. Closing that loop, in terms of having the digital twin information—what I call the digital twin aggregate—aggregating all that information from things that are actually operational is key.
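The digital twin aggregate amounts to pooling operational reports across every fielded unit so that recurring faults surface for the next design revision. A minimal sketch, with an assumed report format and component names:

```python
from collections import Counter

# Digital-twin-aggregate sketch: pool field reports from every operating
# unit so recurring faults can be fed back to engineering. The report
# format and component names are illustrative assumptions.
field_reports = [
    {"unit": "A-001", "failed_component": "seal"},
    {"unit": "A-014", "failed_component": "seal"},
    {"unit": "A-022", "failed_component": "bearing"},
    {"unit": "A-031", "failed_component": "seal"},
]

fault_counts = Counter(r["failed_component"] for r in field_reports)
for component, count in fault_counts.most_common():
    print(f"{component}: {count} failures -- feed back to the design team")
```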

My perspective on a quality product is very different from the typical manufacturing company’s. I mean, their version is that quality control occurs in the manufacturing plant. And I would contend that that’s really specification control that gets done there. A quality product is a product that works for the user, to their perceptions. And if I don’t do that, I don’t care how I designed the product and to what tolerances I manufactured the product, it’s not going to be a quality product.

From my perspective, the focus is: Did the product operate for the user the way that they had perceived it was going to, or the way I told them it was going to? Having a digital twin allows me not only to make assumptions about the product but also to verify that that’s how the product operated.

MD: We’ve come a long way since you developed this concept of the digital twin. Looking back, which milestones stand out and what are the technology advancements that support the idea of a mature digital twin?

MG: Let me emphasize that we are really still in the very early stages of the digital twin. It’s not mature by any stretch of the imagination. We’ve conceptualized the digital twin and we’re basically starting to collect information on it. But a cohesive digital twin that exists across all the disciplines and the product lifecycle isn’t there yet. There’s a fair amount of work that needs to be done—you know, technologies and standards and things like that.

My work at NASA with my colleague John Vickers, who was instrumental in defining the digital twin, produced the 2010 NASA roadmap that basically introduced the actual terminology. I was not inspired enough to come up with the right name for it… By naming it, we put a stake in the ground and said, “Okay, this is what it is.” I think the computing capability, moving from 2D drawings to 3D, was clearly instrumental, and then the ability of having 3D structures that we can model and simulate, to do not only virtual testing but also validation, is important.

In 2015, an article on the World Economic Forum website said, “Okay, here is what the digital twin is.” I think it started to take off from that particular point. And clearly, the software providers’ ability to offer not simply 3D CAD, but also the modeling and simulation for the behavioral aspect, is critical.

Still, we have a fair amount of work to do in terms of [breaking down] silos of information. It needs to be an integrated version of the product, and it needs to be able to scale up so that we can have not only a digital twin of an assembly, but a digital twin of the entire system. And I think that’s why we’re seeing digital twins not only of specific individual components or products, but of entire systems.

We’re starting to see airplanes fly digitally before they’re actually physically made. I think we’re on the cusp of seeing some really important movements in terms of what we can do with digital twins. Again, sensoring—being able to collect the information from the physical things as they’re running—is critically important.

MD: By extension, can we talk about the idea of the digital twin as an exact replica or description of reality versus being the tool that does predictive analysis and drives reality? How will the digital twin influence future decision-making?

MG: So, that’s been my premise all along. The idea of replication—meaning, I can see what’s happening with my physical product—is important. But I’ve always said that the predictive aspect is where we really want to go. In fact, a few years back I proposed front-running simulation. It says: we’re going to run a simulation of every product, take all the physical data coming out of it and predict forward. What’s going to happen in the future? The product will say, “Hey idiot, if you keep doing the things that you’re doing, you know you’re going to have a catastrophic failure.”

The ability to predict performance means every product will have a little crystal ball saying, “If you keep doing what you’re doing, and I keep seeing these sensor readings, in two weeks there’s an 80% chance you’ll have a failure in this area. And in four weeks there’s a 90% chance.” So, it’s going to basically give you probabilities of what your problems are going to be based on the sensor readings. Then, as we collect data, we will continue to get better and better at making those predictions. I’ve always felt that the predictive aspect is going to be where the real value comes from.
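A front-running simulation of this kind can be approximated with a Monte Carlo projection from the current state. In the sketch below, the random-walk wear model, the weekly increments and the failure threshold are all assumptions for illustration; a real twin would project forward with validated physics:

```python
import random

def failure_probability(current_wear: float, weeks_ahead: int,
                        trials: int = 10_000) -> float:
    # Front-running sketch: project the current state forward with a
    # random-walk wear model and count limit crossings. The weekly wear
    # increment and the failure threshold of 1.0 are assumed values.
    failures = 0
    for _ in range(trials):
        wear = current_wear
        for _ in range(weeks_ahead):
            wear += random.gauss(0.04, 0.03)
        if wear >= 1.0:
            failures += 1
    return failures / trials

random.seed(1)
for weeks in (2, 4):
    p = failure_probability(current_wear=0.88, weeks_ahead=weeks)
    # The probability rises with the horizon, as in Grieves' example.
    print(f"{weeks} weeks out: {p:.0%} chance of crossing the failure limit")
```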

Too often we concentrate on the requirements of what we know we want products to do and what we know we don’t want them to do, but we miss the things that we didn’t know they were going to do, either positively or negatively. Now, we focus on the “do negatively,” because we obviously don’t want those to happen.

But if it does things positively that we didn’t know, it means we don’t understand the system. So, we need to have all four of those categories well defined. The way we do that is to continue to basically run simulations and look for the unusual suspects, if you will, as opposed to the usual suspects that we know about.

MD: How can industry get up to speed? What are the skills that are required, and what can manufacturers do to prepare? How can we onboard them?

MG: The key is to determine what the use cases are that are going to create value in your organization. If you don’t do that, and you basically say, “I want to have a digital twin that does everything,” you’ll never get there; that’s the boiling-the-ocean phenomenon. Pick areas that you know are going to create value. But have a plan that basically says, “I may not be able to do this today, but with the computing power that’s coming online, I can do it tomorrow.” So, we shouldn’t be designing simply for what we have the capability for today.

I’m fond of saying we’ve passed 55 billion transistors on a chip. By the way, in the 1970s we started with 2,000. By 2030 we’re going to have in the neighborhood of 7 or 8 trillion transistors on a chip. And by 2040, it will be in the hundreds of trillions. We don’t really understand what that means in terms of ability. But it means that we’re going to have a tremendous amount of computing power. Looking at what the future is for our future products, and building the sensoring [capability] in…
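For scale, the doubling cadence implied by the historical figures quoted here can be checked in a few lines; the start and end years below are assumptions:

```python
from math import log2

# Back-of-envelope check of the cadence implied by the quoted transistor
# counts (2,000 in the early 1970s, 55 billion today); the exact years
# are assumed for illustration.
start_year, start_count = 1971, 2_000
now_year, now_count = 2021, 55_000_000_000

doublings = log2(now_count / start_count)
years_per_doubling = (now_year - start_year) / doublings
print(f"{doublings:.1f} doublings in {now_year - start_year} years, "
      f"about one every {years_per_doubling:.1f} years")
# Roughly 25 doublings, about one every two years.
```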

I mean, if we can’t get information from what our product is doing, we’re never going to be able to understand how to predict either issues or performance of that particular product. It requires not only the digital twin but also the physical twin. We need smart, connected and—I emphasize—paranoid products, because the cybersecurity aspect is going to be critical.

I think it’s a wide range of opportunities. I’m starting to see digital twins of economic systems, of process systems, logistics systems and supply chains. We’re starting to see if we can visualize digital twins of things that don’t have geometry, for example. There are lots of opportunities that are going to allow us to be more effective and efficient in how we use resources.

About the Author

Rehana Begg | Editor-in-Chief, Machine Design

As Machine Design’s content lead, Rehana Begg is tasked with elevating the voice of the design and multi-disciplinary engineer in the face of digital transformation and engineering innovation. Begg has more than 24 years of editorial experience and has spent the past decade in the trenches of industrial manufacturing, focusing on new technologies, manufacturing innovation and business. Her B2B career has taken her from corporate boardrooms to plant floors and underground mining stopes, covering everything from automation & IIoT, robotics, mechanical design and additive manufacturing to plant operations, maintenance, reliability and continuous improvement. Begg holds an MBA, a Master of Journalism degree, and a BA (Hons.) in Political Science. She is committed to lifelong learning and feeds her passion for innovation in publishing, transparent science and clear communication by attending relevant conferences and seminars/workshops. 

