AI Gains Physical Intelligence and Transforms Robotics & Automation Design
Universal Robots (UR), a division of Teradyne Robotics, is among a dozen robotics companies, including BYD Electronics, Siemens and Alphabet’s Intrinsic, that are actively integrating physically based simulation and AI models into their software frameworks and robot models.
Since last November, when UR unveiled AI Accelerator, an extensible AI-driven robotics toolkit that enables development of AI-powered applications for UR e-Series cobots, the company has worked steadily to bring its physical AI solution to market.
In this video, Anders Billesø Beck, VP of Innovation and Strategy at UR, joins Machine Design’s editor-in-chief, Rehana Begg, in a discussion on how artificial intelligence meets the physical world through advanced robotics. Physical AI refers to the integration of AI algorithms into physical equipment, such as robots equipped with sensors and actuators. Referencing current use cases, Beck describes five applications that demonstrate how these systems make decisions by perceiving, navigating and manipulating their physical environments.
Advancing AI and Robotics Tasks in the Physical World
Behind the complexity of creating robots that can move through changing environments lies expertise in neural networks. In its current iteration, this specialization unlocks “generative physical AI,” that is, models that capture the spatial relationships and physical rules of the environment. The objective is to facilitate more natural interactions between humans and machines. With physical AI, robots can now demonstrate transformative operational capabilities across a variety of settings.
Generating these advanced insights and actions invariably demands strategic partnerships between companies with domain expertise in complementary disciplines. UR’s AI toolkit runs on its PolyScope X software platform and is powered by NVIDIA Isaac. Specifically, UR is integrating Isaac Manipulator, a reference workflow built on NVIDIA’s accelerated computing libraries, to expedite advanced AI robotics applications. With built-in demo programs, users can now unlock features such as pose estimation, tracking, object detection, path planning, image classification, quality inspection and state detection.
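To make that feature list concrete, the short sketch below shows the kind of perception-to-motion handoff such a toolkit enables: a pose estimate for a detected part is turned into a pre-grasp approach pose for the arm. It is a minimal illustration only; the function names and numbers are assumptions, not the PolyScope X or Isaac Manipulator API.

```python
# Minimal sketch (not the PolyScope X or Isaac Manipulator API): turn a part's
# estimated pose into a pre-grasp approach pose for the arm. All names and
# values are illustrative assumptions.
import numpy as np

def pose_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a position and a 3x3 rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def approach_pose(part_pose, standoff=0.10):
    """Hover `standoff` meters along the part's local z-axis before grasping."""
    offset = pose_matrix([0.0, 0.0, standoff], np.eye(3))
    return part_pose @ offset

# Pretend a pose-estimation model returned this part pose in the robot base frame.
part_pose = pose_matrix([0.45, -0.12, 0.03], np.eye(3))

pre_grasp = approach_pose(part_pose)
print("Pre-grasp position (m):", np.round(pre_grasp[:3, 3], 3))
# A real application would hand pre_grasp and part_pose to the motion planner
# and close the gripper once the approach completes.
```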
READ MORE: Q&A: How Contextualized Data and AI Agents Enhance Manufacturing Automation
The immediate payoff has been the validation of a cluster of new AI use cases for robot arms, or manipulators, that can seamlessly perceive, understand and interact with their environments.
“The [AI Accelerator toolkit] really lowers the bar dramatically for new AI innovation,” said Anders Billesø Beck, VP of Innovation and Strategy at UR. “It gives them access to the newest, the most powerful AI models, so that their development will be way faster.”
Generative AI Tools Signal Long-Term Opportunity
Companies like UR no longer need to be convinced of the potential of AI. A recent McKinsey report, Superagency in the Workplace, projected the long-term AI opportunity at $4.4 trillion in added productivity growth potential from corporate use cases. The challenge, the report noted, is that short-term returns are unclear. Yet more than 92% of companies in that survey of 3,613 employees (including 238 C-level executives) plan to increase their AI investments.
Market reception of UR’s AI Accelerator has been positive, and more than 20 partners have already developed AI offerings on the platform, Beck said.
Against this backdrop, Machine Design spoke to Beck to better understand how UR is applying AI to robotics to unlock creativity and efficiency.
Editor’s Note: The transcript below has been lightly edited for length and clarity.
Machine Design: Can we start our conversation with an explanation of what we mean by physical AI?
Anders Billesø Beck: Absolutely. I think we all can recall how over the last couple of years AI has been dramatically on the rise across non-physical applications. That’s everything from ChatGPT to Microsoft Copilots, and so forth. We all use it more and more for text generation. We commonly see image generation using generative AI. Even in gaming, we see a lot of the generation of beautiful scenes and games, and so on. All of that is now AI-generated.
That has been working tremendously. There’s no doubt that we’ve seen big changes across a lot of industries. Yet the physical world is one of the remaining frontiers of AI, and making AI have a dramatic impact there is where things get challenging. I think your audience knows how difficult it is to design things that manipulate and change the physical world.
Physical AI is the ability to take the advancements that we see in AI today into the physical world—especially through robotics. It’s actually also for autonomous driving. It’s also a lot of other physical interactions. But there’s no doubt that robotics is the place where physical AI will have access to the world to really make an impact across a lot of different applications.
MD: Can you expand on this idea? We see everybody using the terminology, and we don’t really know what is behind it. You’re calling in from NVIDIA’s GTC conference, where you’re rolling out five different applications that you can speak to on this call alone...Walk us through each application. The first application is the 3D Infotech dynamic metrology.
ABB: 3D Infotech is a company that does metrology. It’s not recognizing objects, but measuring for quality inspection using robots. It is scanning parts and understanding whether they are up to par with the manufacturing specifications. Using robots for that is a way to really accelerate those tasks with much more flexible, cost-effective machinery than traditional, larger measurement machines.
3D Infotech is adopting AI on two fronts, making it much faster to identify the parts and to plan the scanning. The challenge is that using a robot for scanning parts is actually relatively difficult. If you want to move the robot around a part or multiple parts, and you want to do in-line quality inspection, which could be on a moving line, then you can use AI to recognize the part, plan the robot’s motion around it and use the metrology system to measure and scan it. In this case, one AI element is image recognition, which is something AI is very, very strong at and is being adopted even more.
READ MORE: TI Offers Functionally Isolated Modulators for Precision Robotics Control
The other is accelerated planning: planning very smooth and precise robot motions around the part, which could in many cases have different orientations, so it needs some level of adaptability. It’s an exciting evolution of the 3D Infotech system, which not only uses AI to accelerate deployment, but also makes the system much more user-friendly, especially for people who are not robotics experts and for those who do metrology on parts whose positioning could vary over time.
MD: And to clarify, you are using the UR3e cobot. I read that you’re scanning a workpiece, comparing it with CAD models, and then it highlights dimensional inaccuracies and projects them onto the workpiece surface. What happens next?
ABB: That’s the main function. But you’re right: As a manufacturer, metrology could be in development departments, or it could be inbound quality inspection to validate whether the parts match what you expect. And 3D Infotech has done it in such an intuitive way that you scan the part with very high accuracy, within a few microns, and then it can project color coding directly onto the workpiece.
So, it would recognize the part, and it would back-project a color scheme that highlights divergence from the 3D model, perhaps even against the manufacturer’s tolerances. It would put a beautiful color coding on top. You would be able to see up front, without having to use complicated computer systems, where these parts either need to be reworked, or where you may have casting challenges if the casting is not accurate, or machining tolerance problems, and so on. So, it’s a very intuitive way of using AI in such a metrology workflow.
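The color-coded deviation map Beck describes boils down to measuring how far each scanned point sits from the CAD reference and binning that distance into tolerance bands. The sketch below is an illustrative stand-in only, using synthetic points rather than 3D Infotech’s software; the 100-micron tolerance is an assumed figure.

```python
# Illustrative sketch only (not 3D Infotech's software): compare a scanned
# point cloud against reference points sampled from a CAD model, then bin
# the deviations into color bands for projection onto the workpiece.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Stand-ins for CAD-sampled surface points and a noisy scan of the same part.
cad_points = rng.uniform(0.0, 0.1, size=(5000, 3))                       # meters
scan_points = cad_points[:2000] + rng.normal(0.0, 40e-6, size=(2000, 3))  # 40-micron noise

# Deviation of each scanned point = distance to its nearest CAD reference point.
deviations, _ = cKDTree(cad_points).query(scan_points)

# Bin deviations into color bands against an assumed 100-micron tolerance.
tolerance = 100e-6
colors = np.where(deviations <= 0.5 * tolerance, "green",
                  np.where(deviations <= tolerance, "yellow", "red"))
print({c: int((colors == c).sum()) for c in ("green", "yellow", "red")})
```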
MD: The second example looks at CNC machine tending, and I believe it’s with T-Robotics using GenAI-driven programming. What is behind this application?
ABB: This is one of the more exciting ones. T-Robotics is a Norwegian and Silicon Valley-based company that wants to dramatically simplify applications where you need some level of rapid changeover.
CNC machine tending is a good example. Often, with CNC machining, you would run moderate-sized batches, anything from 10 parts to a couple of hundred parts. So, having the ability to do changeovers, not only on the CNC machine but also in the robot system that feeds it, is mandatory for small lot size applications. T-Robotics is using generative AI to make that really simple.
What they have built is a chat prompt. So, you can prompt the robot, and it generates a robot program. Think about it as a ChatGPT that, instead of generating text, generates robot actions. You can explain to it what the parts look like and how they are structured in an infeed, and it generates the robot program behind it. I think it’s a good example of where you can see generative AI simplifying deployment.
Many of us recognize that one of the things generative AI is great at is replicating things it sort of knows in advance. If you think about ChatGPT, it’s been trained on all the text on the Internet and can then generalize and make a good response to a challenge. The same thing applies here. It’s been trained on every sort of possible action scenario and can then generalize the actions to take across a lot of different applications.
It’s a great way to see how generative AI is going to be used to simplify robot programming.
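One practical detail worth noting: before a generated program drives hardware, its output is typically constrained to known, safe primitives. The sketch below shows that idea in miniature, parsing a hypothetical structured response and checking it against a whitelist and a reach limit. It is an assumption-laden illustration, not T-Robotics’ product; the action names and the 1.3 m workspace limit are made up for the example.

```python
# Hypothetical sketch (not T-Robotics' product): a generative model is asked to
# return machine-tending steps as structured JSON, and the output is validated
# against a whitelist of primitives before it is ever sent to a robot.
import json

ALLOWED_ACTIONS = {"move_to", "close_gripper", "open_gripper", "wait_for_cnc"}
WORKSPACE_LIMIT_M = 1.3   # assumed reach check for a cobot-sized workspace

# Pretend this JSON came back from the language model after a prompt such as
# "Pick billets from the tray and load them into the CNC chuck."
llm_output = """
[
  {"action": "move_to", "xyz": [0.40, -0.20, 0.15]},
  {"action": "close_gripper"},
  {"action": "move_to", "xyz": [0.10, 0.55, 0.30]},
  {"action": "open_gripper"},
  {"action": "wait_for_cnc"}
]
"""

def validate(steps):
    """Reject any step with an unknown action or an out-of-reach target."""
    for i, step in enumerate(steps):
        if step.get("action") not in ALLOWED_ACTIONS:
            raise ValueError(f"step {i}: unknown action {step.get('action')!r}")
        if step["action"] == "move_to":
            x, y, z = step["xyz"]
            if (x**2 + y**2 + z**2) ** 0.5 > WORKSPACE_LIMIT_M:
                raise ValueError(f"step {i}: target outside workspace")
    return steps

program = validate(json.loads(llm_output))
print(f"Validated {len(program)} steps; ready to hand off to the robot runtime.")
```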
MD: The third example is on reinforcement learning assembly. This one features a UR5e cobot, where you’re executing single-arm gear assembly using reinforcement learning. In other words, the cobot locates a part using AI Accelerator-based perception and then uses a reinforcement learning skill to complete a contact-rich assembly process. Expand on this one.
ABB: Assembly in general has huge potential for automation. If you look at general manufacturing, most analyses show that around 40% of all processes are assembly processes of various kinds.
Doing assembly with very tight tolerances has notoriously been very difficult to automate with robots, even using force control, like our robots have. Force control makes some assemblies simple. But as soon as it’s something like a gearbox, where you have multiple gears that need to mesh just perfectly together, it’s extremely difficult to program in a reliable way where you can just keep producing and producing and producing.
This is a great feature where AICA, one of our Swiss partners, has developed a toolbox for reinforcement learning-based assembly. What that means is that you would guide the robot through an assembly of something like a gear to get it to mesh right, and it would then learn the end state of where everything needs to fit.
Then, over time, it’s going to practice and rehearse until it generates an “AI policy” for how to do this. The brilliant thing is that it’s going to be an adaptive policy. Gears will be in different positions with every new part, so it will not be the same motions going through every time. It will be a bit of a tactic for how to wiggle these things in until they fit perfectly.
What they’ve shown is something very complicated. The assembly application we have at our booth is an assembly on a gearbox from a Porsche Taycan. They’re showing how fiddly it is for a human to do. The robot can do it in about 10 seconds. It just slides the whole thing in.
This is a perfect example of a skill where you’re mimicking or using reinforcement learning—which is really like a trial-and-error flow. The robot tries a few times, and then within a couple of hours, it trains this policy to do assembly.
A lot of mechanical engineers can imagine how powerful it would be to have a robot that can assemble parts with basically zero clearance. It needs to be perfectly right, and it needs a very sensitive policy to accomplish that. I see that having great potential.
And it’s been built as a plugin for our PolyScope X software, which means that you can build a regular robot program, you can pick parts, you can place parts, you can do all the things that a robot would do, and then you have this smart skill that you train to do the actual insertion as a feature.
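For readers who want a feel for the trial-and-error loop Beck describes, the toy sketch below keeps a distribution over lateral insertion corrections, tries a batch of “wiggles,” and re-centers the distribution on the attempts that seated a simulated peg deepest. It is a deliberately simplified stand-in for a learned insertion policy, not AICA’s toolbox, and every number in it is an assumption.

```python
# Toy sketch of the trial-and-error idea behind reinforcement-learning insertion
# (a simplified stand-in, not AICA's toolbox): the "policy" is a distribution
# over lateral correction offsets, updated from the trials that seated the peg
# deepest in a simulated hole.
import numpy as np

rng = np.random.default_rng(1)
true_hole_offset = np.array([1.5, -0.8])        # mm, unknown to the policy

def insertion_depth(correction):
    """Simulated outcome: deeper seating the closer the correction is to the hole."""
    miss = np.linalg.norm(correction - true_hole_offset)
    return max(0.0, 20.0 - 8.0 * miss)          # mm of successful insertion

mean, std = np.zeros(2), np.array([3.0, 3.0])   # initial, wide search policy
for episode in range(30):
    trials = mean + std * rng.standard_normal((20, 2))   # 20 wiggles per episode
    depths = np.array([insertion_depth(t) for t in trials])
    elite = trials[np.argsort(depths)[-5:]]              # keep the best attempts
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 0.05

print("Learned correction (mm):", np.round(mean, 2),
      "depth (mm):", round(insertion_depth(mean), 1))
```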
MD: The fourth example takes it a step further to bimanual assembly. So, we’re really talking about complex manipulation tasks. What exactly is bimanual assembly?
ABB: Bimanual is when you’re starting to use two arms. And I think when we think about a lot of applications...we always see people using both arms to do assembly. Traditionally with robots, most things have been done with one arm and it’s often straightforward to design applications with one arm. But using two arms gives you the flexibility that humans have, too.
Our partner Acumino, out of Seattle, is doing what they call bimanual assembly. The technique is slightly similar to what AICA is doing on reinforcement learning, but it uses another AI technique called learning by demonstration. It’s also one of the techniques we see deployed a lot with humanoids, which are also often bimanual.
READ MORE: Highlights from TIMTOS 2025: Adapting to Changing Market Demands with AI
In this case they have a teaching model. You put on gloves fitted with a tracking sensor, and you show the robot how to do a task, which could be putting something together. They’ve been working with customers to do assembly on bicycles. They’ve been plugging in USB cables. They’ve been doing a lot of different applications that involve grasping parts and doing fiddly assemblies, and maybe also handling tools, drilling a hole, screwing in a screw, and so on. And all of this can be trained by demonstrating to the robot how to do it. Then the robot learns this as an AI model.
The brilliant thing is, as soon as it has learned the model, it uses cameras and force sensing to recognize whether everything is proceeding as intended. This is so that it will remain robust, even with the small variations that every robot application has, especially in assembly.
Maybe you need the fiddly process of fitting a saddle pole and saddle on a bicycle, and so on. This is also taking assembly to the next step, where you use AI to train by demonstration. Teach the robot, build an AI model for how it should do the task, and then also use the superpower of AI, which is that it can adapt to small changes relative to your original demonstration.
I see great potential here, and it’s been attracting a lot of good attention from the audience at GTC. It also shows things that are all in the real world. These are things you can buy and deploy in factories today, while still leveraging the frontiers of AI and all the new generative AI that comes along. I’m really excited about this one, too.
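At its core, learning by demonstration means turning recorded human demonstrations into a model that maps what the robot senses to what it should do. The sketch below is a minimal behavior-cloning stand-in on synthetic data, not Acumino’s system; real deployments use camera images, force signals and far richer models.

```python
# Minimal behavior-cloning sketch (not Acumino's system): fit a linear map from
# observed state to demonstrated action using recorded demonstration pairs,
# then query it on a new state. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic demonstrations: state = part pose error (x, y, yaw),
# action = corrective end-effector velocity the human demonstrated.
states = rng.uniform(-1, 1, size=(200, 3))
true_gain = np.array([[-0.8, 0.0, 0.0],
                      [0.0, -0.8, 0.0],
                      [0.0, 0.0, -0.5]])
actions = states @ true_gain.T + 0.01 * rng.standard_normal((200, 3))

# Least-squares fit of the demonstrated "policy".
policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

new_state = np.array([0.3, -0.1, 0.2])
print("Predicted corrective action:", np.round(new_state @ policy, 3))
```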
MD: Oh, for sure. Okay, the final example: This is with Groundlight and focuses on workpiece detection for streamlined picking. Here, the AI Accelerator enables a UR5e cobot to detect a workpiece and generate a robot program for picking. Tell me more about this.
ABB: Groundlight is also one of our American partners. They are experts in anomaly detection and quality assurance. They have built a plugin that plugs directly into our new PolyScope X software, leveraging the AI Accelerator to do the processing.
They have a workflow to support any application. The thing is, when you put robots into applications or manufacturing processes that used to be manual, you often realize how many small, subtle quality checks the manual operators do. That’s one of the first steps that you often must tackle as a manufacturer.
Groundlight has a perfect solution to that problem. You basically train the robot on what “good” looks like. So, you would show it the right parts and add them to an AI model. If it sees anything that differs from these ideal examples, it will prompt on the robot screen and say, “Hey, I saw something. This seems to be weird. Is this a failure? Is this a fault or not?” And then you train the model. The first time it sees something, you say, “Okay, this is a bad part. Please take that away.” And it generates a program for how to do that.
You can also say, “No, this is alright.” Then it adds it to the good model. Within a couple of weeks, it will build a very sophisticated database of good parts, bad parts and will automate all that sorting.
If you end up having a bad batch of parts coming along, let’s say one year down the line, the robot will automatically flag it: “Hey, something is wrong here.” It will recognize that and prompt you. And maybe you can then immediately start your QA (quality assurance) process and prevent shipping defective products to the market. It’s a super intuitive workflow. It’s very powerful.
They have also integrated what is called a VLM, a vision language model, which also allows you to prompt the robot system to look for specific things. You can say, “Have all the screws been inserted into the part?” and it will check. Or you could say, “Do you see four screws in this picture?” and it will check that.
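The workflow Beck describes is essentially a human-in-the-loop classifier: confident matches to known-good examples pass automatically, and anything else is escalated to an operator whose answer expands the reference set. The sketch below illustrates that loop with made-up feature vectors and thresholds; it is not Groundlight’s SDK or API.

```python
# Hedged sketch of the human-in-the-loop flow described above (illustrative
# only, not Groundlight's SDK): a new part's feature vector is compared with
# stored "good" references; confident matches pass, everything else is
# escalated to an operator whose answer grows the reference set.
import numpy as np

good_references = [np.array([0.90, 0.10, 0.00]), np.array([0.85, 0.15, 0.05])]
MATCH_THRESHOLD = 0.15   # assumed distance below which a part counts as "good"

def inspect(features, ask_operator):
    distance = min(np.linalg.norm(features - ref) for ref in good_references)
    if distance <= MATCH_THRESHOLD:
        return "pass"
    verdict = ask_operator(features)          # "good" or "bad" from the screen prompt
    if verdict == "good":
        good_references.append(features)      # future parts like this auto-pass
        return "pass"
    return "reject"

# Simulated operator who rejects anything with a large third feature (a "scratch").
operator_answer = lambda f: "bad" if f[2] > 0.5 else "good"

print(inspect(np.array([0.88, 0.12, 0.02]), operator_answer))   # passes automatically
print(inspect(np.array([0.20, 0.10, 0.90]), operator_answer))   # escalated, rejected
```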
We see it in the Universal Robots manufacturing line, as an example. We validate all the manual steps to ensure the operators have all the parts in place so that nothing is forgotten. Some of those check steps can be fully automated with a system like Groundlight, as it can analyze the process and, with just a text prompt, act as a quality inspector while still maintaining the same workflow.
You could say, “Hey, this looks bad. Is this truly bad?” It would say, “Yes, it is. There’s a missing screw.”
We’re demonstrating that at our NVIDIA GTC booth, where we have a number of parts. Some of them have very subtle scratches that are immediately recognized. A few have missing screws that are immediately flagged.
These are all AI-powered tools that really supercharge how easy it gets to build and deploy automation solutions. And what I really like is that AI is now becoming normalized. These are all things that will help you. Some of them are bigger capabilities, like the bimanual assembly, which is a whole system. But many of the small things we talked about, like the AICA assembly process and Groundlight, are supporting technologies that just make deployment of robots easier.
It’s similar to having AI tools on your iPhone that recognize faces when you take pictures and whatnot. It’s not a big thing, but it really helps make deployment of robots easier because it takes this superpower of AI that can generalize across things. It takes these very fine tolerances, this very deterministic behavior that robotics has often struggled to meet, and generalizes it, making it much, much more flexible.
MD: This perspective is really fascinating. And I appreciate how you’ve moved the needle forward on physical AI deployments. Are these applications for the AI Accelerator exclusive to Universal Robots? What plans do you have to extend to other Teradyne brands?
ABB: AI is important for all the Teradyne brands, both for MiR, the mobile robot platform, and for the Universal Robots brand. UR launched the AI Accelerator in November 2024. And already today, as you can see, we have four products that are ready and implemented within a bit more than four months, which is mind-blowing to me. It just shows how much activity there is in the AI community.
And they are so excited that we can offer them a path to market with these new technologies. We actually have more than 20 partners already developing on the AI Accelerator, so within a year we’re going to see so many different exciting applications. For the MiR brand, AI is also becoming a natural part of how they evolve the product.
Just last year, MiR launched the MiR 1200 pallet jack. That is a pallet jack capable of moving around warehouses and manufacturing facilities and transporting pallets. It has forks and can lift pallets up and move them around. What’s unique about the MiR 1200 pallet jack is that it already uses AI for pallet detection.
You could ask, “Why does it need AI?” It does need AI because pallets in the real world have huge variation. They can be damaged, old or dirty. They could be halfway covered in shrink wrap because that’s part of how the pallet was wrapped. The wrapping may be black, or there may be a stapled-on shipping label. In the real world there’s tons of variation.
Classical methods of pallet detection are actually very fragile. It’s quite often difficult, especially in logistics scenarios, or if you have goods coming in from multiple vendors that have been in transit for a long time. The MiR 1200 pallet jack is extremely robust against all these things because they’ve trained an AI model for pallet detection. It’s been trained on more than 100,000 real-world scenarios that we’ve collected from a lot of our customers and early access partners. They then also trained the model on multiple millions of simulated images. So, it really covers a very large variation of conditions.
This makes it very unique. It’s super-robust. Even while it’s moving in and trying to dock with the pallet, something can go wrong; maybe the pallet is broken somewhere underneath, so it can’t proceed. The AI algorithms would detect that immediately and advise the pallet jack to stop and move back, try again or even announce it to the operators of the system.
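The stop, retry or announce behavior Beck mentions is, at the control level, a small decision policy layered on top of the detector’s output. The sketch below illustrates one hypothetical shape that logic could take; the thresholds, field names and options are assumptions, not MiR’s software.

```python
# Illustrative decision logic (not MiR's software): act on a pallet detector's
# output during docking by proceeding, retrying, or alerting an operator.
from dataclasses import dataclass

@dataclass
class PalletDetection:
    confidence: float      # detector confidence, 0..1
    pockets_clear: bool    # fork pockets usable (not broken or blocked)

def docking_decision(det: PalletDetection, retries_left: int) -> str:
    if det.confidence >= 0.8 and det.pockets_clear:
        return "proceed"
    if retries_left > 0:
        return "back_off_and_retry"
    return "alert_operator"

print(docking_decision(PalletDetection(0.92, True), retries_left=2))   # proceed
print(docking_decision(PalletDetection(0.55, True), retries_left=1))   # back_off_and_retry
print(docking_decision(PalletDetection(0.40, False), retries_left=0))  # alert_operator
```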
We see this as a natural element of a lot of the new products we build, where the flexibility that AI provides as a superpower will spread across many of our products and the new things we do in the future.
MD: That sounds very promising, and I think it’s a great point to park our conversation for the moment.