Engineers have used machine learning techniques for a long time. Today’s modeling technology empowers engineers further with geometric deep learning and scaling tools. (Image credit: Altair)

Crunching the Numerics: Computer Aided Engineering

July 14, 2025
Simulation software leaders embrace machine learning to hyper-accelerate FEA/CFD analysis.

Since the first FEA solver, Nastran, was developed for NASA in the 1960s, the simulation software industry has contended with a number of hurdles. For one, while the software (FEA, CFD, CEM) is sophisticated and capable, it has a steep learning curve. The age-old computing adage, garbage in/garbage out, holds especially true for simulation software, and it can be all too easy to go wrong from the outset.

Getting reliable results often requires simulation analysts experienced in the time-consuming process of defeaturing, meshing, defining load cases and boundary conditions, and otherwise prepping a CAD model for analysis.

Then comes the analysis itself, which may take hours to days to process depending on the size and complexity of the project; any changes to the geometry or load case require repeating the lengthy process. Finally, there’s the expense, not only for the software/hardware itself but also for the domain experts and an extended product design stage.

As a result, simulation doesn’t tend to be used in the early design phase. Instead, it’s typically employed as validation of a near-finished design, so major flaws are caught virtually before the expensive physical prototype testing phase. It’s not surprising, then, that end-users of simulation software are predominantly deep-pocketed industries where the reward for innovation is high but the cost of product failure is significantly higher.

According to CIMdata industry analysis, four industries—aerospace, automotive, high tech/electronics and heavy equipment—accounted for 76% of the $10 billion in revenues the simulation and analysis (S&A) market gleaned in 2023. Impressive numbers, to be sure, but the S&A industry has long pushed to “democratize” the use of its software to a broader base of product designers and manufacturers as well as to the engineers who would benefit from posing “what-if” questions during the design phase.

The ideal solution, then, would be to get quick feedback on the viability of each design change. While speedy results are possible in traditional simulation, they come at the cost of accuracy. That is, results may drift farther from real-world behavior as speed increases. The holy grail for the S&A industry is to facilitate a significant increase in output while preserving the reliability of the analysis.

To achieve that goal, the industry is pursuing two changes to the way it’s done things for the past 60 years. The first is to increase the speed of traditional physics-driven simulation by developing solvers that leverage the parallel processing capabilities of modern graphics processing units (GPUs) rather than relying solely on a CPU’s serial processing.

In addition to parallel processing, the CUDA and Tensor cores in an NVIDIA graphics card (to give one example) are designed to handle the large matrix operations and partial differential equations that simulation software uses to describe the physics of a system under changing conditions.
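
To give a sense of how little the programming model changes when a dense linear solve moves from CPU to GPU, here is a minimal sketch in Python, assuming NumPy and the CuPy library (and a CUDA-capable card) are available. It uses a small dense stand-in for a stiffness matrix; production solvers work on far larger, sparse systems, so the example is purely conceptual.

```python
import numpy as np
import cupy as cp  # GPU arrays; assumes a CUDA-capable NVIDIA card is present

# Small, well-conditioned stand-in for a stiffness matrix K and load vector f.
n = 2000
rng = np.random.default_rng(0)
K_cpu = rng.standard_normal((n, n))
K_cpu = K_cpu @ K_cpu.T + n * np.eye(n)   # symmetric positive definite
f_cpu = rng.standard_normal(n)

# CPU solve (serial/multithreaded BLAS)
u_cpu = np.linalg.solve(K_cpu, f_cpu)

# GPU solve: same call, but the arrays and the factorization live on the GPU
K_gpu = cp.asarray(K_cpu)
f_gpu = cp.asarray(f_cpu)
u_gpu = cp.linalg.solve(K_gpu, f_gpu)

# Both paths should agree to solver tolerance
print("max difference:", np.max(np.abs(u_cpu - cp.asnumpy(u_gpu))))
```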

A 2022 report by Jon Peddie Research (Accelerating and Advancing CAE) found that many S&A players have either built simulation software from the ground up to take advantage of GPUs or added GPU-accelerated functionality to their historically CPU-bound software.

The result is simulation that runs significantly faster on local and/or cloud-based GPU hardware, or that allows more complex simulation tasks to be run in the same amount of time. A few implementations claim speedups of 50 to 100 times, but most fall in the more sober range of 2 to 6 times.

Although GPU acceleration may cut analysis time from days to hours or hours to minutes, the holy grail is to achieve near real-time feedback. For that, the S&A industry is betting big on machine learning (ML), a subdomain of artificial intelligence in which datasets of known inputs and outputs are used to train a data-driven model capable of making predictions for behavior that lies outside the initial training data.
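
As a minimal, hypothetical illustration of that idea, the Python sketch below uses scikit-learn to train a data-driven model on a handful of made-up (design parameters, solver result) pairs from past runs, then predicts the result for a design the solver has never analyzed. The parameter names and values are invented for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical training data: each row is (load in kN, plate thickness in mm)
# from a past solver run; y is the peak stress each run reported (MPa).
X_train = np.array([[10, 2.0], [10, 3.0], [20, 2.0], [20, 3.0], [30, 2.5]])
y_train = np.array([180.0, 120.0, 360.0, 240.0, 430.0])

model = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

# Predict peak stress for a design outside the training data.
X_new = np.array([[25, 2.75]])
stress_pred, stress_std = model.predict(X_new, return_std=True)
print(f"predicted peak stress: {stress_pred[0]:.0f} MPa (+/- {stress_std[0]:.0f})")
```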

According to industry analysis firm Cambashi, mergers and acquisitions in the CAE/EDA/simulation software market have seen a flurry of activity in recent years, driven in part by the promise of AI.

In March 2024, for example, Cadence announced it would acquire BETA CAE, including its +ML toolkit add-on, for $1.24 billion. Similarly, EDA software firm Synopsys’ acquisition of Ansys, announced in January 2024, in part reflects the value potential of the CAE industry leader’s Ansys SimAI, AI+ and TwinAI products. Most recently, in January 2025, Siemens announced that its $10.6 billion acquisition of Altair Engineering would result in the “most complete AI-powered portfolio of industrial software” in the industry.

A heady claim, to be sure, but not wide of the mark considering Altair’s lead in machine learning capabilities, according to Gartner’s Magic Quadrant for Data Science and Machine Learning Platforms. The report, released in May 2025, positions Altair as the only S&A firm to share the “leader” quadrant with the likes of Google, IBM, AWS and Microsoft.

With its PhysicsAI application, especially when combined with the company’s cloud-based storage and high-performance computing service, Altair One, customers can expect simulation speedups upwards of 1,000 times over traditional CAE solvers, the company says. For many simulation tasks, that translates to the quasi real-time feedback the industry and its customers have sought for decades.

To get a sense of how machine learning enables this kind of performance upgrade, it’s important to delve into how ML works and how it fits in a simulation workflow. One misconception is that machine learning is being positioned to replace traditional physics-based solver simulation. Instead, historic simulation analysis forms the basis for training a machine learning model designed to make predictions of system behavior not included in the training data.

First, though, it’s important to understand that machine learning is a broad domain encompassing numerous types of ML models and techniques for training them. A survey of any depth is beyond the scope of this article, but an overview of two machine learning models and how they function may provide some perspective.

One common machine learning approach is the reduced order model (ROM), offered within simulation products from Ansys, Altair, COMSOL and Siemens, among others. In essence, a ROM is a simplified or surrogate model that aims to approximate the behavior of a complex high-fidelity system without requiring as many computing resources.

In the context of simulation, the high-fidelity system (or full order model) is composed of thousands to millions of discretized elements (or volumes). Since solving the partial differential equations that describe the behavior of each element or volume can take hours or days, a simulation analyst might use tens to hundreds of simulations to train a ROM using one or more model order reduction methods, including intrusive, non-intrusive or projection-based techniques.

Whatever the method, if trained on sufficient data, the resulting simplified model retains enough information about the system to approximate the behavior of a full simulation but, due to its reduced mathematical complexity, can return results in seconds while requiring relatively little computing power.
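
One widely used non-intrusive recipe is proper orthogonal decomposition (POD): stack field results (“snapshots”) from the full order runs into a matrix, extract the dominant modes with an SVD, and fit a cheap map from design parameters to the modal coefficients. The NumPy sketch below shows the mechanics with randomly generated stand-in data; it illustrates the general technique, not any vendor’s specific implementation.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one full-order result
# (e.g., a displacement field over 100,000 mesh nodes) for one training design.
n_dof, n_runs = 100_000, 40
rng = np.random.default_rng(1)
snapshots = rng.standard_normal((n_dof, n_runs))   # stand-in for real solver output
params = rng.uniform(0.0, 1.0, size=(n_runs, 3))   # the designs that produced them

# 1) POD basis: left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 10                       # keep only the r most energetic modes
basis = U[:, :r]             # (n_dof, r) reduced basis

# 2) Reduced coordinates of every training run.
coeffs = basis.T @ snapshots                      # (r, n_runs)

# 3) Cheap map from design parameters to modal coefficients
#    (here: linear least squares; kriging or a small NN are common too).
A = np.hstack([params, np.ones((n_runs, 1))])     # affine features
W, *_ = np.linalg.lstsq(A, coeffs.T, rcond=None)  # (4, r)

def rom_predict(p):
    """Approximate the full field for a new design in a fraction of a second."""
    c = np.append(p, 1.0) @ W          # predicted modal coefficients (r,)
    return basis @ c                   # lift back to the full mesh (n_dof,)

field = rom_predict(np.array([0.3, 0.7, 0.5]))
print(field.shape)   # (100000,)
```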

According to Dr. Fatma Kocer, Altair’s VP of Engineering Data Science, traditional surrogate modeling approaches do have their uses but can run into certain practical limitations. 

“With traditional methods, there’s no way to use past simulation data,” she says. “For example, if you had run a type of simulation for 25 designs last year, there’s no way you can take that and train a machine learning model with the traditional methods that rely on parameters. That data set is not parametric and it’s not parameterizable, because one simulation may have run on one topology, while your colleague may have run on another topology, so there’s no way to merge all of them with a consistent set of parameters.

“But because PhysicsAI, which is geometric deep learning, works directly on the mesh, it can train machine learning models with data sets composed of different topologies, different dimensions, different geometries.”

Built into Altair’s HyperMesh simulation software, PhysicsAI provides users with an interface that steps them through the machine learning process, helping avoid common missteps along the way. For example, the application will flag simulations that are outliers or otherwise inappropriate to include in a training data set.

In addition, the interface prompts users to set aside a certain percentage of simulations to use as test data. Once the model is trained, its predictive accuracy can then be tested against simulations it hasn’t “seen” previously.
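
Independent of any particular tool, the underlying train/test pattern looks something like the hypothetical Python sketch below: hold out a fraction of the simulations, train on the rest and score the model only on the runs it never saw.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Hypothetical dataset: 120 past simulations, each summarized by a feature
# vector (design parameters) and the scalar result the solver reported.
rng = np.random.default_rng(2)
X = rng.uniform(size=(120, 5))
y = X @ np.array([3.0, -1.0, 0.5, 2.0, 0.0]) + 0.1 * rng.standard_normal(120)

# Hold out 20% of the runs; the model never sees them during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Predictive accuracy is judged only on simulations the model hasn't "seen".
print("R^2 on held-out simulations:", r2_score(y_test, model.predict(X_test)))
```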

At the core of PhysicsAI, however, sits Geometric Deep Learning (GDL), a type of graph neural network (GNN) designed to deal with data structured as nodes and the connections between them. Imagine the atoms in a molecule and the chemical bonds between them or, in the context of CAE, the vertices of a mesh and the edges that connect them. To this, GDL adds non-Euclidean geometry as an inductive bias, a set of assumptions the model uses to predict outputs from given inputs.

As a result, GDL models are able to learn the relationships between the mesh geometry and the physics of the simulation. In practice, this means PhysicsAI’s GDL model can be trained on data sets composed of similar simulations that each contain different numbers of nodes, elements, load cases and boundary conditions.
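
To make the nodes-and-connections idea concrete, the NumPy sketch below performs one round of message passing over a toy mesh graph: every vertex updates its features by mixing its neighbors’ features through a shared weight matrix, so the same layer works no matter how many vertices the mesh has. Real geometric deep learning models stack many such layers and learn the weights by gradient descent; this is only the structural idea, not Altair’s implementation.

```python
import numpy as np

# Toy mesh graph: 5 vertices, edges listed as (i, j) pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n_nodes, n_feat = 5, 3

# Per-vertex input features, e.g., coordinates or applied loads (hypothetical).
rng = np.random.default_rng(3)
X = rng.standard_normal((n_nodes, n_feat))

# Adjacency matrix with self-loops, then symmetric normalization (as in a GCN layer).
A = np.eye(n_nodes)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

# One message-passing layer: aggregate neighbor features, mix them with a
# weight matrix W (learned during training), apply a nonlinearity.
W = rng.standard_normal((n_feat, 4))      # stand-in for learned weights
H = np.maximum(A_hat @ X @ W, 0.0)        # ReLU( A_hat X W )

print(H.shape)   # (5, 4): new features for every vertex, regardless of mesh size
```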

That capability is particularly useful for end-users who already have a wealth of historic simulation data to draw from. With it, they can create models without having to spend the time and resources generating multiple new simulations for ML training purposes. In addition, since it’s a neural network, GDL also allows for the use of a machine learning technique called transfer learning.

Essentially, this means a model trained for one task can be used as a starting point for a separate model trained for a related task. In addition to cutting the amount of data needed to create models for each project, Kocer says GDL and techniques like transfer learning will lead to foundational models like large language models (LLMs) but built on geometry and physics rather than words.

“We will start looking into the equivalent of large language models but for engineering; it could be called large engineering models, large physics models, large geometry models,” she says. “We could then deliver it to customers so they don’t have thousands of simulation points. We have customers who currently train with tens of thousands of simulation points, but there are also companies that are lacking that.

“So, if you have a foundation model, you don’t have to wait until you accumulate that much data. You can actually fine-tune them using far less of your own data. So, it won’t be LLMs; it will be LEMs or large engineering models.”
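
In practice, transfer learning usually amounts to reusing a pretrained network’s weights, freezing most of them and retraining only a small head on the new, smaller dataset. The PyTorch sketch below shows that generic pattern with invented data; it is not a description of how a future large engineering model would be packaged or fine-tuned.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained backbone (stand-in for a model trained on a large,
# general body of simulation data) plus a small task-specific head.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 1)
model = nn.Sequential(backbone, head)

# Freeze the backbone: its weights keep the "general" knowledge.
for p in backbone.parameters():
    p.requires_grad = False

# Fine-tune only the head on a small project-specific dataset.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X_small = torch.randn(50, 32)   # far fewer samples than training from scratch would need
y_small = torch.randn(50, 1)

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X_small), y_small)
    loss.backward()
    optimizer.step()

print("fine-tuned on 50 samples; backbone left untouched. final loss:", loss.item())
```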

While foundational models like these portend an exciting future, they also highlight some of the realities of coupling machine learning with simulation in the present. Namely, depending on the complexity of the engineering problem being posed, training an ML model with sufficient predictive accuracy may require datasets of hundreds of simulations or more, plus the resources to generate them.

Moreover, training these models may also require access to considerable computing resources. For this, Altair offers its cloud-based high-performance computing (HPC) platform, Altair One, where customers can upload simulation datasets and access one or more of NVIDIA’s enterprise-level GPUs to train their models.

Even so, machine learning and engineering simulation are each complex and seemingly arcane fields in their own right. Both can require specialized experts (data scientists and simulation analysts, respectively) to exploit fruitfully. While simulation software companies strive to remove the complexity from each discipline via various software tools and streamlined workflows, both may always entail steep learning curves to master.

That said, there’s every indication that machine learning will shortly become every bit as transformative for the simulation industry, and its customers, as it has so far proven for many others.

About the Author

Mike McLeod

Mike McLeod is an award-winning business and technology writer with more than 25 years of experience, as well as a former engineering trade publication editor.
