**By Paul Kurowski, Director of Engineering Development, Genexis Design Inc.**

**London, Ontario, Canada**

Edited by Paul Dvorak

Finite-element analysis, or the finite-element method (FEM) as we will call it, hides plenty of traps for uninitiated users. Errors that come from idealizing and meshing a part can be bad enough to render results either misleading or dangerous, depending on the importance of the analysis. Idealizing and defeaturing a 3D model eliminates small and unimportant details. Sometimes the process replaces thin walls with surfaces, or drops a dimension to work with a 2D representation of the part. Model building also uses simplified descriptions of material properties by, for instance, treating them as linear-elastic materials (many are not), and by assigning boundary conditions as rigid supports and time-independent loads. There are many other simplifying assumptions.

The process eventually forms a mathematical description of reality which we call a mathematical model. To solve it with numerical techniques, the math model must be discretized or meshed. (Discretization and meshing are synonymous in FEM.) But mind you, creating a mathematical model is error prone. Here are a few of the things that can go wrong when modeling, even before meshing.

**THE PROBLEM WITH MODELS** A mathematical model can be pictured as a continuous domain with imposed boundary conditions that include loads and supports. Mathematicians say this represents a field-variable problem and is described by a set of partial-differential equations. Examples of field variables include displacements in structural analyses or temperatures in thermal studies. We will focus on a more intuitive structural analysis where displacements are the field variables.

To analyze a structure, we solve its equations. Solving them "by hand" is usually out of the question because of their complexity. So we resort to one of many approximate numerical methods. For numerical efficiency and generality, we almost always select the finite-element method.

At the risk of oversimplification, imagine an unmeshed FE model with field variables (displacements for our case) represented by a few polynomial functions written to minimize the total potential energy in the model. The polynomials would have to be quite complex to describe the entire model. To get around that difficult task, the model (a domain) is split into simply shaped elements (subdomains). Now, reasonably simple polynomials can approximate the displacement field in each element. Notice that a continuous mathematical model has an infinite number of degrees of freedom while the discretized (meshed) model has a finite number of degrees of freedom.
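As a rough illustration of this idea (not the output of any particular FEM package), the sketch below approximates a smooth one-dimensional "displacement field" with simple linear functions on each element of progressively finer meshes. The field u(x) = sin(πx) and the helper `interp_max_error` are made up for the demonstration:

```python
import math

def interp_max_error(u, nodes):
    """Max difference between u and its piecewise-linear interpolant,
    sampled densely inside each element (each pair of adjacent nodes)."""
    err = 0.0
    for a, b in zip(nodes[:-1], nodes[1:]):
        for k in range(21):                      # sample points in one element
            x = a + (b - a) * k / 20
            # straight line between the element's end-node values
            uh = u(a) + (u(b) - u(a)) * (x - a) / (b - a)
            err = max(err, abs(u(x) - uh))
    return err

u = lambda x: math.sin(math.pi * x)              # a smooth "displacement field"

for n in (2, 4, 8, 16):                          # number of equal elements on [0, 1]
    nodes = [i / n for i in range(n + 1)]
    print(n, interp_max_error(u, nodes))
```

Halving the element size roughly quarters the error, the second-order behavior expected of straight-edged, first-order approximation: more simple elements pin the field down more closely.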

The allowed complexity of an element's shape depends on the built-in complexity of its polynomials. For example, first-order polynomials call for elements with straight edges, while higher-order polynomials allow for more sophisticated element shapes. Obviously, using simply shaped elements to represent a solution domain (our model) calls for many of them to correctly represent both the structure's geometry and its displacement field. Using more complex (and more computationally intensive) elements allows using fewer of them. No universal rule tells which approach is better.

The mesh imposes restrictive assumptions on the displacement field. This is because the field must comply with model geometry and boundary conditions while minimizing the total potential energy of the model. The mesh must also comply with element capabilities to depict the displacement field. If we use first-order elements, the displacement field becomes linear in each element and piece-wise linear in the entire mesh. At this point, you might ask: How well does that piecewise linear displacement field represent the displacement field corresponding to the continuous model? Or, to rephrase the question: What errors are introduced by meshing, a process that imposes restrictive assumptions from element definitions? What type and how many elements should we use to make this error tolerably small?

First of all, the error introduced by meshing is called discretization error. All FEM results are burdened with it. So before using the results, we should prove they are not significantly dependent on the choice of discretization. This requires an exercise commonly called the convergence process. It takes several iterations and is accomplished by adding degrees of freedom to the model, either by using smaller elements (mesh refinement), by increasing the element order (using elements with more complex polynomial functions), or by both. Plotting several results from the same model, each run meshed with higher-order elements or more elements than the previous one, should describe a curve that flattens out, indicating results no longer significantly dependent on the choice of discretization. The key word is "significantly." Results are always dependent on the choice of discretization. All we can do is calculate the discretization error and decide if it's low enough.
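A minimal sketch of such a convergence loop, with made-up numbers: the monitored "result" is a strain-energy-like integral of a piecewise-linear interpolant of u(x) = x³ on [0, 1], whose exact value is 1.8, and the 5% tolerance is an assumed threshold for the example, not a recommendation:

```python
def energy(n):
    """Strain-energy-like quantity on a mesh of n equal linear elements:
    sum over elements of slope**2 * h, where the slope is the constant
    derivative of the piecewise-linear interpolant of u(x) = x**3."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u_left = (i * h) ** 3
        u_right = ((i + 1) * h) ** 3
        slope = (u_right - u_left) / h           # constant within the element
        total += slope ** 2 * h
    return total

tol = 0.05                    # accept a 5% convergence error (assumed threshold)
n, prev = 1, energy(1)
while True:
    n *= 2                    # mesh refinement: double the element count
    curr = energy(n)
    change = abs(curr - prev) / abs(curr)
    print(f"{n:4d} elements: {curr:.6f}  (change {change:.1%})")
    if change < tol:
        break
    prev = curr
print("converged estimate:", curr, " exact value:", 1.8)
```

The loop stops once two consecutive refinements agree to within the tolerance; the remaining gap to the exact 1.8 is the discretization error that survives the convergence check.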

Notice that the objective of the convergence process is not to obtain the most accurate solution possible. The objective is to find the discretization error by proving our data does not significantly depend on the choice of discretization.

**A FEW MESHING MISTAKES** Errors from restrictive assumptions imposed by meshing are not confined to errors controllable in a convergence process. Such errors can have serious consequences. For example, modeling a beam in bending with one layer of first-order elements is a recipe for disaster. The mistake can be made in 2D or 3D. Properly representing a distribution of bending stresses across the thickness needs *several* layers of first-order elements. This is often difficult or impossible because too many elements would be needed. The case calls for using either solid p-elements or converting the geometry to a midplane surface and meshing it with shell elements.
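A back-of-the-envelope illustration of why one layer fails (hypothetical numbers, thickness normalized to 1): bending strain varies linearly through the thickness, but a first-order element reports essentially one averaged strain per layer, so the peak stress that can be recovered depends on the layer count:

```python
def recovered_max_stress_fraction(layers):
    """Bending strain varies linearly through the thickness t:
    eps(y) = kappa * y, with y in [-t/2, t/2].  A first-order element
    reports roughly one (average) strain per layer, so the largest
    strain any layer can report corresponds to the midpoint of the
    outermost layer.  Returns that value as a fraction of the true
    surface strain kappa * t/2, taking t = 1."""
    h = 1.0 / layers                 # thickness of one layer
    outer_mid = 0.5 - h / 2          # midpoint of the outermost layer
    return outer_mid / 0.5           # fraction of the true maximum

for n in (1, 2, 4, 8):
    print(n, recovered_max_stress_fraction(n))
```

One layer averages tension against compression and reports zero bending stress; under this simple averaging model even four layers recover only about 75% of the true surface value, which is why solid p-elements or shells are the practical answer for thin sections.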

Any mesh has to satisfy two requirements. First, it must adequately represent geometry, a criterion relatively easy to verify visually. Second, and less obviously, it must have the capability to properly model displacement and stress patterns. This is where most serious mistakes are made.

The hazard often comes from automeshers. They are fast and easy to use, but they do not prevent the mistakes listed above. Automeshing is purely a process of filling up a given volume (or surface or line) with elements of a given shape. Many automeshers know nothing of the solution. They place elements almost wherever and however they want. Whether the mesh has the capability to properly model the expected displacement or stress pattern is not their concern. The user, as always, is responsible for avoiding mistakes by making sure the mesh correctly represents geometry as well as the expected displacement and stress patterns.

**Alternatives to FEM** The finite-element method is not the only numerical method that can handle structural, thermal, and other types of analyses. But it has dominated other methods because of its generality and convenient formulation, at least from a programmer's point of view. Other available numerical methods work with finite differences and boundary elements. However, they lack the generality of FEM and so have been relegated to niche applications.

**You've got errors** Solution and convergence errors are discretization errors. Each differs a bit from the other.

**Solution error** is the difference between results from a discrete model with a finite number of elements and results from a hypothetical model with an infinite number of infinitesimal elements. To estimate solution error, assess the rate of convergence and then predict the asymptotic value.

**Convergence error** is the difference between two consecutive steps that could differ by mesh refinement, element-order upgrade, or both. Let's say an acceptable convergence error is 10%. If the solution converges, the next step will produce results that differ from the current one by less than 10%.
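Both quantities can be estimated from a short refinement history. The sketch below applies Richardson-style extrapolation to three synthetic results (invented to follow a typical second-order pattern, not taken from any real model) to recover the convergence rate and the predicted asymptotic value:

```python
import math

# Three results from successive uniform refinements (element size h
# halved each time).  Synthetic data: r(h) = 1.0 - 0.3 * h**2, so the
# exact asymptote is 1.0 and the rate is 2.
r0, r1, r2 = 0.7, 0.925, 0.98125          # h = 1, 1/2, 1/4

# Observed convergence rate from the three results
p = math.log((r1 - r0) / (r2 - r1)) / math.log(2)

# Richardson extrapolation: predicted asymptotic (h -> 0) value
r_star = r2 + (r2 - r1) / (2 ** p - 1)

convergence_error = abs(r2 - r1) / abs(r2)        # last two steps
solution_error = abs(r_star - r2) / abs(r_star)   # vs. the asymptote
print(f"rate p = {p:.2f}, asymptote = {r_star:.5f}")
print(f"convergence error = {convergence_error:.2%}")
print(f"solution error    = {solution_error:.2%}")
```

Note the two numbers differ: the convergence error compares neighboring runs, while the solution error compares the last run with the predicted limit.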

**What makes a good mesh?** A mesh must satisfy several conditions. First, it must properly depict geometry, which is relatively easy to verify "by eye." It must not have degenerated elements. To some degree this can also be inspected visually, but it is better done with a mesh-quality checker. The mesh must also have the capability to model the expected displacement and stress patterns. This cannot be verified either visually or with a mesh-quality routine, and it is where the most severe errors are often made. A mesh can look great, pass all quality checks, and still be a disaster. This commonly happens when an automesher, left on its defaults, decides to place one layer of lower-order elements across thin features. One layer of p-elements is easily acceptable because of their higher (and adjustable) order.
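As one example of the kind of metric a mesh-quality checker computes, the hypothetical function below flags sliver triangles by their aspect ratio, here taken as the longest edge divided by the shortest altitude; real checkers combine several such measures, and this one is only an illustration:

```python
import math

def aspect_ratio(p0, p1, p2):
    """Longest edge divided by the shortest altitude of a triangle:
    about 1.15 for an equilateral element, huge for a sliver."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    edges = [dist(p0, p1), dist(p1, p2), dist(p2, p0)]
    # twice the triangle's area, via the cross product of two edge vectors
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    shortest_altitude = area2 / max(edges)
    return max(edges) / shortest_altitude

good = aspect_ratio((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))  # equilateral
bad = aspect_ratio((0, 0), (1, 0), (0.5, 0.01))               # sliver
print(f"equilateral: {good:.2f}   sliver: {bad:.2f}")
```

A check like this catches degenerated elements, but, as noted above, it says nothing about whether the mesh can represent the expected displacement and stress patterns.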

Lastly, solid-element meshes look impressive but may hide the severe deficiencies described above. Less visually pleasing shell or beam-element meshes may be better choices.