Predicting downtime with intuitive software inspired by petrochemical process controls

Sept. 12, 2013
It no longer takes a Ph.D. to interpret results from software that monitors machine health and predicts machine failures.
Numerous industries are finding that they must increasingly try to predict when their industrial processes will break down, rather than wait for them to fail. The reason is a quest for “zero downtime” in manufacturing operations. The trend is driving the development of highly advanced software techniques that can analyze raw sensor data from systems that monitor factors such as vibration levels and oil condition. Software uses such information to predict machine health, schedule maintenance at optimum intervals, and warn operators of imminent failures.

Simple examples

To get a feel for the “values” produced by modern-day machine condition monitoring/predictive failure analysis (MCM/PFA) software, consider a few simple examples from the Tadpole system. The Tadpole algorithm is written in C. In these examples it ran in a National Instruments LabVIEW software/cRIO hardware environment by means of a VI wrapper written around the Tadpole algorithm. The VI wrapper also simplified the Tadpole API to the two “values” of interest for MCM/PFA (Tadpole itself exposes more than 25 parameters for PID tuning and analysis). The VI runs in both the LabVIEW for Windows and LabVIEW Real-Time environments.
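For reference, the two-value interface exposed by the wrapper amounts to something like the C sketch below. The type and function names (tadpole_result_t, tadpole_analyze) are hypothetical stand-ins rather than the vendor's actual identifiers; only the idea of hiding the 25-plus tuning parameters behind defaults and returning two values reflects our setup.

/* Hypothetical sketch of the simplified interface. The real Tadpole API
 * and its identifiers are proprietary, so these names are stand-ins. */
#include <stddef.h>

typedef struct {
    double spectrum;   /* how strongly a single frequency dominates the signal */
    double error;      /* how much the signal amplitude varies over time */
} tadpole_result_t;

/* Analyze one window of sensor samples. The wrapper applies defaults for
 * the 25-plus PID-tuning parameters and returns only the two values used
 * for MCM/PFA. */
tadpole_result_t tadpole_analyze(const double *samples, size_t n_samples,
                                 double sample_rate_hz);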

A single-frequency sine wave with slightly changing amplitude gives a high spectrum value because only a single dominant frequency is present. The second graph plots process-variable amplitudes over time; they are nearly all the same, indicating a process under control, i.e., one with a small error. Spectrum = 83.7, Error = 2.86

The first example is a single-frequency sine wave with slightly changing amplitude. It gives a high spectrum value because of the single dominant frequency and has a small error. Here the Spectrum value = 83.7 and the Error value = 2.86.

A single-frequency sine wave with a rapidly changing amplitude still gives a high spectrum value because a single frequency dominates, but the error value is larger, indicating the amplitude of the process variable is varying over time. Spectrum = 93.2, Error = 4.44

The second example is a single-frequency sine wave with rapidly changing amplitude. It still gives a high spectrum value because of the single dominant frequency, but the error value is larger because the amplitude varies over time. The Spectrum value = 93.2 and the Error value = 4.44.

Three dominant frequencies with rapidly changing amplitudes give a low spectrum value with the same error value as the previous example: Spectrum = 13.7, Error = 4.44

The final example combines three dominant frequencies with rapidly changing amplitudes. Because no single frequency dominates, the spectrum value drops to 13.7, while the error value stays at 4.44.
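To make these examples concrete, the short C sketch below synthesizes comparable test signals. The frequencies, modulation rates, and sample rate are placeholder values chosen for illustration; the actual test-signal parameters are not published here.

/* Synthesize three test signals resembling the examples above. All
 * frequencies, modulation depths, and the sample rate are placeholders. */
#include <math.h>
#include <stdio.h>

#define N  4096
#define FS 1000.0                    /* assumed sample rate, Hz */
#define PI 3.14159265358979323846

int main(void)
{
    static double sig1[N], sig2[N], sig3[N];

    for (int i = 0; i < N; i++) {
        double t = i / FS;

        /* 1) single 50-Hz tone whose amplitude drifts slowly (+/-5%) */
        sig1[i] = (1.0 + 0.05 * sin(2.0 * PI * 0.2 * t)) * sin(2.0 * PI * 50.0 * t);

        /* 2) the same tone with a rapidly swinging amplitude (+/-50%) */
        sig2[i] = (1.0 + 0.50 * sin(2.0 * PI * 5.0 * t)) * sin(2.0 * PI * 50.0 * t);

        /* 3) three tones, each with its own rapid amplitude swing */
        sig3[i] = (1.0 + 0.5 * sin(2.0 * PI * 4.0 * t)) * sin(2.0 * PI * 50.0 * t)
                + (1.0 + 0.5 * sin(2.0 * PI * 6.0 * t)) * sin(2.0 * PI * 80.0 * t)
                + (1.0 + 0.5 * sin(2.0 * PI * 7.0 * t)) * sin(2.0 * PI * 120.0 * t);
    }

    /* Dump as CSV so the signals can be fed to an analysis tool. */
    for (int i = 0; i < N; i++)
        printf("%f,%f,%f\n", sig1[i], sig2[i], sig3[i]);
    return 0;
}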

Unfortunately, widely used MCM/PFA software algorithms are complex. It takes highly trained individuals many hours to configure them or analyze their results, and the software itself often delivers marginal performance.

For example, one widely used MCM/PFA algorithm employs wavelet transforms. The problem is that wavelets require large amounts of data and significant CPU horsepower to compute a solution, in many cases more than 10 megasamples and 20 to 30% of a quad-core CPU. Such horsepower is not economical when the goal is to run on a programmable logic controller. Worse, wavelet transforms are sensitive to noise and other disturbances, and we all know noise-free data environments are virtually nonexistent in industrial settings.

Another problematic MCM/PFA technique employs neural networks, which require training on historic or simulated data. This assumes that a model of the failure already exists. Unfortunately, there is no convenient library of failure models available for me, the “common man.” Neural networks also carry the added expense of retraining each time a tuning parameter changes.

A third problematic MCM/PFA technique uses fuzzy logic. But fuzzy logic requires time-consuming custom coding/sequencing, usually by a subject matter expert whose time is expensive. And the predictions are only as viable as the fuzzy-logic rules. Worse, these algorithms become unpredictable or clamp when the data goes outside the range of the rules.

Likewise, principal component analysis (PCA) and singular value decomposition (SVD) are both linear MCM/PFA methods that don’t behave well in the complex, nonlinear industrial world in which we work. Worse, their use demands that data of different scales be normalized, which generally trashes the minute details that help with predictive analysis.

An “ideal” MCM/PFA technique would get around these problems. It would require no coding, no extensive configuration, and no specialized training or education to operate. It would let a junior engineer with no previous knowledge of the failure configure its parameters, and it would detect trends long before the high/low threshold alarms used on typical MCM/PFA data would trip. It would also provide a simple indication of a potential failure that a junior engineer could evaluate.

It turns out that the petrochemical process-control industry has evolved software along these lines. Pi Control Solutions LLC in Houston performs petrochemical control-system monitoring. It developed a proportional–integral–derivative (PID) tuning algorithm called True Amplitude Detection – Poles (Tadpole). Tadpole calculates about 25 “values” (simple numbers) that characterize the complex signals from a PID loop used for closed-loop control of petrochemical processes. These values are generated by a detailed analysis of the complex data coming from the sensors. A change in one of these values indicates a change in PID control quality.

We have found that two of the “values” produced by the Tadpole algorithm give an excellent representation of the complex signals indicative of a machine’s condition: the “Spectrum value” indicates the signal-frequency distribution, and the “Error value” indicates signal amplitude.

A change in one of these two values indicates a change in the condition of the machine. These values stay in a fairly narrow range for a machine operating normally. This simplicity lets a junior engineer write simple code to look for a change in the “value.” Moreover, the junior engineer needs no prior knowledge of the data’s meaning, range, type, or proper value.
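As a rough illustration of how little code that check requires, the sketch below learns a baseline band for one of the values from a few known-good analysis windows and flags any later window that drifts outside it. The three-sigma band and the numbers are illustrative choices on our part, not values or thresholds prescribed by the algorithm.

/* Learn a baseline band for a Tadpole "value" from a few known-good
 * windows, then flag any later window that falls outside the band.
 * The 3-sigma band and the sample numbers are arbitrary illustrations. */
#include <math.h>
#include <stdio.h>

typedef struct { double mean, sd; } baseline_t;

/* Mean and standard deviation of values seen during normal operation. */
static baseline_t learn_baseline(const double *v, int n)
{
    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < n; i++) { sum += v[i]; sumsq += v[i] * v[i]; }
    baseline_t b;
    b.mean = sum / n;
    b.sd   = sqrt(sumsq / n - b.mean * b.mean);
    return b;
}

/* Returns 1 if the latest value falls outside the learned band. */
static int value_changed(baseline_t b, double value, double n_sigmas)
{
    return fabs(value - b.mean) > n_sigmas * b.sd;
}

int main(void)
{
    double normal_spectrum[] = { 10.1, 10.4, 9.8, 10.2, 10.0 };
    baseline_t b = learn_baseline(normal_spectrum, 5);

    double latest = 25.0;            /* spectrum value from the newest window */
    if (value_changed(b, latest, 3.0))
        printf("Spectrum value %.2f is outside the normal band\n", latest);
    return 0;
}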

The Tadpole algorithm is particularly good at reliably detecting true oscillations on complex signals, handling real-time or logged data, and calculating the “value” with a minimal data set. It works with any time period or data sample rate and can detect frozen signals (bad sensors), a rise in white noise (random noise) or jaggedness, and nonlinearities (the signal spending more time on one side of the mean).
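For instance, the frozen-signal condition is, in raw-data terms, simply a sensor that stops updating. The plain-C check below illustrates the failure mode itself; it is not how Tadpole detects it internally, and the window length and tolerance are arbitrary.

/* Illustration of a "frozen signal": a sensor that stops updating.
 * This is not Tadpole's internal method, just a picture of what the
 * condition looks like in raw data. */
#include <math.h>
#include <stddef.h>

/* Returns 1 if none of the last `window` samples deviates from the first
 * of them by more than `tolerance`, i.e., the signal appears frozen. */
int signal_frozen(const double *samples, size_t n,
                  size_t window, double tolerance)
{
    if (window < 2 || n < window)
        return 0;
    const double *tail = samples + (n - window);
    for (size_t i = 1; i < window; i++)
        if (fabs(tail[i] - tail[0]) > tolerance)
            return 0;
    return 1;
}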

The value of a Tadpole analysis becomes clearer by viewing a few examples of the “values” produced from simple signals. We set up various failure test cases using data logs from customer machine-monitoring systems. We recreated the logged data by playing it back through a National Instruments, Austin, analog-output card. We then added various error signals to the logged data to create the controlled failure conditions needed for proper verification and validation.
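In our tests the error signals were added through the analog-output hardware, but the idea is easy to show in software. The sketch below mixes a single-frequency oscillation, a stand-in for something like a bearing fault, into replayed log samples; the fault frequency and amplitude are assumptions, not the values used in the test cases.

/* Mix a controlled error signal into replayed log data, in place.
 * In the actual tests this was done through an NI analog-output card;
 * the fault frequency and amplitude passed in are assumed values. */
#include <math.h>
#include <stddef.h>

#define PI 3.14159265358979323846

void inject_oscillation(double *log_data, size_t n, double sample_rate_hz,
                        double fault_freq_hz, double amplitude)
{
    for (size_t i = 0; i < n; i++)
        log_data[i] += amplitude *
                       sin(2.0 * PI * fault_freq_hz * (double)i / sample_rate_hz);
}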

Readers should note that we’ve exaggerated the inserted errors (oscillation, noise, modulation, etc.) in these test cases for clarity. We see more subtle errors in actual machine data.

Insertion of a single-frequency sine wave as an error signal boosts the spectrum value, which the system can recognize as an error. Here, the baseline spectrum = 6.04, the abnormal spectrum = 32.13.

Predictive failure case 1 — Simple bearing oscillation detection: On the left side of the accompanying figure, the log data contains minor bearing noise (random oscillation), which provides a baseline spectrum “value” of 6.04. On the right side of the figure, a single-frequency sine-wave error is inserted, causing the spectrum “value” to rise to 32.13, which is easily detected as a bearing abnormality. This is significant because we did not need to configure the algorithm to look for specific spectral content or teach it to look for oscillation. We just fed data into the algorithm, and it identified a change in the bearing.

Insertion of noise at about the same frequency as that of the signal causes a slight change in the spectrum value but a large rise in the error value. Normal spectrum = 12.67, normal error = 0.27; abnormal spectrum = 14.4, abnormal error = 4.78.

Predictive failure case 2 — Noise detection: The accompanying figure depicts our log data, which contains minor compressor noise, giving a spectrum “value” of 12.67 and an error “value” of 0.27. The figure also shows the insertion of noise at essentially the same frequency as the data of interest. This causes the spectrum “value” to change only slightly, to 14.4, but the error “value” jumps significantly to 4.78, again easily detected as a compressor abnormality.

The error “value” changed significantly because of the change in amplitude, but the spectrum “value” changed little because the spectral content stayed nearly the same. Had we simply added the baseline log data to itself, the spectrum “value” would have remained the same.

A baseline signal (left) is first attenuated, then mixed with an oscillatory error signal (right). The spectrum value drops as a result, while the error value returns to the baseline level. The system recognizes the smaller spectrum value as an error because it implies the signal now contains many different frequencies. Normal spectrum = 13.8, normal error = 2.54; abnormal spectrum = 8.1, abnormal error = 2.2.

Predictive failure case 3 — Lack of noise detection: This test case addresses the errors people most often encounter when noise goes away (a possible bad sensor) or when noise starts to show oscillatory content (a possible machine failure). On the left side of the figure, the log data has typical noise (random oscillation), which provides a baseline spectrum “value” of 13.8 and an error “value” of 2.54. In the middle of the figure, we attenuated the noise, which caused a significant drop in the error “value.” On the right side of the figure, the noise returns to the original level, but we inserted a steady-state oscillatory error, which caused the spectrum “value” to drop to 8.1 while the error “value” returned to its baseline state, again easily detected as an abnormality. A smaller spectrum “value” means the signal is composed of many different frequencies. In this case, a fast noise component was followed by a slower noise component, which was followed by a strong oscillatory component.

This response is significant because we did not have to configure the algorithm to look for oscillatory content within the noise or teach the algorithm to look for possible sensor errors. We just fed data into the algorithm, and it identified a change.

A mix of three high-frequency signals gives a spectrum value of 145.1 and a normal error of 2.83. Introducing random noise in the same frequency range and amplitude reduces the spectrum value to 9.59, which the system can detect as an abnormality.

Predictive failure case 4 — Noise detection on a highly oscillatory signal: The first graph in the accompanying figure shows a data log with a blend of three high-frequency signals from a high-speed rotary-assembly machine, giving a spectrum “value” of 145.1. The second graph shows random noise introduced in the same frequency range and amplitude, producing a significant drop in the spectrum “value” to 9.59, again easily detected as abnormal. This result is significant because there was no need for a subject matter expert or spectral analysis to detect the problem. The tool itself identified the issue.

A sine wave (right) modulated with high- and medium-frequency noise develops a frequency plot containing several frequency components and a true-amplitude plot containing several small blue bars. Operators can adjust a chatter value to display only the low-frequency signal of interest.

Predictive failure case 5 — Chatter tuning parameter for selective frequency elimination: This machine-sensor data has high-, medium-, and low-frequency components. However, only the lower frequencies are meaningful, so it is convenient to remove the fast (random-noise) component. The algorithm conveniently provides a “chatter” tuning parameter to remove various frequency components, letting us derive a spectrum “value” focused on our area of interest. For this test case, a sine wave with a single dominant frequency is modulated with unwanted medium- and high-frequency noise.

With the chatter parameter set to its default, the spectrum value is 2.97 (relatively small, indicating the presence of several signal-frequency components), represented by several large and small blue bars in the accompanying figure.

To eliminate the high-frequency noise, the chatter value is adjusted to 1.075, after which the blue bars show just the low-frequency pure sine wave without the high-frequency noise (the smaller blue bars are gone). The spectrum value jumps from 2.97 to 18.3. This simple display lets even inexperienced personnel rapidly focus on data of interest.
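If the chatter parameter is exposed programmatically, this tuning step can even be automated as a simple sweep: step the value and watch for the spectrum value to jump once the fast noise drops out. In the sketch below, tadpole_analyze_chatter is a hypothetical stand-in for however the VI wrapper actually exposes that knob; only the chatter parameter itself and the spectrum values quoted above come from our tests.

/* Hypothetical chatter-parameter sweep. tadpole_analyze_chatter() is a
 * stand-in declaration; only the chatter parameter and the spectrum
 * values reported in the text come from the actual tests. */
#include <stdio.h>
#include <stddef.h>

typedef struct { double spectrum, error; } tadpole_result_t;

/* Hypothetical entry point: analyze one window with a given chatter value. */
tadpole_result_t tadpole_analyze_chatter(const double *samples, size_t n,
                                         double sample_rate_hz, double chatter);

void sweep_chatter(const double *samples, size_t n, double sample_rate_hz)
{
    /* Step the chatter value and watch for the spectrum value to jump,
     * indicating the fast noise component has been filtered out. */
    for (double chatter = 1.0; chatter <= 1.2; chatter += 0.025) {
        tadpole_result_t r = tadpole_analyze_chatter(samples, n,
                                                     sample_rate_hz, chatter);
        printf("chatter = %.3f  spectrum = %.2f\n", chatter, r.spectrum);
    }
}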

For all the value shown in these predictive failure cases, the Tadpole software is not a prognostics tool; it still takes a good engineer to design and recommend corrective actions. However, this is the best MCM/PFA tool we have found for analyzing data quickly and economically.

Resources: National Instruments Corp.
Tandel Systems Inc.