The eyes of motion

April 1, 2006

With some help from machine vision systems, automated mechanical systems can align components, read barcodes, guide manipulators, and inspect their own work. Vision and mechanical systems can share the same sensor to measure, find, and orient objects. Without machine vision, engineers must employ additional components such as encoders, proximity sensors, limit switches, and keyways. Recognizing how a machine vision system works, what factors can alter its capabilities, and how to choose components eases integration into a motion control application.

Piecing parts together

The heart of a machine vision system is the lens-camera combination, and each is specified separately. Engineers must choose a lens based on application constraints and a camera based on the control system. Illumination is also important because it creates contrast between objects; area sources, ring lights, incandescent bulbs, and LEDs are all options.

Besides these elements, there must be a component to capture the steady stream of image data the camera outputs. Frame grabbers do just this. These boards plug into an expansion-bus slot on a host computer and contain on-board intelligence and memory. End users define frame-grabber parameters, such as frame rate and capture triggers, through software drivers. An alternative to using frame grabber boards for image capture is hardware built into the computer and camera.
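As a rough sketch of how this looks in software, the following Python fragment models a driver call that sets frame rate and trigger mode. The FrameGrabber class and its parameter names are hypothetical stand-ins; a real board ships with its own vendor-specific SDK.

```python
# Hypothetical frame-grabber driver interface, for illustration only;
# real boards expose vendor-specific SDKs with different names.
class FrameGrabber:
    def __init__(self, slot: int):
        self.slot = slot           # expansion-bus slot holding the board
        self.frame_rate = 30.0     # frames per second
        self.trigger = "free_run"  # capture whenever a frame arrives

    def configure(self, frame_rate: float, trigger: str) -> None:
        """Set capture parameters before acquisition begins."""
        self.frame_rate = frame_rate
        self.trigger = trigger

grabber = FrameGrabber(slot=2)
# Capture at 60 fps, but only when an external signal
# (say, a part-present sensor) fires the trigger input.
grabber.configure(frame_rate=60.0, trigger="external")
```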

A host computer is then needed to run image acquisition and processing software. While commercial-off-the-shelf software is easy to use and requires little programming knowledge, custom application software may help a system operate more efficiently. However, customizing software can consume at least half of development time. Once software is in place, a machine vision system forwards collected information to a motion control system.

Managing constraints

Choosing the right machine vision components begins by defining an application and its constraints. For instance, a driverless vehicle restricts a vision system differently than a pick-and-place robot assembling printed circuit boards. These constraints, as well as desired results, influence system setup.

Resolution is the total number of pixels in an image. To estimate the resolution an application requires, divide the largest object's physical size by the smallest critical dimension; the quotient is the minimum number of pixels needed along that axis.
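As a quick worked example (the part dimensions here are assumed, not from the article):

```python
# Worked example with assumed values: a 100 mm part whose smallest
# critical feature is 0.1 mm needs at least 1,000 pixels along that axis.
object_size_mm = 100.0    # largest object dimension in the scene
critical_dim_mm = 0.1     # smallest feature the system must resolve
required_pixels = object_size_mm / critical_dim_mm
print(required_pixels)    # 1000.0
```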

Area- and line-scan formats are constrained by the relative motion of camera and object. Area-scan cameras, like film cameras, acquire an entire image, or area, at once. They are used when the object can be held still during the exposure, whether by stopping it outright, tracking it with the camera, or freezing it with a short-duration stroboscopic flash. Line-scan cameras, on the other hand, read one line of pixels after another; a rastered image forms as the object moves past the sensor along a line perpendicular to the pixel row. These cameras suit objects moving past at a steady, high speed.

Image distance is the length between a lens and image plane. For large format and high-magnification lenses, image distance constrains camera-mounting space.

Image contrast is the quantitative difference between bright and dark pixels. Contrast carries the information in an image and depends on color, illumination, lens quality, and a camera's electronic properties. Recall that image sensors respond linearly to light, whereas human eyes respond logarithmically. In other words, a scene that looks sharp to a human eye may reveal nothing valuable to a machine vision system.

Available physical space limits an image-acquisition system's size, yet the installation must still accommodate all components. From a camera's viewpoint, direct lighting sources must sit far enough from an object to illuminate it evenly.

Lighting is a critical, yet often misunderstood factor. Improper lighting can wash out desired image information and highlight unnecessary features.

Cleanliness varies in each environment. A case in point is an industrial environment where dirt and dust collect on lens optics and lighting sources, lowering illumination levels and reducing contrast. Machine vision components — especially actuators in moving fixtures — may introduce their own dust and dirt into semiconductor fabrication environments, potentially ruining thousands of chips.

Environmental concerns such as temperature, humidity, vibration, and ambient illumination can not only harm equipment, but also limit system capabilities. For instance, a machine vision system cannot measure a part's width to 0.02 mm if the part is vibrating with an amplitude of 0.2 mm.

Considerations

When developing a machine vision system, the first step is deciding whether to build it in-house or work with a qualified integrator. Third-party integrators are often recommended because they bring extensive technical expertise and can suggest component suppliers. Once this is decided, engineers can begin choosing machine vision components.

Camera selection is an important choice, and a significant specification is sensor size. Sensors were once available only in 1/4, 1/3, and 1/2-in. sizes, but now come in sizes up to 90 mm. Larger sizes offer higher resolution, but require larger lenses and increase the entire system's space requirement. Dividing sensor size by image (lens-to-sensor) distance gives the camera's angular field of view (FOV) in radians, a small-angle approximation. Dividing the size of the object or scene by this angle then gives the required lens-to-object distance.
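A minimal sketch of these two calculations follows; the sensor, lens, and scene dimensions are assumed for illustration.

```python
# Worked example with assumed values, using the small-angle relations
# above: angular FOV ~ sensor size / image distance, and
# lens-to-object distance ~ scene size / angular FOV.
sensor_size_mm = 8.8     # assumed sensor width, roughly a 2/3-in. sensor
image_dist_mm = 25.0     # assumed lens-to-sensor (image) distance
scene_size_mm = 200.0    # assumed width of the scene to be imaged

fov_angle_rad = sensor_size_mm / image_dist_mm    # 0.352 rad
working_dist_mm = scene_size_mm / fov_angle_rad   # about 568 mm to the object
```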

In most cases, engineers match a camera's resolution to a task's critical dimensions. It is important not to confuse image resolution with a motion control system's resolution. For example, semiconductor processing equipment reaches submicron motion control resolution, which visible light optics cannot physically achieve and which is irrelevant to a vision system reading wafer ID codes.

Next, designers must consider the lens and its focal length f. Required focal length depends on required FOV size F, sensor size S, and the camera's image distance v: f = (Fv) / (S+F).
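Plugging in the same assumed numbers from the field-of-view sketch above gives a rough focal-length estimate:

```python
# Worked example for f = (F * v) / (S + F); values are assumed.
F = 200.0    # required field-of-view size, mm
S = 8.8      # sensor size, mm
v = 25.0     # image (lens-to-sensor) distance, mm

f = (F * v) / (S + F)    # about 23.9 mm focal length
```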

Sufficient illumination and the length of time a shutter remains open determine the required size of a lens' opening, or aperture. The larger the aperture and the brighter the illumination, the faster a camera captures images. Lens manufacturers specify apertures as the ratio of focal length to lens opening, often called the f number. Relatively fast lenses have f numbers below 4; those above 5.6 are called slow because their small apertures need significant time to gather enough light to form an image with adequate contrast. Generally, engineers should use the fastest (lowest) f number possible.
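As a small illustration with assumed values, the aperture diameter follows directly from this definition:

```python
# Because the f number is focal length divided by aperture diameter,
# the opening size follows directly. Values are assumed for illustration.
focal_length_mm = 24.0
f_number = 4.0                               # a relatively fast lens
aperture_mm = focal_length_mm / f_number     # 6.0 mm opening
```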

In addition, a lens must mate with a camera. Therefore, lens mounting should form a light-tight seal around the lens, holding it rigidly against the camera's body. Many lens-mount standards borrowed from photographic or video-surveillance applications work well for small-size (1/2 in. or less) sensors. For 35 mm or larger sensors, mounting becomes problematic and may require a custom design.

When choosing light sources, designers strive for fixed illumination levels that produce an evenly lit scene. Levels that change frequently are often due to a flickering or aging source. Flicker creates rapid changes that a person can't see, but that foil short exposures. Aging causes a slow drift in overall output, which washes out detail. As a result, machine vision integrators usually choose LED light sources for their stability and reliability.

A scene's size and geometry determine illumination. When illuminating large scenes, one might consider a vast array of bright lights strategically arranged to fill the space an object moves through. The two basic geometries are a flat even area source — common when silhouetting objects — and ring lights that fit around a lens and illuminate an object's visible face.

An object's surface morphology, color, and finish also affect light-source selection. Objects with a complex morphology, such as machine parts, create shadows that change as they move. Shiny surfaces are especially difficult, as they appear dark from most angles, but flare brightly when reflecting a light source directly into the lens.

For more information, contact Edmund Optics at (800) 363-1992, visit edmundoptics.com, or write the editor at [email protected].

Self-operating vehicles

One demonstration of machine vision's flexibility is the 2005 DARPA Grand Challenge, in which driverless vehicles raced across 132 miles of Mojave Desert terrain near the California-Nevada border. Of the 23 starters, five finished the course, four of them in under ten hours, and the winner counted machine vision among its primary sensors. Vision sensors distinguished hazards, such as rocks, ditches, and animals, that could not be detected otherwise, and also helped the vehicles recognize and follow the correct driving path.
