Cognex Corp., Natick, Mass.
With so many vision systems available today, identifying the most suitable one for a particular application can be daunting. A vision system that merely performs the necessary inspection tasks is not enough; several other factors must be considered for successful deployment.
First consideration: Setup
Vision applications don't usually require elaborate runtime interfaces, but operators often interact with vision systems during part changeovers, when altering tolerance parameters, and when diagnosing causes of failure.
Better vision systems allow quick and easy configuration of these and other application facets without coding in Visual Basic or a proprietary script-based language. Some vision software also includes network management tools to simplify remote administration of multiple systems, including backup, image playback, firmware upgrades, and context-sensitive help.
A couple of tips: Operator interfaces that display images allow immediate analysis of failed parts, along with pass/fail statistics, to help operators quickly identify trends. Some vision tools can also be adjusted, enabled, or disabled by operators.
Part location tools
Machine vision requires software to find parts within the camera's field of view. Setup with this software is typically the first step in any vision application, from simple robotic pick-and-place operations to assembly verification tasks. It's also the most critical step, as it determines application success or failure.
Locating parts in an actual production environment can be challenging. First, vision systems are trained to recognize parts based on a model image. However, even tightly controlled manufacturing processes vary in the way parts appear to the vision cameras. Therefore, vision part-location software must be intelligent enough to compare model images to actual objects moving down a production line, regardless of which side of the part faces the camera, its distance from the camera, shadows, reflections, line speed, and normal appearance variations.
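The core idea behind part location can be illustrated with a toy example. The sketch below slides a trained model patch across an image and scores each position with normalized cross-correlation, which is tolerant of overall brightness shifts. Real part-location tools use far more sophisticated (often geometric) models that also handle rotation and scale; the data and grid sizes here are invented for illustration.

```python
# Toy sketch of model-based part location via normalized cross-correlation.
# Images are lists of pixel rows; real tools handle rotation, scale, etc.

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    ma = sum(flat_a) / len(flat_a)
    mb = sum(flat_b) / len(flat_b)
    num = sum((x - ma) * (y - mb) for x, y in zip(flat_a, flat_b))
    da = sum((x - ma) ** 2 for x in flat_a) ** 0.5
    db = sum((y - mb) ** 2 for y in flat_b) ** 0.5
    return num / (da * db) if da and db else 0.0

def locate(image, model):
    """Slide the model over the image; return (row, col) of best match."""
    mh, mw = len(model), len(model[0])
    best = (-2.0, (0, 0))
    for r in range(len(image) - mh + 1):
        for c in range(len(image[0]) - mw + 1):
            patch = [row[c:c + mw] for row in image[r:r + mh]]
            best = max(best, (ncc(patch, model), (r, c)))
    return best[1]

# Toy 5x5 "image" containing the trained pattern at row 2, column 3.
image = [
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 10, 90, 90],
    [10, 10, 10, 90, 10],
    [10, 10, 10, 10, 10],
]
model = [[90, 90], [90, 10]]
print(locate(image, model))  # (2, 3)
```

Because the score is normalized, the same model still matches if the whole scene gets uniformly brighter or darker, which hints at why trained models tolerate some lighting variation.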
Preprocessing tools are software that alter raw images to emphasize target features and minimize unwanted ones. This prepares images for more powerful vision tools and can significantly improve overall robustness. Preprocessing tools can increase the contrast between the part and its background, mask insignificant and potentially confusing image features, eliminate hot spots reflecting off of surfaces, and differentiate smooth and rough textures.
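One of the simplest preprocessing steps mentioned above, increasing contrast between part and background, can be sketched as a linear contrast stretch. The pixel values and thresholds below are invented for illustration; production tools offer many more filters (masking, glare removal, texture filtering).

```python
# Hedged sketch of linear contrast stretching: remap pixel values so
# the full [lo, hi] range is used, making a faint part stand out.

def stretch_contrast(pixels, lo=0, hi=255):
    """Linearly rescale pixel values to span [lo, hi]."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        return [lo] * len(pixels)          # flat image: nothing to stretch
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

# A low-contrast row of pixels: background ~100, part ~140.
row = [100, 102, 101, 140, 138, 141, 100]
print(stretch_contrast(row))  # [0, 12, 6, 249, 236, 255, 0]
```

After stretching, the part pixels sit near 255 and the background near 0, so downstream tools (thresholding, edge detection) have far more signal to work with.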
As we'll now explore, image-preprocessing tools also optimize trained models by sharpening the edge contrast of characters and filtering out extraneous background in the image — so markings on products are read more reliably.
Character reading and verification
Whether vision systems are reading stamped alphanumeric codes on automotive parts or verifying date and lot code information on medicine bottles or packages, several capabilities are paramount for character reading and verification.
Statistical font training
This capability builds a font by learning models of characters that appear in a series of images. The images should include multiple instances of each character, and span the full range of quality likely to occur in production. The resulting font is tolerant of normal variations in print quality, whether due to poor contrast, variable locations, degradation, or stroke-width variations. Unless a designer knows in advance that every code will be marked with the same quality seen in the reference images used to teach character models, statistical font training can be crucial to the success of reading or verification applications.
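Conceptually, the training step can be sketched as pixel-wise averaging of many character samples, so the model reflects which strokes are reliable and which vary. This is a deliberate simplification of real statistical font training; the 3x3 binarized grids below are invented for illustration.

```python
# Toy sketch of statistical font training: average many binarized
# samples of one character so the model tolerates print variation.

def train_character(samples):
    """Pixel-wise mean over samples -> grayscale character model."""
    n = len(samples)
    h, w = len(samples[0]), len(samples[0][0])
    return [[sum(s[r][c] for s in samples) / n for c in range(w)]
            for r in range(h)]

# Three noisy instances of a character stroke (1 = ink, 0 = paper).
samples = [
    [[1, 1, 1], [0, 1, 0], [0, 1, 0]],
    [[1, 1, 0], [0, 1, 0], [0, 1, 0]],
    [[1, 1, 1], [0, 1, 0], [0, 1, 1]],
]
model = train_character(samples)
print(model[0])  # corner pixel averages below 1.0: unreliable ink
```

Pixels that are inked in every sample score 1.0 in the model, while pixels that vary score lower, so a matcher weighting by the model naturally tolerates the kinds of print-quality variation described above.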
Instant image recall
This capability enables line operators and technicians to quickly and easily view failed images on a display. Whether the cause is a camera jarred out of position or a damaged label, it is important to know immediately why failures occur, so corrective action can be taken.
Consider a packaging plant, in which container materials, labeling equipment, printing methods, and ambient lighting can vary considerably over time. Here and in similar applications, designers should perform tests on large samples of good, marginal, and poor-quality labels to see how the vision performs under variable real-world conditions. Because character positions can shift from label to label, it's also a good idea to enlarge the region of interest around the character string. This will help determine how reliably the vision system's reading and verification tools operate within a larger search region.
Repeatability and codes
If an application involves critical dimensional measurements, the vision system's gauging tools must be accurate and perform with high repeatability. Full suites of gauging tools can allow the right fit for measurement requirements — without requiring designers to write custom scripts or functions.
The standard approach is to make a new vision system measure a part's key dimension dozens of times to test gauging repeatability. (This should be done without changing part position, lighting, or other variables.) Record and analyze the measurements, making sure that any variance is well within measurement tolerances for the application.
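The analysis step of that repeatability test is straightforward to sketch. The readings, tolerance, and "spread under 10% of tolerance" pass criterion below are illustrative assumptions, not a standard; use whatever acceptance rule the application specifies.

```python
# Sketch of the gauging-repeatability check: measure one dimension
# many times under fixed conditions, then compare the spread to the
# application's tolerance band. All numbers are illustrative.
import statistics

readings_mm = [25.001, 24.999, 25.002, 25.000, 24.998,
               25.001, 25.000, 24.999, 25.002, 25.001]
tolerance_mm = 0.05            # allowed deviation for this application

mean = statistics.mean(readings_mm)
spread = max(readings_mm) - min(readings_mm)
stdev = statistics.stdev(readings_mm)

# Assumed rule of thumb: repeatability spread should be a small
# fraction (here <10%) of the tolerance band.
print(f"mean={mean:.4f} mm  spread={spread:.4f} mm  stdev={stdev:.5f} mm")
print("repeatable" if spread < 0.1 * tolerance_mm else "not repeatable")
```

Here the 0.004 mm spread is well inside 10% of the 0.05 mm tolerance, so the gauge would pass this particular acceptance rule.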
Code reading is another common vision function. Industrial environments often require vision systems that can read degraded or poorly marked 2D Data Matrix codes. Sometimes the code is placed in a slightly different location on each part as well. Other times, the part material (such as metal, glass, ceramic, and plastic) and the marking method employed (such as dot peen, etching, hot stamping, and inkjet) vary. Only capable cameras and software can extract readable codes from such scenes.
Two other considerations are code quality verification and read speed. Look for products that can verify code quality to established standards. This can provide valuable information about marking process quality. Also, fast production lines and high throughput requirements demand fast readers; some vision systems available can read more than 7,200 codes per minute.
To evaluate read speed, present a well-marked code to the vision system and have it read the code hundreds of times under pristine conditions to determine the number of reads per minute. If the read rate under these optimized conditions is less than 100%, problems will arise: at a production speed of 2,000 parts per hour, a read rate of 99.7% would fail to read the ID codes on 48 parts in a single eight-hour shift.
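The arithmetic behind that figure is worth making explicit: the missed fraction of parts, times the line rate, times the shift length.

```python
# Worked version of the read-rate arithmetic above: a 99.7% read rate
# at 2,000 parts per hour over an eight-hour shift.
parts_per_hour = 2000
read_rate = 0.997
shift_hours = 8

missed = parts_per_hour * shift_hours * (1 - read_rate)
print(round(missed))  # 48
```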
After establishing read speed, designers should run a more challenging read-rate test to determine the impact of factors such as line vibration, variable lighting conditions, and extremely high line speeds on the vision system's reading performance. To do this, present a large sample of codes of good, bad, and marginal quality to the vision system. At the same time, simulate vibration and motion blur by shaking the part and sliding it back and forth beneath the camera as it acquires an image. This gives a rough assessment of how well the read rate withstands real-world conditions.
As more vision systems are used throughout manufacturing, centralized management becomes increasingly important. Vision-system networking is essential to share data, support decision-making, and accelerate integrated processes. For example, networking enables vision systems to transmit pass/fail results to PCs for analysis, or communicate directly with PLCs, robots, and other factory automation devices.
If a vision system must be linked to PCs at the enterprise level, choose a system that supports standard networking protocols. TCP/IP client/server enables vision systems to easily share results data with other vision systems and control devices over Ethernet without any code development. SMTP (simple mail transfer protocol) enables designers to immediately receive emails on PCs or cell phones when a production problem occurs. Likewise, FTP (file transfer protocol) allows inspection images to be stored on the network for later analysis, while the standard Internet protocol Telnet enables remote login and connection from host devices. DHCP (dynamic host configuration protocol) allows a vision system to automatically receive its network IP address from a server, enabling true plug-and-play performance. Finally, DNS (domain name service) allows designers to assign each vision system a meaningful name (such as bottling line system one, for example) instead of a numeric IP address.
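The TCP/IP result sharing described above can be sketched with plain sockets. Everything here is invented for illustration (the port assignment, the `part=.../result=...` message format, and the roles); a real deployment would follow the vendor's documented protocol rather than this ad-hoc string.

```python
# Hedged sketch of sharing pass/fail results over TCP/IP: a "vision
# system" client pushes one result string to a "line PC" server.
# Message format and roles are invented for illustration.
import socket
import threading

def line_pc(server_sock, results):
    """Accept one connection and record the result string it sends."""
    conn, _ = server_sock.accept()
    with conn:
        results.append(conn.recv(1024).decode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # bind to any free local port
server.listen(1)
port = server.getsockname()[1]

results = []
t = threading.Thread(target=line_pc, args=(server, results))
t.start()

# The "vision system" reports an inspection result to the line PC.
with socket.create_connection(("127.0.0.1", port)) as vision:
    vision.sendall(b"part=1042 result=PASS")

t.join()
server.close()
print(results[0])  # part=1042 result=PASS
```

The point of the sketch is the claim in the text: with standard TCP/IP, sharing results requires no special gateway hardware and very little code on either end.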
Integrating vision with PLCs, robots, and automation devices requires other tools. Industrial Ethernet protocols (such as EtherNet/IP, PROFINET, MC Protocol, and Modbus TCP) link vision to common PLCs and other devices over Ethernet cable, eliminating complex wiring schemes and costly network gateways. Another option here is fieldbus networks, including CC-Link, DeviceNet, and PROFIBUS. (Note that a protocol gateway accessory is usually needed to add a vision system to a fieldbus network.) Finally, RS-232 and RS-485 serial protocols are needed to communicate with most robot controllers. Here, make sure that the vision systems include software for easy remote control and monitoring over the network from any location.
Accessories can go a long way towards ensuring trouble-free system integration and, in the case of lighting, can even make or break the application. Lights help even out varying ambient light conditions and minimize misreads caused by differing surface characteristics.
Nearly every machine vision solution requires a unique lighting approach. Common options include ring lights, which provide soft, even illumination from all directions; back lights, which create maximum contrast between a part and its background; and dark-field lights, which provide low-angle illumination for imaging part surface irregularities.
Another option is communications peripherals such as I/O modules and network gateway modules to support easy, quick connectivity between the vision system and PLCs, robots, and other factory automation devices and networks. Similarly, operator interface panels allow easy, plug-and-go set-up and deployment, plus ongoing monitoring and control of vision systems without a PC. Touch-screen interfaces simplify networking here.
A final option is rugged IP and NEMA-rated metal camera enclosures to withstand dust and moisture without requiring a separate enclosure accessory. If a plant environment is especially harsh or requires frequent washdowns, external enclosures (prequalified for use with the system) are recommended.
While some vision applications are complex, many are addressed with affordable standalone vision systems. These should not require a PC during configuration or in production mode. Plug-and-go performance enables quick configuration, right out of the box. Just as important, the vision system should not require designers to roll a PC onto the factory floor every time changes to the application must be made. Some standalone vision systems also connect to monitors for live-image display without a PC.
Final tip: Get support from vision suppliers who work to understand application requirements and provide resources during application development and systems integration, as well as deployment.
For more information from Cognex, call (877) 264-6391 or visit www.cognex.com.