Machine vision has been a scary subject for engineers for several years. Some have bad experiences to blame, some have chosen to avoid it altogether, and some haven't even considered it as a potential solution to their automation challenges.
The good news? Over the last 10 years, vast improvements have occurred in both hardware and software platforms, making machine vision a powerful addition to every engineer's toolbox.
Pick a winner
Software development tools, once a sore spot for machine vision, are rapidly becoming one of its primary strengths. Today, designers can choose the application development environment that suits them best, whether it's a text-based language like C++ or Visual Basic, a graphical programming environment like LabVIEW, or a configurable environment that can whip out an application with little or no programming.
Choosing a software platform involves taking a close look at such factors as ease of use, scalability, hardware compatibility, and cost — not just for developing the application, but also for deploying it. Ease of use in the later stages of application development is particularly important, as is the assurance of future support. Nothing costs more than having to reinvent yesterday's solutions.
Many vision software products are available for evaluation before purchase. Take advantage of this “test drive” period to compare how easy each is to learn and use. In most cases, the differentiating factor will be how quickly you can get an application up and running. It's also a must that the vision package you intend to use supports the development environment you're considering.
Software scalability can be measured in many ways, but one factor to keep in mind is how easy it is to move from one phase of development to the next. Four major phases make up the software development process for machine vision: pre-design, design, prototyping, and deployment. It is important to find a software package that moves freely and quickly among these stages.
Test for success
Pre-design, as the name suggests, is everything prior to designing the application. For this, a PC-based system works best, potentially with a low-cost IEEE 1394 camera for image acquisition.
Pre-design should begin once a general idea of the hardware setup has been developed. It helps to have several examples (good and bad) of the product to be inspected, some decent lighting, and an idea of where the camera will be mounted. Next, mount the test camera where the inspection is to take place and start shooting test images.
To simulate realistic conditions, be sure to give the test camera many different views of the good and bad product samples. Another trick is to vary the test lighting to simulate different weather conditions and internal lighting scenarios. Also adjust the lens focus to produce somewhat fuzzy images. At this point, you won't be doing much processing; this is simply image acquisition onto a local drive.
Basically, the goal is to create a super-set of all conditions that may occur while the deployed application is running. This way, you can easily account for all of the special cases in the system before it's actually fielded. Once the images have been captured, it's time to move to the design phase.
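The superset of conditions described above can also be approximated offline: starting from one captured frame, synthesize brighter, dimmer, and slightly defocused variants. The sketch below uses NumPy on 8-bit grayscale images; the gain values, the 3x3 box-blur stand-in for defocus, and the uniform test frame are illustrative assumptions, not anything prescribed by the article.

```python
import numpy as np

def simulate_lighting(img, gain, offset=0):
    """Scale pixel intensities to mimic brighter or dimmer lighting."""
    out = img.astype(np.float32) * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)

def simulate_defocus(img, passes=1):
    """Crudely approximate an out-of-focus lens with a 3x3 box blur."""
    out = img.astype(np.float32)
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        # Average each pixel with its 8 neighbors.
        out = sum(
            padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
            for dy in range(3) for dx in range(3)
        ) / 9.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Build a small "super-set" of conditions from one captured frame.
frame = np.full((4, 4), 128, dtype=np.uint8)   # stand-in for a real capture
variants = [simulate_lighting(frame, g) for g in (0.6, 1.0, 1.4)]
variants += [simulate_defocus(frame, passes=p) for p in (1, 2)]
```

Synthetic variants complement, rather than replace, real test shots: they cover the in-between conditions it may be hard to stage on the shop floor.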
Pushing the envelope
The design phase is where to experiment with different tools in the vision software package to see which ones work best for the inspection at hand. Using your suite of images, make sure that the image processing strategy you develop can handle all of the conditions imposed on the test images.
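One way to check a candidate strategy against the whole image suite is a small scoring harness that runs the inspection over every labeled sample and reports the misses. The `inspect` routine below is a hypothetical stand-in (a simple mean-brightness check), not any particular vendor's tool; a real strategy would use edge detection, pattern matching, blob analysis, and so on.

```python
import numpy as np

def inspect(img, min_mean=90, max_mean=200):
    """Stand-in inspection: pass if mean brightness is in range."""
    return min_mean <= float(img.mean()) <= max_mean

def score_strategy(samples):
    """samples: list of (image, expected_pass) pairs from the test suite.
    Returns (number correct, list of misclassified pairs) so the weak
    spots in the strategy can be reviewed one by one."""
    misses = [(img, want) for img, want in samples if inspect(img) != want]
    return len(samples) - len(misses), misses

good = np.full((8, 8), 128, dtype=np.uint8)   # nominal part
dark = np.full((8, 8), 30, dtype=np.uint8)    # under-lit / defective
correct, misses = score_strategy([(good, True), (dark, False)])
```

Re-running a harness like this after every algorithm tweak makes it obvious when a change that fixes one condition quietly breaks another.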
Many vision software packages include menu-driven “assistants” to let users try out different tools and explore “what if” conditions on sample images. Many of these assistants can also generate code, giving users a huge head start in software development.
In most cases, the design stage occurs on a PC-based system. The additional processing power is beneficial when it's necessary to execute several machine vision algorithms while the application is tested and tweaked. During this stage, designers needn't concern themselves with setting up triggering or industrial communication. This phase is about making sure a robust set of algorithms is available to successfully inspect products, regardless of external conditions.
It's also where designers usually find out if there is some special case that can't be solved with the application software or algorithms developed to that point. Most applications involve an unusual case or two. The best way to handle them, in practice, is to notify the operator of the problem — that the camera is out of focus, for example — or to recycle the product in question for a second inspection.
Once the design phase is complete and the application software is able to catch most of the unique cases, it's time to move to the prototyping stage, which usually includes a hardware transition. During prototyping, most applications are moved from the PC-development platform to smart cameras, compact vision systems, or industrial PCs. There are exceptions, but a standard desktop PC will not suffice for most industrial applications.
Prototyping also involves integrating image acquisition, lighting control, encoders, proximity sensors, and other system components. Some software packages will easily transition from the PC to a more embedded target, while others will require starting from scratch and developing the application again. A software platform that scales easily from one target to the next will save precious time here, dramatically reducing time to market.
During this phase, it is also important to validate that the target provides full functionality for the software developed. Check as many conditions as possible to make sure the algorithm created with stored images actually works with live images under the real conditions. The more quality time spent here, the greater the probability of a successful deployment.
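Validating the target can be treated as a regression check: re-run the inspection on the deployed hardware and compare each verdict with the one recorded during the design phase. Everything in this sketch — the image ids, the `run_inspection` callable, the stand-in target — is illustrative, not from the article.

```python
def validate_on_target(run_inspection, baseline):
    """Re-run the inspection on the target hardware and compare each
    result with the verdict recorded during the design phase.
    baseline: dict mapping image id -> expected pass/fail verdict.
    Returns the disagreements as {id: (expected, got)}."""
    mismatches = {}
    for image_id, expected in baseline.items():
        got = run_inspection(image_id)
        if got != expected:
            mismatches[image_id] = (expected, got)
    return mismatches

# Hypothetical stand-in target that flips the verdict for one image.
baseline = {"part_001": True, "part_002": False, "part_003": True}
target_results = {"part_001": True, "part_002": False, "part_003": False}
bad = validate_on_target(target_results.get, baseline)
# bad == {"part_003": (True, False)}
```

An empty mismatch list on the real hardware, under real lighting and triggering, is a strong signal that the algorithm tuned on stored images will hold up live.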
Once the algorithm has been tested on real images — using the actual hardware that will be in place during live inspections — it's time to move to the deployment stage. Ease of use comes into play again here. Some development environments are extremely easy to deploy, while others are not. In cases where only a single deployment is necessary, this may not be a big factor.
However, if the application involves multiple targets in multiple locations, then ease of deployment becomes a bigger issue. During this phase, you may see a blend between scalability and ease of use. Besides scaling across development phases — that is, running on different types of targets — the software must also scale with the number of deployments.
Given all that's involved in the development of machine vision software, designers brace for the worst when they ask, “How much is this going to cost me?” The answer to that depends on several things.
The first, naturally, is the cost of the vision development software itself. Next is the cost of deployment. Some development environments are expensive up front, but cost very little to deploy. This doesn't benefit the single-deployment end user, but for someone who is distributing an application to multiple systems, it becomes an important benefit.
The third cost is associated with the learning curve required to become proficient with the software. This can be calculated in man-hours devoted to learning the software, and it may or may not include the cost of classes required to accelerate the process. This is yet another reason why ease of use is such an important factor.
The fourth and final cost is maintenance. Upgrading to a faster processor or a higher-resolution camera to extend the life of a solution is often a difficult task; be sure to consider this when evaluating software. Some development environments make it a breeze to upgrade hardware, while others make it a nightmare.
To read more about machine vision systems and how to select critical components, visit motionsystemdesign.com's Knowledge FAQtory and look for links that will connect you to related articles and information.