Machine Design
Advances in industrial robots

Authored by:
Steve Prehn
Vision Product Manager
Fanuc Robotics America Corp.
Rochester Hills, Mich.
Edited by Leslie Gordon
[email protected], Twitter @LeslieGordon
Key points:
• Industry has not yet developed a truly versatile domestic robotic servant.
• But industrial robots are smarter than ever before.
• An industrial robot’s intelligence comes from vision systems and force sensors.
Fanuc Robotics
“Robots with Feelings,” Machine Design, Nov. 3, 2011, p. 76.

Most robots in operation today are industrial types that engage in repetitive tasks. Robots assemble automobiles, weld sheet metal, and load widgets into CNC machines, among other jobs. Price, payload, reach, and speed are some of the design parameters that determine the best robot used for a particular industrial work cell.

Industrial robots are loaded with software that serves as “functional experience modules” and provides data or directions for how a robot will react when executing a task. The modules also let engineers choose particular features to generate programs that perform specific processes.

That said, the humanoid robots of sci-fi films closely resemble humans in almost all respects except that the robots lack emotion. With nimble “hands” and high-powered “brains,” the robots move seamlessly from task to task. Researchers are making progress in developing machines that are more humanlike, but they have a way to go to develop a truly versatile domestic robotic servant. Still, an interesting question arises: Are robots capable of evolving, or are they forever limited to merely executing programs?

On the flip side of the coin, human limitations don’t apply to robots. Consider the number and size of components needed to populate the printed-circuit board found in most cell phones. Unlike humans, robots are not limited by the size of their fingers. Robots can be outfitted with tiny pinchers instead. Nor do they place components in the wrong location. Robots shine at consistently performing repetitive tasks.

Of course, industrial robots are not intelligent in the sense of having conscious thought. They can, however, make decisions that impact their performance. Most tasks robots handle involve moving around physical objects. Robots can be made to be “self-aware” in responding to objects via options for “sight” and “touch.”

Eye, robot
For example, iRVision is a vision feature of a new robot controller from Fanuc, Rochester Hills, Mich. The option gives a robot its “eyes” in the form of a camera. Robots so equipped analyze images to locate and then pick up parts. The controller knows the camera position with respect to the robot and can compare a found position against the coordinates of its working area. Similar to infants learning how to move their arms and hands to grasp objects they see, robots can refine their knowledge of their own movement. Unlike infants, though, robots do so via kinematics — they calculate the mathematical relationships between arm-segment lengths and joint locations to generate part positions and orientations.
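
The kinematic calculation described above can be illustrated with a simple two-link planar arm; the link lengths and joint angles below are hypothetical, and a real six-axis robot applies the same idea in three dimensions:

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """Compute the (x, y) position of a two-link planar arm's end
    effector from its link lengths and joint angles (in radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both links 0.5 m, both joints at 0 rad: the arm points straight out,
# so the end effector sits 1.0 m along the x axis.
x, y = forward_kinematics(0.5, 0.5, 0.0, 0.0)
```

Given the joint angles reported by its encoders, the controller can always compute where the arm's tool point sits relative to the camera and the working area.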

At a basic level, a vision system might comprise a camera mounted on a robotic arm. The controller lets engineers create a simple program to direct the robot to look for an object and move toward it. Once the arm starts to move, the program tells the camera to take another image to determine whether the arm has moved in the right direction and far enough to put the object in the center of the camera view. When the controller determines that the object is still too far away, it directs the robotic arm to refine its position. Each move the robot makes changes the camera’s perspective, so running the arm through a series of moves lets the arm accurately home in on the part. In this sense, the robot can be said to be “intelligent.”
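
The look-then-move refinement described above amounts to a simple proportional servo loop. A minimal sketch, with hypothetical stand-ins for the camera and motion calls:

```python
def center_object(get_pixel_error, move_arm, gain=0.5, tol=2.0, max_steps=20):
    """Iteratively move the arm until the object sits at the center of
    the camera image. get_pixel_error() returns the (dx, dy) offset of
    the object from image center in pixels; move_arm(dx, dy) nudges the
    arm by a proportional amount in the camera frame."""
    for _ in range(max_steps):
        dx, dy = get_pixel_error()
        if abs(dx) < tol and abs(dy) < tol:
            return True                    # object centered: done
        move_arm(gain * dx, gain * dy)     # correct a fraction of the error
    return False                           # did not converge in time
```

Taking only a fraction of the measured error on each pass keeps the loop stable even though every move changes the camera's perspective.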

Reach out and touch something
Besides vision, industrial robots can feature another human sense: touch. Close your eyes and consider how your sense of touch can help you insert an object into a hole. You can do this by feel alone. In a similar fashion, tactile feedback lets robots “feel” how they are engaging with parts. They do so with force sensors that report not only the applied force but also the rotational moment around the direction of force.

In essence, this capability lets robots “feel” to perform such tasks as placing pegs into like-sized holes. Should a peg bind during insertion, the sensor detects the excess force and changes the direction of insertion.
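
That insertion strategy can be sketched as a guarded-move loop. The sensor and motion callbacks here are hypothetical placeholders for a real force-control interface:

```python
import math

def insert_peg(sense, step_down, nudge_lateral, target_depth,
               force_limit=20.0, lateral_step=0.05, max_steps=500):
    """Insert a peg by stepping downward. When lateral force exceeds
    force_limit the peg is binding, so take a small sideways step away
    from the contact before continuing. sense() returns the current
    insertion depth and the lateral forces (depth, fx, fy)."""
    for _ in range(max_steps):
        depth, fx, fy = sense()
        if depth >= target_depth:
            return True                       # peg fully seated
        if abs(fx) > force_limit or abs(fy) > force_limit:
            # Binding: back away from the side generating the force.
            dx = -math.copysign(lateral_step, fx) if abs(fx) > force_limit else 0.0
            dy = -math.copysign(lateral_step, fy) if abs(fy) > force_limit else 0.0
            nudge_lateral(dx, dy)
        else:
            step_down()                       # no binding: keep inserting
    return False
```

Using a small fixed sideways step, rather than a step proportional to the force, avoids overcorrecting when the peg contacts a stiff hole wall.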

In this case, engineers set up rules in the controller program that dictate how the robot compensates for measured forces based on general categories. Often these categories, or so-called “motion schedules,” must run in succession to complete a job. A common example is a robot learning to assemble a part. This seemingly simple task becomes more complicated when applying too much pressure while securing one part to another could damage the component.
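
One way to picture these motion schedules is as an ordered list of phases, each paired with its own force rule. The phase names and limits below are invented for illustration:

```python
# Each "motion schedule" pairs a motion phase with the force rule
# (here, a maximum allowed force in newtons) that governs it.
schedules = [
    {"name": "approach", "max_force": 5.0},
    {"name": "search",   "max_force": 10.0},
    {"name": "insert",   "max_force": 30.0},
    {"name": "seat",     "max_force": 15.0},
]

def run_schedules(schedules, execute_phase):
    """Run each schedule in succession. execute_phase(name, max_force)
    performs one phase and returns False if its force rule was violated;
    the function reports which phase failed, or "done" on success."""
    for phase in schedules:
        if not execute_phase(phase["name"], phase["max_force"]):
            return phase["name"]
    return "done"
```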

Together, force and vision are expanding the range of problems robots can help solve. For example, during the casting of a part, excess raw material might remain attached to the part. Vision can detect the presence and location of the material and force sensors can help regulate the pressure applied in grinding off the scrap.

On a recent job, a company wanted one of its robots to use vision and force sensing to perform a task that these methods could not have handled just a few years ago. The relatively simple task of welding stamped metal brackets to a frame was complicated by the need to position the brackets near marks that varied from frame to frame. In addition, bracket dimensions were not identical from part to part. The robot “adapts” by calculating maximum surface contact and ensuring that the gaps between the sides of the brackets are even.

The robot first uses vision to find the bracket. The robot then passes the bracket across another vision sensor that maps out the bottom of the bracket’s mating surface. The bracket goes to the frame, where vision locates the marks on the frame indicating where the bracket should be placed for welding. The robot puts the bracket on the correct spot using a force sensor to “feel” that the surface of the bracket is flush with the frame surface. The machine pushes the bracket down and wiggles it to make sure the bracket legs are the same distance from the surface. Last, another robot welds the bracket into place.

In another recent example, a company wanted an efficient way to hang doors on cars. The solution required an iRVision version that supports 3D vision, which is based on laser triangulation. A traditional robot could have mounted the doors only if the cars were in a fixed location and all the doors were perfect, hardly a realistic possibility. However, vision and force can adjust for the normal imperfections found in typical manufacturing settings. The car was not always in the same exact spot. The door opening had a stack-up tolerance, but variations from car to car were small. Similarly, the doors themselves had relatively small variations in size.
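
Laser triangulation recovers depth from geometry: with the camera and laser emitter separated by a known baseline, the angle each makes to the lit spot fixes the spot's distance. A sketch of the basic relationship, with illustrative numbers:

```python
import math

def triangulated_depth(baseline, cam_angle, laser_angle):
    """Depth of a laser spot viewed by a camera and a laser emitter
    separated by `baseline`. Both angles are measured from the
    direction perpendicular to the baseline toward the spot."""
    return baseline / (math.tan(cam_angle) + math.tan(laser_angle))

# Devices 0.1 m apart; a spot 1 m away, offset 0.03 m toward the
# camera side, subtends atan(0.03) at the camera and atan(0.07)
# at the laser.
z = triangulated_depth(0.1, math.atan(0.03), math.atan(0.07))
```

Sweeping the laser line across a surface and repeating this calculation at each point builds up the 3D profile the controller uses to locate the door and its opening.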

Here, the robot first roughly locates the car and then picks up a door. Next the robot uses force sensing while inserting the door into the body opening. Force sensing ensures the door is centered and that the door surfaces are flush with the body. Another robot moves a camera around the door to measure the width of the gaps at critical places. The controller then calculates how much the robot must move the door before welding it to the car frame.
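
The gap-based correction in that last step can be sketched as a simple calculation; the gap names and sign conventions here are hypothetical:

```python
def door_correction(gaps):
    """Given measured gap widths (mm) at the left, right, top, and
    bottom of a hung door, compute how far to shift the door so that
    opposing gaps become equal. Positive dx shifts the door toward
    the right gap; positive dz shifts it toward the top gap."""
    dx = (gaps["right"] - gaps["left"]) / 2.0    # equalize left/right gaps
    dz = (gaps["top"] - gaps["bottom"]) / 2.0    # equalize top/bottom gaps
    return dx, dz

# A door sitting 1 mm too far left needs a 1 mm shift to the right.
dx, dz = door_correction({"left": 4.0, "right": 6.0,
                          "top": 5.0, "bottom": 5.0})
```

Shifting by half the difference closes the wide gap and opens the narrow one by the same amount, centering the door in the opening.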

© 2011 Penton Media, Inc.
