Articulate has two meanings: to express with clarity and effectiveness, or to build with joints and segments. Articulating robots are both — jointed arms that precisely express motion tasks by moving products, assembling parts, and operating tools.
These robots are most often programmed in Cartesian coordinates, because engineers are comfortable thinking of space in terms of X, Y, and Z. However, serial designs stack axes so that each carries the next, which draws more power. They also maintain less stiffness than parallel robots. Finally, most articulating robots have reduced payload capacity when their arms are fully extended, and can only reach a limited area.
Adept Technology Inc., Pleasanton, Calif., makes intelligent parallel robots loaded with vision and packaging-management software. Here, parallel has the same connotation it does in circuits: The arms are not physically parallel, but all link to a common base and then come together at another point.
Adept's parallel robots are the only units with a four-arm rotational platform. In this design, four hung arms are free to swing with the platform and extend in length. These motions (usually concurrent) demand highly coordinated control, but push speed and acceleration limits: Certain models can sweep along and then back through a U-shaped path of 40+700+40 mm in less than half a second — moving a 6-kg payload as fast as 10 m/sec. (Just for comparison, one VS-Series articulating unit from DENSO Robotics, Long Beach, Calif., reaches to 850 mm, with full-circle 25+300+25-mm cycle times from 0.49 sec and repeatability from ±0.020 mm.)
“In industry, parallel kinematics is a general term for multi-axis motion in which each actuator affects each axis — in all degrees of freedom,” explains Stefan Vorndran of PI (Physik Instrumente) L.P., Auburn, Mass. In contrast, serial kinematics is the more traditional and simpler arrangement, in which each axis moves independently. “The market share of parallel robots, such as our hexapod, is still small, but steadily growing,” continues Vorndran. Tripods are another common parallel-kinematic arrangement. Camera tripods are a familiar example: Adjusting one leg affects tip, tilt, and height.
Hexapods were first used in commercial flight simulators because of their ability to move in all six degrees of freedom precisely and quickly, while also placing the center of rotation — the pivot point — at any position inside or outside the hexapod structure. “The versatility and ability to produce motion similar to that of the human hand is also spreading hexapod use in the medical field,” notes Vorndran.
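The coupled motion Vorndran describes is easy to see in a hexapod's inverse kinematics: every leg length depends on all six pose coordinates at once. Below is a minimal sketch, assuming an idealized geometry with joints arranged on two circles (illustrative only — not any particular PI product).

```python
import numpy as np

def rotation(roll, pitch, yaw):
    """Z-Y-X Euler rotation matrix (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def hexapod_leg_lengths(base_pts, plat_pts, xyz, rpy):
    """Inverse kinematics: the six leg lengths for a desired platform pose.
    base_pts, plat_pts: (6, 3) joint locations; xyz: translation; rpy: radians."""
    R = rotation(*rpy)
    # Each leg spans from its base joint to the transformed platform joint.
    tips = (R @ np.asarray(plat_pts).T).T + np.asarray(xyz)
    return np.linalg.norm(tips - np.asarray(base_pts), axis=1)

# Assumed example geometry: joints on circles of radius 0.5 m (base), 0.3 m (platform).
ang = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.c_[0.5 * np.cos(ang), 0.5 * np.sin(ang), np.zeros(6)]
plat = np.c_[0.3 * np.cos(ang), 0.3 * np.sin(ang), np.zeros(6)]
lengths = hexapod_leg_lengths(base, plat, xyz=[0, 0, 0.4], rpy=[0, 0.05, 0])
```

Changing a single pose coordinate (here, a 0.05-rad pitch) alters all six leg lengths simultaneously — the defining property of parallel kinematics, and the reason each actuator affects every axis.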
Sensors for smarts
No matter the mechanical linkages used, the latest frontier in robotics is upfront programming — often with teaching by sensors. One such system gaining momentum is based on 3D vision. Not just for blockbuster movies, 3D mapping can speed installation of robotic workspaces and increase accuracy. However, some of these systems average $50,000 per setup, which can be prohibitively expensive, and require calibration.
One new 3D software package allows engineers to use lower-cost cameras — for example, those sold as webcams — to make robots automatically tailor movements to an application: Spatial Vision, 3D vision software developed by Universal Robotics from Neocortex, a machine-learning program created at NASA and Vanderbilt University. “Spatial Vision software will allow us to set a new price-performance point,” says Roger Christian of Motoman Inc., Dayton, Ohio.
Until now, only open-loop gain scheduling was reliable enough for robotic control; earlier adaptive-control types earned a bad reputation for inconsistent motion. Spatial Vision software captures data four to five times a second, giving robots the realtime input necessary to react to physical environments.
The sensor-software design has up to millimeter accuracy — suitable for discrete conveyor-based packaging and handling. How does it work? “Point two cameras at one workspace, and the software aligns their images,” explains Hob Wubbena of Universal Robotics. “First, two photos are taken of the active work area. Then, the software reconciles them for a 3D workspace representation.” From there, objects can be recognized using common algorithms, or, in the case of the robot, a red diode is placed on the robot arm to identify a specific XYZ location for tracking.
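At its core, the two-camera reconstruction Wubbena outlines reduces to triangulation: once the images are aligned, each matched pixel pair fixes one 3D point. Here is a minimal sketch using the standard direct-linear-transform method with idealized pinhole cameras — illustrative only, not Universal's actual algorithm; the camera matrices and 0.2-m baseline are assumed values.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel coordinates in two calibrated
    cameras (3x4 projection matrices) via linear least squares (DLT)."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two identical cameras, the second offset 0.2 m along X (a stereo baseline).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

point = np.array([0.05, -0.03, 1.0])   # e.g., the red diode on the robot arm
project = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
```

With noise-free pixel coordinates the recovered point matches the original; real systems add matching error and lens distortion, which is where calibration and software reconciliation earn their keep.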
ACS Motion Control, Eden Prairie, Minn., makes fully integrated machine controls. “Integrated methodology enables true-gantry servo algorithms, with decoupling of center of gravity and yaw — plus Cartesian cross-axis compensation and adaptive multi-axis control,” says Jason Goerges, engineer at ACS.
In comparison, typical network-based control requires costly intelligence at the controller for motion programs, profile generation, and user interfacing … and at each drive node for servo loops, commutation, and additional HMIs. “Plus, network drive nodes don't usually have realtime access to servo information of one another, so they limit multi-axis performance,” adds Goerges.
The dedicated ACSPL+ motion language that runs on the ACS MC4U controller (which also supports five IEC-61131-3 languages on a virtual PLC) has a simple programming environment, even for vector moves, camming, and hexapod, delta, and SCARA robotics.
Network-based solutions with only a PLC or PLC-like software environment have limited capabilities for programming multi-axis motion: “Many motion paths are very difficult or nearly impossible to implement using IEC61131-3 function blocks,” says Cameron Sheikholeslami, control engineer at ACS.
Standardization goes global
To make use of robotics easier for engineers, some manufacturers are building to universal standards. The latest from DENSO Robotics, both maker and user of small assembly robots, is a VS-Series six-axis articulated robot. (Maximum moment of inertia at the last joint is 0.045 kg·m² — twice that of comparable robots, to allow for more flexible end effectors.) The arms are also ANSI and CE compliant for global deployment, and some are UL listed for the U.S. and Canada. “Global packaging manufacturers are increasingly competitive, which has spawned new metric adherence in production and packaging environments,” confirms John Dulchinos, president and CEO of fellow robot manufacturer Adept Technology Inc.
In another effort to make robotics programming more standardized, DENSO Robotics is collaborating with National Instruments Corp., Austin, and integrating NI software and hardware into its arms. The new ImagingLab Robotics Library is from NI Alliance Partner ImagingLab, located in Lodi, Italy. It communicates directly with DENSO controllers to command and control their robotic arms through LabVIEW software.
Says Dylan Jones, principal scientist at Genzyme Corp., Cambridge, Mass.: “We used LabVIEW to integrate a VS-6577 robot with spectral analyzers into an automated analytical test station without having to learn a robotics programming language.” The ImagingLab library is an off-the-shelf program, and Genzyme estimates that the speedier test-station arms will boost analytical throughput tenfold.
National Instruments sells other robotics control tools: New LabVIEW Robotics 2009 software is one example. It can import C/C++, .m files, and VHDL, and communicate with most sensors and actuators through built-in drivers — so engineers can focus on their own intelligence. The $2,000 software also includes algorithms for motion functions: Engineers typically load the programming (plus that from processing, third-party, and prebuilt robot platforms) onto embedded and FPGA hardware.
The software's strength is its unifying environment: “When building a new robot, one must typically start from scratch. With no software standard, there is little opportunity for code reuse or sharing,” explains Dr. Dave Barrett, professor at Olin College and former V.P. of engineering at iRobot Corp., Bedford, Mass. “We need supported industrial software to build autonomous mobile robots that can sense, think, and act. I have spent 15 years trying to come up with the best robotics programming language … LabVIEW accomplishes that.” Built-in modules provide obstacle avoidance, inverse kinematics, and search algorithms to help robots plan optimal paths.
Signal + power transmission
Indeed, it's easy enough to feed power (electrical and mechanical) through moving machines with fixed parts. But full 360° rotation is often used in robot bases — as in work-cell robots and end-effector grip mechanisms — and in 360° cameras for remote robot operation in hazardous environments. How is power (and data) supplied through these dynamic robot linkages? The answer is slip rings, installed to connect rotating machine segments to ones that remain stationary.
“Not all robotics use slip rings,” explains Steve Black, a business development manager for Commercial Slip Rings, Moog Components Group, Blacksburg, Va. “It's true that many robot applications work in a limited area, and don't need full rotation; here, robots are programmed to limit joint rotation. However, many applications — industrial, marine, and defense — are faster and more effective if robots can fully rotate without restriction. Here, slip rings provide smoothest operation — for more than 100 million maintenance-free revolutions.”
The other option is cable wraps. These can twist and tangle, and require continual maintenance. In addition, cables can introduce interference into data, compromising its integrity. That said, there are cases in which cables must be used instead of slip rings. The latter's higher cost is one consideration. “Often the increase in performance with slip rings offsets additional cost,” says Black.
The Moog Components Group makes slip rings in many configurations. For example, the AC7036 combines two in one unit for multi-circuit combinations. An outer capsule slip ring has up to 24 power circuits rated at 15 A, and an inner slip ring has 2, 5, 10 A, or coax signal circuits. The AC7036 slip ring is 2.9 to 6.5 in. long with a 3.1 in. diameter, and runs to 150 rpm.
The ultimate challenge: Wafer handling
Modern semiconductor wafers can hold thousands of electronic chips. Measuring about 300 mm in diameter, these brittle and fragile wafers are moved rapidly around fabrication facilities with wafer-handling robots. These robots are set up the way most other robots are trained — manually, through a teach-and-playback cycle. Here, a robot is moved into a pickup position that is literally eyeballed by a technician; when the position looks acceptable and passes tests, it's saved. The same process is used to set wafer dropoff and other positions; then the robot's controller, motors, and encoders move it to those points during operation.
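The teach-and-playback cycle described above can be modeled in a few lines — a hypothetical pendant interface for illustration, not any vendor's API: positions are saved under names during teaching, then replayed in sequence by the controller.

```python
from dataclasses import dataclass, field

@dataclass
class TeachPendant:
    """Minimal teach-and-playback model: a technician jogs the robot to a
    pose, saves it under a name, and the controller later replays the list."""
    positions: dict = field(default_factory=dict)

    def teach(self, name, joints):
        # Save the eyeballed pose once it looks acceptable and passes tests.
        self.positions[name] = tuple(joints)

    def playback(self, sequence):
        # In operation, the controller drives motors/encoders to each saved point.
        return [self.positions[name] for name in sequence]

pendant = TeachPendant()
pendant.teach("pickup", (10.0, -35.2, 90.1, 0.0))    # assumed joint angles
pendant.teach("dropoff", (120.5, -20.0, 75.3, 0.0))
cycle = pendant.playback(["pickup", "dropoff", "pickup"])
```

The weakness is plain in the model: whatever error the technician bakes into `teach` is replayed verbatim, forever, on every cycle.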
The problem with this manual approach is that often, a single wafer might be worth thousands of dollars — so any broken wafers are costly … and hand-teaching robots is an inexact approach that leaves wafers vulnerable to mishandling. For starters, humans can only discern distances down to half a millimeter or so. Does this mean that wafers are moved slowly during manufacture, to protect them from imperfectly programmed handling? Just the opposite. In fact, here's where the application gets really interesting: Semiconductor tools themselves are expensive — $5 million or more. For this reason, manufacturers run them as quickly as possible, to get the most from these tools.
“Moog saw that traditional setup techniques were lacking for wafer handling,” says Paul Sagues, staff design engineer with Moog Industrial Group, East Aurora, N.Y. “For this reason, we developed a system to allow robots to calibrate themselves.” Moog had experience in designing controls for the industry's chemical-mechanical planarization (CMP) tools: One earlier Moog design controls CMP polishers with 20 coordinated servo axes to pick up wafers, polish them to within a few atoms of flatness, and then replace them.
“For the new challenge, we developed a method and apparatus called Autocalibration Technology, which loads on our higher-end motion controllers (specifically, the BX-300) and works with a matrix of sensors and touch sensing,” explains Sagues. Autocalibration Technology leverages laser sensors and mappers, commonly found on semiconductor robots and in tools. When the robot or wafer moves past a sensor, the controller immediately captures encoder position data as a calibration point. Touch sensing demands even better control: the robot controller constantly measures motor torque, and when the robot touches an object, the resulting torque change must be discerned from normal friction-induced fluctuations.
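The torque-based touch sensing Sagues describes can be sketched as a threshold test against a running friction baseline — a toy model for illustration, not Moog's implementation; the window size and sigma band are assumed values.

```python
import math
import statistics
from collections import deque

def detect_contact(torques, window=20, sigma_band=4.0):
    """Return the sample index where motor torque departs from its recent
    friction baseline by more than sigma_band standard deviations, or None."""
    history = deque(maxlen=window)
    for i, t in enumerate(torques):
        if len(history) == window:
            mean = statistics.fmean(history)
            sd = statistics.pstdev(history) or 1e-9   # guard a perfectly flat baseline
            if abs(t - mean) > sigma_band * sd:
                return i   # torque jump exceeds normal friction-induced noise
        history.append(t)
    return None

# Simulated trace: small friction-induced ripple, then a jump at contact.
trace = [1.0 + 0.02 * math.sin(0.5 * i) for i in range(50)] + [1.6] * 5
hit = detect_contact(trace)   # flags the first post-contact sample
```

The hard part in practice is exactly what the band models: friction ripple sets the noise floor, so the contact threshold must sit far enough above it to avoid false triggers yet low enough to stop before a wafer is damaged.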
Moog first built a demonstration prototype and presented it to OEMs. Initially, there was some resistance to adopt the system. “Semiconductor engineers are by necessity more conservative and hesitant to change, partly because semiconductor processes are so exacting, and prone to errors if methods are changed,” explains Sunil Murthy of Moog. Because engineers were accustomed to eyeballing robotic motion setups with reasonable success, there were also doubts as to the value of automating setup. “However, our robot calibration system can set points to within 50 µm in a few minutes — a great increase in accuracy over the hours-long old approach,” adds Sagues. “Setup accuracy is most important in vacuum environments, where eyeballing robot positions is done at atmospheric pressure, and prone to change when the tool is pumped down to vacuum.”
For fun and experimentation
Robotics lends itself to inspiring creativity through experimentation and prototyping. “For that, we have intelligently designed robot kits and components, plus solutions to common problems — and tons of information on doing hobby robotics,” says Jim Frye, owner of Lynxmotion Inc., Pekin, Ill. One Lynxmotion product is a $300 robot called the Bipedal Robotic Articulating Transport; the set is compatible with the company's Servo Erector Set. Connect at lynxmotion.com.
For more information
Moog Components Group
Adept Technology Inc.
ACS Motion Control
Visit for a free motion simulator.