Controls that use augmented reality could help individuals with profound motor impairments operate humanoid robots to feed themselves and perform routine care tasks such as scratching an itch or applying skin lotion, according to researchers at the Georgia Institute of Technology. The web-based interface displays a “robot’s-eye view” of the surroundings, helping users interact with the world through the machine and making sophisticated robots useful to people with no experience operating them.
The researchers looked into how “robotic body surrogates” that perform tasks similar to those of humans could improve the quality of life for people with disabilities. “Our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates,” says Phillip Grice, a recent Georgia Institute of Technology Ph.D. graduate. “We have taken the first step toward making it possible for someone to purchase an appropriate robot, have it in their home, and benefit from it.”
The researchers used a PR2 mobile manipulator, made by the robot manufacturer Willow Garage, for the studies. The wheeled robot has 20 degrees of freedom, including two arms and a “head,” and can manipulate objects such as water bottles, washcloths, hairbrushes, and even an electric shaver.
Showing its capabilities as a body surrogate, a PR2 controlled remotely by an individual with profound motor deficits picks up a cup in a research laboratory at the Georgia Institute of Technology. (Credit: Phillip Grice, Georgia Tech)
In their first study, each of 15 participants with severe motor impairments was given a PR2 to use. Participants learned to control the robot remotely, using their own assistive equipment to operate a mouse cursor and perform a personal care task. Eighty percent of the participants were able to get the robot to pick up a water bottle and bring it to the mouth of a mannequin.
The robots remain limited compared with able-bodied persons, but participants performed tasks effectively and, on a clinical evaluation of their ability to manipulate objects, improved over what they could do without the robot.
In the second study, researchers provided the PR2 and the new controls to Henry Evans, a California man who has been helping Georgia Tech researchers explore assistive robots since 2011. Evans, a stroke survivor with very limited control of his body, tested the robot in his home for seven days. He not only completed tasks, but also devised novel uses that combined the operation of both robot arms at the same time, such as using one arm to hold a washcloth and the other to wield a brush.
Robotic body surrogates can help people with profound motor deficits interact with the world. Here, Henry Evans, a California man who helped Georgia Tech researchers with improvements to a web-based interface, uses the robot to shave himself. (Courtesy: Henry Clever/Phillip Grice, Georgia Tech)
“The system was very liberating to me, in that it let me independently manipulate my environment for the first time since my stroke,” says Evans. The researchers were pleased Evans developed new uses for the robot, combining motion of the two arms in ways they had not expected.
“When we gave Henry free access to the robot for a week, he found new opportunities for using it that we had not anticipated,” says Grice. “This is important because a lot of the assistive technology available today is designed for very specific purposes. What Henry has shown is that this system is powerful in providing assistance and empowering users. The opportunities for this are potentially very broad.”
The interface allowed Evans to care for himself in bed over an extended period of time. “The most helpful aspect of the interface system was that I could operate the robot completely independently, with only small head movements using an extremely intuitive graphical user interface,” Evans said.
The image shows the environment around the robot as seen through the PR2’s cameras. Clicking the yellow disc allows users to control the arm. (Courtesy: Phillip Grice, Georgia Tech)
The web-based interface shows users what the world looks like from cameras in the robot’s head. Clickable controls overlaid on the view let users move the robot through a home or other environment and control the robot’s hands and arms. When users move the robot’s head, for instance, the screen displays the mouse cursor as a pair of eyeballs to show where the robot will look when the user clicks. Clicking on a disc surrounding the robotic hands lets users select a motion. While “driving” the robot around a room, lines following the cursor on the interface indicate the direction it will travel.
Building the interface around the actions of a single-button mouse lets people with a range of disabilities use the interface without lengthy training.
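The single-button interaction model described above can be sketched in code. The sketch below is illustrative only: the names (`ClickTarget`, `RobotCommand`, `dispatch_click`) and the command strings are assumptions, not part of the actual Georgia Tech interface, which runs in a web browser and talks to the PR2’s own control software.

```python
# Minimal sketch: translating single-button mouse clicks on the camera
# view into robot commands, as the article's interface description suggests.
# All names and command strings here are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto

class ClickTarget(Enum):
    HEAD_VIEW = auto()    # click in the camera view: point the head (eyeball cursor)
    HAND_DISC = auto()    # click the disc around a gripper: select an arm motion
    DRIVE_PLANE = auto()  # click the floor plane: drive the base along the shown line

@dataclass
class RobotCommand:
    action: str
    x: float  # click position in the camera image, normalized to 0..1
    y: float

def dispatch_click(target: ClickTarget, x: float, y: float) -> RobotCommand:
    """Map one mouse click to exactly one robot command."""
    if target is ClickTarget.HEAD_VIEW:
        return RobotCommand("look_at", x, y)
    if target is ClickTarget.HAND_DISC:
        return RobotCommand("move_arm", x, y)
    return RobotCommand("drive_toward", x, y)
```

Because every interaction reduces to positioning a cursor and clicking once, the same mapping works regardless of which assistive device (head tracker, eye gaze, switch scanner) generates the click.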
“Having an interface that individuals with a wide range of physical impairments can operate means we can provide access to a broad range of people, a form of universal design,” Grice says. “Because of its capability, this is a complex system, so the challenge was to make it accessible to individuals with limited control of their own bodies.”
The PR2 is a research and development robot that has two arms and a head on a wheeled base that allows it to move around the environment. (Courtesy: Phillip Grice, Georgia Tech)
Although the results of the study demonstrated what the researchers had set out to do, they agree improvements can be made. The existing controls are slow, and mistakes made by users can create significant setbacks.
The cost and size of the PR2 would also need to be significantly reduced for it to be commercially viable, Evans suggests. Charlie Kemp, the Georgia Tech professor who directed the research, says these studies point the way to a new type of assistive technology.