
New Computer Model Helps Robots Navigate Like Humans

A new planning model from researchers at MIT helps robots navigate unfamiliar environments through exploration, observation, and learned experience.

One of the hurdles of advanced robotic artificial intelligence is programming. If an engineer programs a robotic arm to reach out and grasp an item, the arm needs to know the item’s location and how much pressure to apply. If the engineer later wants to repurpose that arm to grasp a different item, it cannot simply reuse the same subroutine; the engineer must reprogram it for the new item. This is true of most robotic programming. Soft grippers, force-feedback sensors, and machine vision have eased problems of interaction and movement, but reprogramming remains an essential step.

MIT researchers have developed a new navigation system that allows robots to navigate their environment the way human beings do. The system enables a robot to explore its environment, observe other agents, and apply lessons learned from previous trials to its current objective. Popular motion-planning algorithms build a tree of possible decisions until the best path has been determined: step-by-step plans that find the best route by process of elimination. These algorithms, however, are rarely learned; they are pre-programmed steps drawn from a library available to the robot.

The new planner programming model from MIT looks to help robots create more efficient navigational plans by combining traditional tree-based decisions and neural networks. (Credit: PhonlamaiPhoto/iStock)

In the MIT researchers’ paper, “Deep sequential models for sampling-based planning,” they demonstrate the advantages of their model in two settings: how to navigate a room with traps and narrow passages, and how to navigate an area while avoiding collisions.

“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents.

“The thousandth time they go through the same crowd is as complicated as the first time,” Barbu continues. “They’re always exploring, rarely observing, and never using what’s happened in the past.”

The model is a combination of a planning algorithm with a neural network that learns to recognize paths leading to the best outcome. A real-world application of this model would be autonomous vehicles navigating intersections and merging into traffic. The researchers are pursuing such applications in partnership with the Toyota-CSAIL Joint Research Center.

For robots and autonomous vehicles, navigating obstacles such as roundabouts will prove difficult to achieve without collisions. The system from MIT looks to help AI navigate these layouts successfully. (Credit: wissanu01/iStock)

In a traditional motion-planning system, the robot explores the environment by expanding a tree of decisions that covers all possible choices in a given space. The robot then searches through the tree to determine the best way to reach its goal, such as a door.
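The tree-expansion idea can be sketched in a few lines of Python. This is an illustrative toy RRT, not the researchers' code: the empty 10-by-10 workspace, step size, goal radius, and iteration budget are all invented for the example. Each iteration samples a random point, extends the nearest tree node one step toward it, and stops when the tree reaches the goal region.

```python
import math
import random

def rrt(start, goal, step=0.5, goal_radius=0.5, max_iters=5000, seed=0):
    """Grow a Rapidly-exploring Random Tree from start toward goal
    in an empty 10x10 workspace (no obstacles, for brevity)."""
    rng = random.Random(seed)
    nodes = [start]            # tree vertices
    parent = {start: None}     # edges, child -> parent

    for _ in range(max_iters):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        # find the existing node nearest to the random sample
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # extend the tree one step toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_radius:
            # walk back up the tree to recover the path
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0))
```

In a real planner, the extension step would also reject moves that collide with obstacles; here the workspace is empty so every extension is accepted.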

The new model starts with a few pre-programmed examples to navigate different types of environments. The neural network learns what makes these examples successful by interpreting the current environment around the robot. This includes physical conditions like the walls, the actions of others in the environment, and the main goal. The model combines the exploratory behavior from earlier methods with the new learned method.

The underlying planner, known as RRT* (an asymptotically optimal variant of Rapidly-exploring Random Trees), was developed by MIT professors Sertac Karaman and Emilio Frazzoli. The neural network mirrors each step created by the search tree, predicting which extension will yield the best result and directing the robot’s actions based on that prediction. If the network doesn’t have high confidence in its decision, it lets the robot explore the environment as a traditional motion-planning system would.
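One way to picture that confidence gate is the sketch below. It is purely illustrative: the candidate actions, the scoring interface, and the 0.8 threshold are assumptions for this example, not values from the paper. When the network's top score clears the threshold, the planner follows the learned prediction; otherwise it falls back to uniform random exploration, classic-RRT style.

```python
import random

def choose_extension(candidates, net_scores, threshold=0.8, rng=None):
    """Pick the next tree extension: trust the learned policy when it is
    confident, otherwise fall back to random exploration.

    candidates -- possible extension actions
    net_scores -- network's probability for each candidate
    threshold  -- minimum confidence to follow the network (assumed value)
    """
    rng = rng or random.Random(0)
    best = max(range(len(candidates)), key=lambda i: net_scores[i])
    if net_scores[best] >= threshold:
        return candidates[best]      # exploit the learned prediction
    return rng.choice(candidates)    # explore, as a traditional planner would

# Confident network: the top-scoring candidate is followed.
pick = choose_extension(["left", "straight", "right"], [0.05, 0.9, 0.05])

# Uncertain network: no score clears the threshold, so a random
# candidate is explored instead.
fallback = choose_extension(["left", "straight", "right"], [0.4, 0.3, 0.3])
```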

The other aspect of the research is how the robot handles other agents in its environment. In the simulation, the robot is placed among other agents circling an obstacle. The robot must navigate around these agents, avoid collisions, and reach the goal (such as an exit). One example of this is a roundabout. “Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu explains. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”
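A toy version of reasoning about other agents is sketched below. The model itself learns agents' future behavior; this example substitutes the simplest possible stand-in, a constant-velocity forecast, and every name and parameter here is invented for illustration.

```python
import math

def forecast(agents, steps, dt=0.1):
    """Predict other agents' future positions by assuming each keeps its
    current velocity -- a crude stand-in for a learned predictor.
    agents: list of ((x, y), (vx, vy)) position/velocity pairs."""
    return [[(x + vx * dt * k, y + vy * dt * k) for k in range(steps + 1)]
            for (x, y), (vx, vy) in agents]

def collides(waypoint, tracks, t_index, radius=0.5):
    """True if a planned waypoint comes within `radius` of any forecast
    agent position at the matching time step."""
    return any(math.dist(waypoint, tr[t_index]) < radius for tr in tracks)

# One agent at the origin moving right at 1 m/s, forecast for 1 second.
tracks = forecast([((0.0, 0.0), (1.0, 0.0))], steps=10)
```

A planner can then check each candidate waypoint against the forecast at the corresponding time step and prune branches that lead to a likely collision, which is the kind of early cutoff the researchers describe.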

The model is able to capture enough data about the future behavior of other agents to cut off the analysis process early and still make good decisions for navigating the given obstacle. Efficient planning is key for these types of systems. The system was programmed with only a few examples of roundabouts containing few cars, yet it was able to extrapolate to new navigation scenarios. Handling intersections and roundabouts is among the most challenging aspects of autonomous driving; systems like the one from MIT will help future vehicles navigate these difficult scenarios.
