Product Design Engineering

Drones learn to observe environment and steer clear

About a year and a half ago, maker-tinkerer and technical writer Paul Wallich created a buzz when he wrote about the drone he built to walk his son to the bus stop each morning. The unmanned aerial vehicle (UAV) took the form of a quadrotor (four-rotor) copter, and did the job ... mostly — but Wallich admitted to trouble getting the UAV to power through windy days and avoid collisions.

Well, drone technology advances by leaps and bounds. Now researchers think they have a new way to make these UAVs and other vehicles more nimble in the face of environmental variables. The programming uses neuromorphic sensors (which are triggered by sudden events) instead of the standard inertia-measuring sensors (such as accelerometers and gyroscopes) to track motion.

Even autonomous vehicles with cameras and controls need time to interpret camera data about the environment. Here, state-estimation algorithms first identify image features (usually boundaries between objects, detected through shade and color differences) and then select a subset unlikely to change with new perspectives. A dozen or so msec later, the cameras fire again and the algorithm attempts to match current features to previous ones. Once the algorithm matches features, it calculates the vehicle’s change in position. The sampling takes 50 to 250 msec depending on how dramatically the environment changes, and the whole control cycle to correct course takes 0.2 sec or more — not fast enough to react to sudden changes in a vehicle’s surroundings.
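The frame-based loop described above can be sketched in a few lines of toy Python. This is purely illustrative — the function names, the crude 1-D "edge" feature model, and the matching rule are assumptions for the sake of the example, not the actual state-estimation algorithm.

```python
# Toy sketch of frame-based state estimation: detect features in two
# consecutive frames, match them, and average the offsets to estimate
# how far the camera moved between frames.

def detect_features(frame):
    """Return positions where brightness jumps sharply (a crude edge detector)."""
    return [i for i in range(1, len(frame))
            if abs(frame[i] - frame[i - 1]) > 50]

def estimate_shift(prev_frame, curr_frame):
    """Pair up features in order and average their offsets."""
    prev, curr = detect_features(prev_frame), detect_features(curr_frame)
    if not prev or not curr:
        return 0.0
    offsets = [c - p for p, c in zip(prev, curr)]
    return sum(offsets) / len(offsets)

# A 1-D "image": dark background with one bright object, shifted by 3 pixels.
frame_a = [0] * 10 + [200] * 5 + [0] * 10
frame_b = [0] * 13 + [200] * 5 + [0] * 7

print(estimate_shift(frame_a, frame_b))  # → 3.0
```

The point of the sketch is the bottleneck, not the math: each estimate has to wait for a whole new frame before any matching can begin, which is where the tens-of-msec latency comes from.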

Neuromorphic-sensor-based designs give autonomous vehicles ultra-fast reaction times — something current models lack.

To address this limitation, researcher Andrea Censi of MIT’s Laboratory for Information and Decision Systems and others have developed a way to supplement cameras with a neuromorphic sensor that takes measurements a million times a second.

Censi and colleagues presented the new algorithm at the International Conference on Robotics and Automation earlier this year. Vehicles running the algorithm can update location every 0.001 sec to make nimble maneuvers. “Other cameras have sensors and a clock, so with a 30-frames-per-sec camera, the clock freezes all the values every 33 msec,” says Censi — and then values are read. In contrast, neuromorphic sensors let each pixel act as an independent sensor. “When a change in luminance is larger than a threshold, the pixel … communicates this information as an event and then waits until it sees another change.”
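The pixel behavior Censi describes can be sketched as a tiny state machine: fire an event whenever luminance moves past a threshold, then re-arm at the new level. The class name, threshold value, and event format here are illustrative assumptions, not the actual sensor interface.

```python
# Hedged sketch of an event-based pixel: it stays silent until luminance
# changes by more than a threshold, then emits a timestamped event with
# a polarity (+1 brighter, -1 darker) and re-arms at the new level.

class EventPixel:
    def __init__(self, threshold=15):
        self.threshold = threshold
        self.last = None  # luminance at the last event

    def observe(self, luminance, t_us):
        """Return an event tuple (t_us, polarity), or None if the change is too small."""
        if self.last is None:
            self.last = luminance
            return None
        delta = luminance - self.last
        if abs(delta) > self.threshold:
            self.last = luminance
            return (t_us, 1 if delta > 0 else -1)
        return None

pixel = EventPixel(threshold=15)
readings = [(0, 100), (1, 105), (2, 130), (3, 128), (4, 90)]
events = [e for t, lum in readings
          if (e := pixel.observe(lum, t)) is not None]
print(events)  # → [(2, 1), (4, -1)]
```

Note how the small flicker at t = 1 and t = 3 produces no traffic at all — only genuine changes generate data, which is what lets the sensor report on a microsecond timescale without drowning the processor.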

The algorithm registers changes in luminance as often as every 1 µsec and supplements camera data with these events, so it doesn’t need to identify features. Comparing a situation before and after a change is easier, because even dynamic environments don’t change much over a µsec. The algorithm doesn’t match all the features in the previous and current situation at once, either — instead it generates hypotheses about how far the vehicle moved. Then over time, the algorithm uses a statistical construct called a Bingham distribution to pick the hypothesis that’s confirmed most often and track vehicle orientation more efficiently than other approaches.
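The hypothesis-voting idea can be illustrated with a deliberately simplified toy: each incoming event "votes" for the candidate motions it is consistent with, and the estimate is the hypothesis confirmed most often. (The real algorithm weighs orientation hypotheses with a Bingham distribution over rotations; this sketch just counts votes over candidate 1-D shifts, and all names in it are made up for illustration.)

```python
# Toy hypothesis voting: for each candidate shift, count how many "after"
# events line up with some "before" event under that shift, then pick the
# candidate with the most confirmations.

from collections import Counter

def vote(events_before, events_after, candidates=range(-5, 6)):
    """Return a Counter mapping each candidate shift to its vote count."""
    votes = Counter()
    before = set(events_before)
    for shift in candidates:
        votes[shift] = sum((x - shift) in before for x in events_after)
    return votes

def best_hypothesis(votes):
    """The shift confirmed most often is the motion estimate."""
    return max(votes, key=votes.get)

before = [4, 9, 17]      # event positions at time t
after = [6, 11, 19]      # same scene a moment later: everything moved +2
votes = vote(before, after)
print(best_hypothesis(votes))  # → 2
```

Because each event only updates vote tallies rather than triggering a full feature-matching pass, the estimate can be refined every time an event arrives — which is how the algorithm sustains its 0.001-sec update rate.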

Recent experiments with a small vehicle fitted with a camera and event-based sensor show the algorithm is as accurate as existing state-estimation algorithms. Censi says with that done, the next step is to develop controls that decide what to do based on state estimates.

What's most interesting is that the algorithm is said to work particularly well for making quadrotors with only onboard perception and control nimbler. So maybe it's time for Wallich to perfect his son-walking UAV at last.
