The camera mounts atop the miniscooter prototype vehicle. Systems are tweaked on the scooter and will then be mounted on an ATV for the International Robotic Racing Federation race in October.
The VC2028 camera features a 1,200-Mips processor with 16 Mbytes of DRAM and 2 Mbytes of Flash memory.
A team of students at the University of Maryland is using a smart camera from Vision Components, Hudson, N.H. (www.vision-components.com), as the brain and eyes of its robotic vehicle. Bumped from the Darpa Grand Challenge in March, the team is now setting its sights on the newly founded International Robotic Racing Federation race in October.
The VC2028 smart camera measures the speed at which objects appear to move past the vehicle and uses that motion data to build a 3D obstacle map. A DSP then processes the 3D data to find the best path.
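The article doesn't spell out the math, but depth-from-motion schemes like this commonly exploit the fact that, for sideways translation, nearby objects sweep across the image faster than distant ones. A minimal sketch of that relationship, assuming a pinhole model with the flow rate, vehicle speed, and focal length all known (the function name and parameters are illustrative, not the team's actual code):

```python
def depth_from_flow(flow_px_per_s, vehicle_speed_m_s, focal_px):
    """Estimate distance Z to a tracked point from its apparent image motion.

    For translation perpendicular to the optical axis, the pinhole model
    gives image flow u = f * V / Z, so Z = f * V / u.
    - flow_px_per_s: apparent motion of the point in the image, pixels/s
    - vehicle_speed_m_s: vehicle speed, m/s (e.g. from odometry)
    - focal_px: camera focal length expressed in pixels
    """
    if flow_px_per_s <= 0:
        raise ValueError("flow must be positive for a point moving past the camera")
    return focal_px * vehicle_speed_m_s / flow_px_per_s

# A point drifting at 100 px/s while the vehicle moves 2 m/s with a
# 500 px focal length works out to roughly 10 m away.
print(depth_from_flow(100.0, 2.0, 500.0))
```

In practice the accelerometer data the team mentions would correct for vehicle pitch and vibration, which this idealized model ignores.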
The CCD captures images and an algorithm tracks motion between frames. Combining triangulation with accelerometer data creates grid points that fix distances with a high level of statistical validity. An overall 3D terrain map is inferred by interpolating between these grid points, and the map is then scanned to find a path. That path goes in real time to a DOS-based central computer running QBasic, which in turn sends control signals to an Acces I/O amplifier unit at 56 kbytes/sec.
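The interpolate-then-scan step can be pictured with a toy version: fill a regular grid from sparse height samples, then sweep the map for the clearest heading. This is a hypothetical sketch, not the team's algorithm; the nearest-sample fill stands in for whatever interpolation they actually use, and "lowest worst obstacle per column" stands in for their path scoring:

```python
def build_terrain_map(samples, rows, cols):
    """Infer a dense rows x cols terrain grid from sparse (row, col, height)
    samples by assigning each cell the height of its nearest sample --
    a crude stand-in for proper interpolation between grid points."""
    grid = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nearest = min(samples, key=lambda s: (s[0] - r) ** 2 + (s[1] - c) ** 2)
            grid[r][c] = nearest[2]
    return grid

def best_heading(grid):
    """Scan the map column by column (each column ~ one straight-ahead
    heading) and return the column whose tallest obstacle is lowest."""
    cols = len(grid[0])
    worst_per_col = [max(row[c] for row in grid) for c in range(cols)]
    return worst_per_col.index(min(worst_per_col))

# Three sparse measurements: a 1.0 m obstacle near the left, low ground
# on the right, a 0.5 m bump in the middle distance.
samples = [(0, 0, 1.0), (0, 2, 0.1), (2, 1, 0.5)]
grid = build_terrain_map(samples, rows=3, cols=3)
print(best_heading(grid))  # the right-hand column is clearest
```

A real planner would also weight path curvature and vehicle dynamics, but the shape of the computation — sparse points in, dense map, scan for a path — matches the pipeline described above.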
The team initially considered connecting several delicate and expensive CCD cameras, radar, or ladar systems to bulky high-end computers. But reliability concerns, centered on the need for cooling and vibration isolation, steered them toward the VC2028. It needs no cooling, even in the desert, and its packaging protects against moisture and shock damage.
A modified electric scooter now serves as a test vehicle for algorithms, controls, and steering. So far, the vehicle has been tested with autonomous path-following and power-balancing algorithms. These systems will be pulled off the prototype scooter and installed in an ATV for the race.