One book that details such motion-vision technologies, Motion Vision: Design of Compact Motion Sensing Solutions for Navigation of Autonomous Systems, is now available in the U.S. Published by the British Institution of Engineering and Technology, Motion Vision is written for designers working in controls engineering who are looking to incorporate machine vision into their applications.
The book outlines the problem of motion estimation from biological, algorithmic, and digital perspectives. (Check out a preview of the first section on Google Books here.) It goes on to describe an algorithm that fits the motion-processing model and the hardware and software constraints. This algorithm is based on the optical flow constraint equation and introduces range information to resolve what's called the depth-velocity ambiguity, a resolution that's key to autonomous navigation. This section can be heavy going, but for those who are interested, there's copious information online about the constraint equation and (thanks, Wikipedia) optical flow in general.
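For a feel of what the constraint equation says: for image brightness I(x, y, t), it states that Ix·u + Iy·v + It = 0 at each pixel, where (u, v) is the flow and Ix, Iy, It are the spatial and temporal gradients. One equation in two unknowns can't be solved per pixel, so a common trick (this is the classic Lucas-Kanade least-squares approach on synthetic data, a sketch and not the book's FPGA implementation) is to stack the constraint over a small patch:

```python
import numpy as np

def flow_from_patch(Ix, Iy, It):
    """Estimate patch flow (u, v) from per-pixel gradients by solving
    Ix*u + Iy*v = -It in the least-squares sense over the patch."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # N x 2 gradient matrix
    b = -It.ravel()                                  # N temporal terms
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic 5x5 patch: choose varying spatial gradients so the system
# is well-posed, and build It from a known true flow (u, v) = (1.0, 0.5).
xs, ys = np.meshgrid(np.arange(1.0, 6.0), np.arange(1.0, 6.0))
Ix, Iy = xs, ys
u_true, v_true = 1.0, 0.5
It = -(Ix * u_true + Iy * v_true)  # constraint holds exactly by construction

u, v = flow_from_patch(Ix, Iy, It)
print(round(u, 3), round(v, 3))  # → 1.0 0.5
```

Note that flow alone still can't distinguish a small, slow, nearby object from a large, fast, distant one; that's the depth-velocity ambiguity the book's range input is there to resolve.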
Motion Vision also explains how to implement algorithms in digital hardware, including details related to initial motion-processing models, the hardware platform, and the system's global functional structure.
In Chapter 5, the book gives a thorough review of motion estimation for collision avoidance through the tracking of position and approximate velocity. It describes a few technologies already being put to work, in alternative forms, in the Google car, Caterpillar's self-driving haulers, and Komatsu's autonomous trucks.
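The core arithmetic behind that kind of collision avoidance is simple once position and velocity are tracked: divide range by closing speed to get a time-to-collision. A toy illustration (mine, not the book's method):

```python
def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact given range (m) and closing speed (m/s).
    A non-positive closing speed means the object is static or
    receding, so there is no collision course."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

# An obstacle 30 m ahead, closing at 12 m/s:
ttc = time_to_collision(30.0, 12.0)
print(ttc)  # → 2.5
```

A planner would compare that figure against a braking threshold; the hard part, which the chapter addresses, is producing reliable range and velocity estimates in the first place.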
Motion Vision ends with a 100-page appendix that details all the circuitry and software of the FPGA-based vision and control system the authors use as a reference example. The appendix also details the software design — which is modular — so engineers reading the book can actually reuse pieces of it in hardware and I/O modules of their own specification.