Machine Design
3D Modeling Gets a Boost from Kinect

Authored by:
Leslie Gordon, [email protected]

Not all hackers are trying to steal credit-card information or personal data. Many of them, in fact, are helping society. They are developing open-source code for video-game interfaces such as Microsoft’s Kinect. The resulting applications are often innovative ideas the folks in Redmond, Wash., never imagined.

Kinect is designed to replace the game controller that normally plugs into the Xbox 360. It lets users play computer games such as volleyball using onscreen avatars they control by just moving their own bodies. The device’s cameras, sensors, and software let it detect movement, depth, and the shape and position of the human body.

But Kinect hackers have taken the device far beyond its gaming roots. Programmers such as computer scientist Oliver Kreylos from the University of California, Davis, have no interest in playing gesture-based video games. Instead of plugging a Kinect into an Xbox, he hooks it to a computer. The Kinect’s “eyes” are actually a pair of cameras: one detects depth and the other picks up color. Each uses a metal-oxide-semiconductor image sensor. Kreylos converts Kinect into a 3D camera by combining the depth and color-image streams and projecting them into 3D space. Data reconstructed this way look like real 3D objects inside the camera’s field of view. The result is what Kreylos calls a 3D holographic image.

Kreylos says he had been looking for inexpensive cameras like those in Kinect for visualizing scientific data. He contacted PrimeSense in Israel, which developed the original technology that later became Kinect. “Unfortunately, this happened shortly after PrimeSense had entered an exclusive agreement with Microsoft,” he says. “So although I could not purchase the cameras back then, I have known exactly what Kinect could do for the last two years or so.”

Kinect’s depth camera is paired with a near-infrared laser projector, which spreads a pattern of dots over the room and its occupants. The camera detects this pattern and, based on differences between the preset and observed patterns, calculates the distance to each visible dot. From this data, it builds a “depth image,” which contains a distance value for each pixel. This method is a variation of the well-known “structured light” approach, in which a scene’s 3D structure is recovered from distortions in a projected pattern. Color is added to the 3D model via RGB input from the image sensor.
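Once the depth image exists, projecting it into 3D space is standard pinhole-camera back-projection. The following sketch illustrates the idea; the intrinsic parameters are typical published Kinect-like values, not Kreylos’s actual calibration, and the function name is this article’s, not his.

```python
import numpy as np

def depth_to_points(depth, fx=594.2, fy=591.0, cx=339.3, cy=242.7):
    """Back-project a depth image (in meters) into 3D camera-space points.

    fx, fy are focal lengths in pixels; cx, cy is the principal point.
    These are illustrative Kinect-like intrinsics, not a real calibration.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs across columns, v down rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # One (x, y, z) triple per pixel.
    return np.stack([x, y, z], axis=-1)
```

A flat wall one meter away (a depth image of all ones) maps to a plane of points at z = 1, with the pixel nearest the principal point landing almost exactly on the optical axis.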

In hacking into Kinect, Kreylos wrote all his own code in C++, except for the sequence of control commands that goes to Kinect to initialize its cameras and start recording. “I had no access to these commands because I do not own an Xbox console,” he says. “So I used the sequence extracted by Hector Martin, the Spanish engineering student who won the Kinect hacking prize for being the first to create an open-source driver.”

“The term ‘holographic’ is actually technically incorrect when referring to my images,” admits Kreylos. “But I felt I had to use it to distinguish the true 3D video coming from Kinect from pseudo-3D stereoscopic videos such as those shown in ‘3D’ movies such as Avatar, or captured by ‘3D cameras.’ The difference is that pseudo-3D video can only be viewed from the point of view of the camera that originally recorded it, just like regular 2D video. True 3D video, on the other hand, can be viewed from any arbitrary viewpoint, even after it has been captured. It has this property in common with real holograms, hence I chose the moniker as a shorthand.”

Kreylos explains that James Cameron could not go into the editing room after shooting live-action scenes for Avatar and view those scenes from several points of view.

“Although this is possible with true 3D video, a 3D-video camera is still a camera and cannot see through or around solid objects. This means users would need several Kinects to get the full effect,” says Kreylos. “For example, because a 3D camera can only create a 3D model of what it sees, a 3D holographic model of me would only show one side of my body. We call this half-representation a ‘facade.’ Using several cameras would let technicians combine multiple facades into complete 3D objects, an ongoing area of my research.”
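Combining facades from several Kinects amounts to transforming each camera’s point cloud into one shared coordinate frame and merging the results. A minimal sketch of that merging step, assuming the camera-to-world poses have already been found by extrinsic calibration (the function name and pose representation are this sketch’s assumptions, not Kreylos’s code):

```python
import numpy as np

def merge_facades(clouds, poses):
    """Merge per-camera point clouds into one world-frame cloud.

    clouds: list of (N_i, 3) arrays, one per camera, in camera space.
    poses:  list of 4x4 camera-to-world transforms (from calibration).
    Returns a single (sum N_i, 3) array in the shared world frame.
    """
    merged = []
    for pts, T in zip(clouds, poses):
        # Append a homogeneous coordinate, apply the rigid transform.
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homo @ T.T)[:, :3])
    return np.vstack(merged)
```

With two cameras seeing the same person from opposite sides, each contributes its own facade; after transformation the two half-representations line up into one fuller 3D object.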

Users can save 3D models, either as snapshots or animations, and edit them freely, according to Kreylos. “That’s because the models are just a bunch of triangles in 3D space, exactly the same format used by typical 3D modeling and animation software,” he says. “CAD software is somewhat different, but the models could be converted to a format for editing. In fact, it’s likely that Kinect, or similar 3D cameras, will soon be used as low-cost 3D scanners for engineers and hobbyists. There is still a good amount of software to be developed, but it should happen.”
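Because the saved models are just triangles in 3D space, handing them to modeling software is a matter of writing a standard mesh format. As a rough illustration, here is a minimal writer for Wavefront OBJ, one common interchange format most 3D packages can import; the function and format choice are this sketch’s, not part of Kreylos’s tools.

```python
def write_obj(path, vertices, triangles):
    """Write a triangle mesh as a Wavefront OBJ file.

    vertices:  iterable of (x, y, z) tuples.
    triangles: iterable of (a, b, c) vertex indices, 0-based here;
               OBJ face indices are 1-based, so we add 1 on output.
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in triangles:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")
```

A single-triangle mesh written this way opens directly in typical modeling and animation tools; CAD packages, as Kreylos notes, generally need a further conversion step.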

As for other potentially creative uses for Kinect, Kreylos says, “We are only seeing the tip of the iceberg. There are already demos of devices including hands-free Web browsing interfaces, virtual keyboards, and autonomous aerial vehicles.”

© 2011 Penton Media, Inc.
