Follow the Bouncing Ball

Sept. 25, 2008
DSPs make stand-alone vision-based object tracking easy.

Who, What, Where
Authored by Sokol Petushi
[email protected]
John Lehman
[email protected]
Confero Solutions Inc.
King of Prussia, Pa.
Edited by Robert Repas
[email protected]
Key points
• Embedded DSP vision tracking
• Model-simulation software
Resources
Confero Solutions Inc.
conferosolutions.com
EPIX Inc.
epixinc.com
The Mathworks Inc.
www.mathworks.com
Prosilica Inc.
prosilica.com
Texas Instruments
ti.com

Embedded-vision systems integrate control software and processing hardware with the camera imager to form a stand-alone system that needs no external PC. This minimizes system hardware and size, and potentially cost, making embedded vision well suited for applications that demand mobility and minimal user interaction. For instance, embedded-vision systems can track laboratory mice in behavior studies, live-cell cultures for in vitro assays and drug-behavior studies, the motion of vehicles and boats for surveillance, and parts in manufacturing automation. However, developing an embedded-vision system becomes a challenge when complex processing and high performance are needed in a stand-alone package.

For example, conveyor-based delivery of products to a pick-and-place robot requires the robot to recognize not only the product, but also its orientation, placement, and possible shift in position along the conveyor. And it must accomplish all this while the conveyor is in motion. Embedded-vision-tracking systems can handle the demand, but those systems are typically expensive to develop and difficult to program. New vision hardware, coupled with advanced rapid-prototyping software tools and DSPs, provides the means to bypass these obstacles and get the same results at lower cost and with shorter development times.

To see if the concept has merit, a prototype vision-based tracking system was developed using a Texas Instruments (TI) TMS320C6455 digital signal processor (DSP) and a subset of fast-prototyping software tools from The Mathworks. Other items used in the proof-of-concept experiment include a Gigabit Ethernet interface (GigE Vision) camera, a DSP starter kit (DSK) for the TMS320C6455, and a pendulumlike arrangement of five small-diameter spheres. The spheres help determine the processing capability of the C6455 for simultaneous tracking of multiple randomly moving objects.

The GigE camera was a Prosilica GE1380 with a 1.4-Mpixel imager. A Dell Latitude D620 laptop handled software development and connected to the camera through an Ethernet/PCI adaptor.

Each sphere swings free as a pendulum and so oscillates about an equilibrium position. By starting each sphere swinging in a different direction, a more-or-less random movement occurs for the tracking application test as the spheres bounce off each other.
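For reference, a sphere swinging through small angles behaves as a simple pendulum with period T ≈ 2π√(L/g), where L is the suspension length and g is gravitational acceleration, so each sphere on its own would trace a regular, predictable path. It is the collisions between spheres that turn the combined motion into the pseudorandom pattern the tracking test needs. (This is standard pendulum physics, not a measurement from the test setup.)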

The Ethernet port in the laptop connects to the DSK board through the board's integrated TCP/IP port. The USB port on the DSK board also connects to the laptop, both to transfer generated code to the C6455 DSP target and to support code debugging through the on-board JTAG emulator. Images are acquired in real time by a Simulink model running on the PC. This model, dubbed the host model, calls the camera API functions to get the video image, packs the frames, and sends them through the TCP/IP port to the DSP. Simultaneously, the host model receives and displays results from the software running in the DSP.
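As a rough illustration, the host-side loop might look like the following Matlab sketch. It assumes the Image Acquisition and Instrument Control Toolboxes; the adaptor name, frame count, and the DSK's IP address and port are placeholders, not values from the test setup.

    % Host-side sketch: grab frames from the camera and ship them to the DSP.
    % Adaptor name, address, port, and frame count are assumed values.
    vid = videoinput('gige', 1, 'Mono8');  % GigE camera handle
    tgt = tcpip('192.168.0.10', 50000);    % DSK TCP/IP server (assumed)
    tgt.OutputBufferSize = 2e6;            % room for one packed frame
    fopen(tgt);
    for k = 1:100
        frame = getsnapshot(vid);          % one raw frame via the camera API
        fwrite(tgt, frame(:), 'uint8');    % pack column-wise and send
        % ...receive and display the tracking results here...
    end
    fclose(tgt);
    delete(vid);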

Test-system software

The concept model design, simulation, embedded-code generation, and deployment to the DSP target used a combination of Mathworks software tools. The Real-Time Workshop Embedded Coder provides a framework for the development of production code and generates embedded code in ANSI-C or ISO-C formats. The Target Support Package TC6 software integrates Simulink and Matlab software with Texas Instruments eXpressDSP tools and the TI C6000 target software. The target software automates rapid prototyping on TI’s C6000 hardware. The Embedded IDE Link CC enables communication between Matlab/Simulink functions and blocks and TI’s Code Composer Studio (CC Studio) integrated development environment (IDE). Embedded Matlab is a subset of the Matlab language that supports code generation for deployment in embedded systems and accelerates fixed-point algorithms. Finally, the Video and Image Processing Blockset lets vision designers prototype, graphically simulate, and generate code for video and image-processing algorithms. This combination of Mathworks software tools, together with TI’s Code Composer Studio IDE, creates a model-design and prototyping environment for embedded-vision systems. What follows describes the experimental hardware setup and software model, along with results from the proof-of-concept experiment.

The vision-tracking algorithm runs as a single DSP/BIOS thread on the DSP in what’s called the target model. The “Main Thread” of this software is triggered each clock cycle. Basically, the main function monitors the integrated Ethernet port in the target, receives and unpacks the TCP/IP data, and provides single frames to a “Dots Detection & Tracking” subsystem block. The results of this operation are packed and sent to the host model through the same Ethernet port.
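The unpacking step can be pictured as a small Embedded Matlab function: the byte stream the host produced with frame(:) is reshaped back into an image of known size. The function name and frame dimensions below are placeholders, not details from the actual model.

    function frame = unpack_frame(rxBytes) %#eml
    % Rebuild one image from the received TCP/IP byte stream.
    % ROWS and COLS are assumed; the host packed the frame column-wise,
    % so a plain reshape restores the original pixel layout.
    ROWS = 480;
    COLS = 640;
    frame = reshape(rxBytes(1:ROWS*COLS), ROWS, COLS);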

The main vision-tracking algorithm for this model lies under the “Dots Detection & Tracking” subsystem block. Each frame the camera generates passes through an automated optimal-thresholding block to separate the sphere objects from the background. This is done by converting the input image to binary mode, with white pixels corresponding to the objects of interest. A median-filtering block then smooths the edges of the segmented objects, while a blob-analysis block calculates the centroid of each object and the smallest surrounding bounding box. The “Centroids Displacement” block identifies the centroid movement of the tracked objects between two consecutive frames. The “BBoxDraw” block draws the bounding box and centroid of the tracked objects in each frame. Finally, an annotation block overlays the numeric results on each processed frame.
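In plain Matlab terms (using the Image Processing Toolbox), the per-frame chain is roughly the sketch below. In the actual design these steps are Simulink blocks; the threshold polarity and filter size shown are assumptions.

    % Per-frame detection sketch: threshold, smooth, then blob analysis.
    bw = im2bw(frame, graythresh(frame));   % automated optimal (Otsu) threshold
    bw = ~bw;                               % dark spheres -> white pixels (assumed)
    bw = medfilt2(bw, [3 3]);               % median filter smooths blob edges
    stats = regionprops(bw, 'Centroid', 'BoundingBox');
    c = cat(1, stats.Centroid);             % N-by-2 centroids, one row per sphere
    if exist('cPrev', 'var') && isequal(size(cPrev), size(c))
        dxy = c - cPrev;                    % centroid displacement between frames
    end
    cPrev = c;                              % remember for the next frame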

One good thing about modeling in the Simulink environment is that embeddable C/C++ code is automatically generated at the end of the process. This takes place via the Real-Time Workshop Embedded Coder. The code transfers to TI’s CC Studio IDE using the Mathworks Embedded IDE Link CC. This process dynamically opens CC Studio, creates a project, and automatically populates it with the required headers and source files generated from the model by the Real-Time Workshop Embedded Coder.
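For reference, the same build-and-hand-off can be started from the Matlab command line with the Real-Time Workshop build command; the model name here is a placeholder.

    % Generate embedded C code from the model and open it in CC Studio.
    rtwbuild('target_model');   % model name is hypothetical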

The next step is to evaluate the performance and accuracy of the designed model. In the Simulink environment this takes place by first triggering the target program to run. This action initiates the thread in the DSP that monitors the TCP/IP port in the hosting DSK. Next, the PC host model initializes the camera and starts acquiring, packing, and sending image frames to the target DSP. Both the raw input video and the processed frames with overlaid numeric results can be displayed simultaneously for investigation or observation. The numeric results can serve as feedback to a controller in a larger integration and automation system.

For this test case, the vision-tracking system is set to track the first two spheres on the left side of each frame. Performance profiling, available in both Simulink and the Code Composer Studio IDE, lets designers analyze the host and target models.

A comparison of the timing between the software running only on the PC and in the DSP shows that the algorithm appears to run faster as a single DSP/BIOS thread on the embedded DSP than as equivalent software on a PC. There is room, though, to further optimize the model and the corresponding DSP and PC versions of the code.
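A simple way to obtain the PC-side numbers for such a comparison is to time the algorithm per frame, along these lines; detectAndTrack stands in for the PC version of the tracking code and is hypothetical, as is the frames test set.

    % Rough per-frame timing of the PC version of the algorithm.
    nFrames = numel(frames);                 % frames: cell array of test images
    t0 = tic;
    for k = 1:nFrames
        results = detectAndTrack(frames{k}); % hypothetical PC-side tracker
    end
    fprintf('%.2f ms per frame\n', 1000*toc(t0)/nFrames);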


The test setup for the experimental hardware used for vision tracking included an embedded DSP development starter kit, an Ethernet-based video camera, and five black spheres suspended to create pseudorandom motion.


The raw frame image on the left side becomes the processed frame image with numeric results on the right after passing through the DSP circuitry.


This block diagram details the host model used to acquire video from the camera.


The top-level target model outlines the process running within the DSP starter kit or DSK.


The vision-tracking algorithm running on the DSK inverts the image and performs filtering and blob analysis to develop the area, centroid, bounding box (bbox), and number of blobs (NrOfBlobs) used in the tracking analysis. The BBoxDraw and Annotation blocks overlay the numeric results on the raw frame data for display.


The performance of a similar PC-based vision tracking system was plotted against the performance of the DSP target system with images ranging from 5.0 to 0.7 Mpixels. The DSP system performed faster than the PC in all but the smallest of frame sizes.
