Bigger pipes for Ethernet vision

Sept. 28, 2006
The new GigE Vision spec could change the way industrial video and inspection systems are applied.

Leland Teschler

Applications of GigE Vision systems include assembly or filling lines, as illustrated by this demonstration at the recent Vision East show. Camera maker Dalsa Coreco, Ontario, Canada, devised this setup to show how its Genie GigE Vision cameras might work in such settings.

The back of a Spyder camera from Dalsa Coreco shows the typical connections found on GigE Vision equipment. Visible is the Ethernet connection, power plug, and I/O connector. Camera I/O generally includes connections to sources for triggering the acquisition of an image.

It may be time for industrial firms to rethink how they apply vision systems. The reason: A new standard for speeding vision signals over Ethernet lines could bring down the cost of real-time video. Real-time imaging has been reserved for applications that could justify its cost. That's because the only way of getting video or high-speed images back to a controller was through dedicated lines over special protocols. Such setups generally employed analog cameras and custom-made cabling routed back to a controller and high-end frame grabber. It was impractical to send such video signals over Ethernet lines except in special situations. Ordinary Ethernet has a top data rate of 10 or 100 Mbps. These rates are generally too low to handle video streams and signals from industrial vision cameras.

A new generation of Ethernet, however, has enough speed to handle such situations. Called Gigabit Ethernet, or just GigE, it has a maximum data rate of 1 Gbps. Interestingly, the first cameras able to work over GigE debuted about four years ago. Trouble is, many of them have not been able to make use of GigE for sending real-time image information. A typical approach has been to compress the image before sending it back to the host. The resulting compression and decompression of images limited the speed at which these systems could operate.

More recently developed GigE cameras may contain enough computing horsepower to generate real-time video. But there has been no standard protocol for how to send this data over GigE lines. So putting such cameras onto a network entailed some integration work. And cameras from different vendors followed slightly different protocols, complicating matters.

A recently developed vision standard addresses these interoperability issues. The standard, called GigE Vision, covers the network hardware interface, communication protocols, and camera control commands. Developers say the real benefit of GigE Vision will be systems less expensive to implement than those using other connection methods with the same video bandwidth.

In particular, GigE cabling can cost far less than the wiring for analog cameras. Bandwidth available for cameras working on GigE with standard CAT-5e and CAT-6 copper cables is about 120 Mbytes/sec. This speed is below that available over the specialized Camera Link protocol, but higher than that of several other widely used camera interfaces such as IEEE 1394 (FireWire). And the protocol works up to 100 m without any repeaters.
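A rough back-of-the-envelope calculation shows where the roughly 120-Mbyte/sec figure comes from: the 1-Gbps line rate minus per-packet framing overhead. The overhead byte counts below are typical values for Ethernet, IPv4, and UDP framing, with an illustrative 8-byte stream-protocol header assumed for the vision payload:

```python
# Rough estimate of usable GigE Vision payload bandwidth on 1000BaseT.
# Overhead figures are typical framing values, not taken from the spec.

LINE_RATE_BPS = 1_000_000_000          # 1 Gbps raw signaling rate
MTU = 1500                             # standard Ethernet payload size (bytes)
ETH_OVERHEAD = 14 + 4 + 8 + 12         # header + FCS + preamble + interframe gap
IP_UDP_HEADERS = 20 + 8                # IPv4 + UDP headers
STREAM_HEADER = 8                      # illustrative per-packet stream header

payload_per_frame = MTU - IP_UDP_HEADERS - STREAM_HEADER
wire_bytes_per_frame = MTU + ETH_OVERHEAD

efficiency = payload_per_frame / wire_bytes_per_frame
payload_mbytes_per_sec = LINE_RATE_BPS / 8 * efficiency / 1e6

print(f"efficiency: {efficiency:.1%}")
print(f"usable payload: ~{payload_mbytes_per_sec:.0f} Mbytes/sec")
```

With standard 1,500-byte frames the arithmetic lands close to the quoted 120 Mbytes/sec; larger "jumbo" frames would push the efficiency slightly higher.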

Another benefit brought by GigE Vision is that the host controller no longer needs a frame grabber card. Most GigE Vision cameras come equipped with electronics that handle tasks once relegated to frame grabbers on a host controller. For example, GigE Vision cameras generally contain enough memory to store at least the last image acquired. They may also have onboard computing power for image manipulation tasks once relegated to frame grabbers. To talk with a GigE Vision camera, a host controller needs just a normal GigE NIC card and the software to exchange information with the camera and unpack the images sent back.

It can also be easier to trigger a GigE Vision camera than one connected via analog signals. The trigger signal for catching an image usually originates near the imaged scene. It can go directly to a GigE Vision camera, generally a short distance. In contrast, trigger signals on analog systems often have had to be routed back to the frame grabber on the PC. This necessitated a separate trigger wire back to the host controller. It could be cumbersome to make this connection.

Nevertheless, there are tradeoffs for using cameras over a network. They primarily concern possible delays or loss of transmitted signals caused by factors such as network traffic. The recently defined GigE Vision standard has provisions for handling these problems. Nevertheless, GigE Vision cameras must be connected to networks in ways that minimize the possibility of delays caused by other network traffic.

There is a likelihood some image-data packets will be garbled as they traverse the network. So the GigE Vision protocol defines a procedure for retransmitting lost packets. To minimize the delay associated with this retransmission, the protocol is simpler than that used for ordinary Ethernet data. In addition, makers of GigE cameras allow for the possibility of such retransmission by equipping their cameras with image buffers big enough to hold recent data most likely to be retransmitted.
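The camera-side buffering described above can be sketched as a small ring buffer keyed by packet ID: the camera retains only its most recently sent packets, so a resend request succeeds only while the data is still fresh. This toy model (class and field names invented for illustration) captures the idea:

```python
from collections import OrderedDict

class RetransmitBuffer:
    """Toy model of a camera-side buffer that retains recently sent
    packets so a lost one can be resent on request (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = OrderedDict()   # packet_id -> payload bytes

    def record(self, packet_id, payload):
        # Keep only the most recent `capacity` packets, much as a real
        # camera keeps only enough data to cover likely resend requests.
        self.packets[packet_id] = payload
        while len(self.packets) > self.capacity:
            self.packets.popitem(last=False)   # evict the oldest packet

    def resend(self, packet_id):
        # Return the payload if still buffered, else None (too old).
        return self.packets.get(packet_id)

buf = RetransmitBuffer(capacity=3)
for pid in range(5):
    buf.record(pid, f"chunk-{pid}".encode())

print(buf.resend(4))   # recent packet: still available for resend
print(buf.resend(0))   # old packet: already evicted
```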

GigE Vision makes use of what is called UDP (User Datagram Protocol) at the transport layer of the network definition. UDP is also a part of ordinary Ethernet, but most Ethernet installations instead use the well-known TCP (Transmission Control Protocol) to deliver data. The key difference between TCP and UDP is that TCP guarantees no packets get lost during transmission. It does this by means of a handshaking and retransmission arrangement. UDP, in contrast, is a simpler protocol. It is used in GigE Vision because it is quicker than TCP. But it cannot guarantee that packets won't get lost. GigE Vision spells out what to do about lost packets at higher levels of the network protocol.
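The difference in transport semantics is easy to see in code. A UDP exchange is fire-and-forget: the sender transmits a datagram with no handshake and no built-in acknowledgment, which is exactly why delivery checks have to live at a higher protocol layer. A minimal loopback round trip:

```python
import socket

# Minimal UDP round trip on localhost, illustrating the connectionless
# transport GigE Vision builds on. Unlike TCP there is no connection
# setup and no automatic retransmission; a dropped datagram is simply
# gone unless a higher layer notices and asks for it again.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"image-packet-0001", ("127.0.0.1", port))  # fire and forget

data, addr = receiver.recvfrom(2048)
print(data)

sender.close()
receiver.close()
```

On loopback the datagram arrives reliably; on a loaded network segment it might not, which is the trade a GigE Vision system accepts in exchange for UDP's lower overhead.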

Besides streaming out image data, the GigE Vision spec defines how equipment performs a number of housekeeping chores. These include setting the camera IP address, figuring out what vision devices are on the network (device discovery), setting up the camera for tasks at hand, and sending out notices of particular events.

Device discovery basically tells what kind of vision equipment resides on the network. GigE Vision cameras send back information about themselves to the network controller: their manufacturer, make, model, serial number, IP configuration, and so forth.
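The discovery exchange follows a simple query-and-reply pattern: the controller sends a discovery datagram and each camera answers with identifying information. The sketch below models that pattern with a camera simulated in a thread; the message contents are invented for illustration, since real GVCP discovery packets are binary structures defined by the standard:

```python
import socket
import threading

# Toy discovery exchange over UDP. A "camera" thread waits for a query
# datagram and answers with identifying information. Message formats
# here are made up; real GVCP discovery messages are binary.

def camera(sock):
    msg, addr = sock.recvfrom(1024)
    if msg == b"DISCOVER":
        sock.sendto(b"ExampleCam;EC-100;sn=A123", addr)

cam_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cam_sock.bind(("127.0.0.1", 0))
cam_port = cam_sock.getsockname()[1]
threading.Thread(target=camera, args=(cam_sock,), daemon=True).start()

ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ctrl.settimeout(2.0)
ctrl.sendto(b"DISCOVER", ("127.0.0.1", cam_port))
reply, _ = ctrl.recvfrom(1024)
maker, model, serial = reply.decode().split(";")
print(maker, model, serial)
```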

A special part of GigE Vision called GVCP (for GigE Vision Control Protocol) defines a method for sending commands to the camera. It also defines commands for reading and writing to camera registers. Through GVCP, the controller spells out details of the image acquisition process. GVCP also contains provisions for letting the camera notify the central controller when specific events happen. For example, the camera might tell the controller when it has received a trigger for imaging a target.
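Register reads and writes of the kind GVCP defines can be pictured as a simple address-to-value map on the camera. The register addresses and their meanings below are invented for illustration; on a real camera they come from the device's own description (see the XML discussion further on):

```python
# Toy register interface in the spirit of GVCP's read/write commands.
# Addresses and register meanings are hypothetical.

class CameraRegisters:
    def __init__(self):
        self.regs = {
            0x0100: 640,    # hypothetical "image width" register
            0x0104: 480,    # hypothetical "image height" register
            0x0200: 0,      # hypothetical "acquisition start" register
        }

    def read_reg(self, address):
        return self.regs[address]

    def write_reg(self, address, value):
        if address not in self.regs:
            raise KeyError(f"no register at {address:#06x}")
        self.regs[address] = value

cam = CameraRegisters()
cam.write_reg(0x0100, 1024)    # configure image width
cam.write_reg(0x0200, 1)       # command acquisition to start
print(cam.read_reg(0x0100))
```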

Data streaming in GigE Vision takes place through a protocol called GVSP (GigE Vision Stream Protocol). Both GVCP and GVSP come into play where image data has been lost on the network and must be retransmitted. Here, a central controller detecting a problem packet uses GVCP to ask the camera to try sending again. (This assumes the camera has a buffer big enough to hold the last image.) The camera then uses GVSP to retransmit.
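The controller side of that resend cycle boils down to gap detection: packets of one image block carry sequential IDs, so any hole in the sequence identifies exactly which packets to request again. A simplified model of that check:

```python
# Simplified model of the controller detecting lost packets in one image
# block. In a real system the resend request for the missing IDs would
# then go out to the camera over GVCP, and the camera would answer with
# the buffered packets over GVSP.

def find_missing(received_ids, expected_count):
    """Return the packet IDs of a block that never arrived."""
    return sorted(set(range(expected_count)) - set(received_ids))

arrived = [0, 1, 3, 4, 6]          # packets 2 and 5 were lost en route
missing = find_missing(arrived, expected_count=7)
print("request resend of packets:", missing)
```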

One other aspect of GigE Vision is a requirement that cameras provide an XML file that tells which registers are used to control which features. The XML definitions adhere to a specification called GenICam, which was developed by a committee of the European Machine Vision Association. Among the details spelled out in the XML file are the means of configuring image dimensions and pixel type, how to start and stop image capture, and similar items.
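Reading such a feature-to-register mapping can be sketched with a few lines of XML parsing. The schema below is invented purely for illustration; the real GenICam schema is far richer, covering data types, value ranges, dependencies between features, and more:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of mapping named camera features to register addresses
# from an XML description. The schema here is hypothetical.

XML = """
<camera>
  <feature name="Width" address="0x0100" type="Integer"/>
  <feature name="Height" address="0x0104" type="Integer"/>
  <feature name="AcquisitionStart" address="0x0200" type="Command"/>
</camera>
"""

root = ET.fromstring(XML)
features = {
    f.get("name"): int(f.get("address"), 16)
    for f in root.iter("feature")
}
print(features["Width"])   # register address the Width feature maps to
```

With such a table in hand, host software can address any compliant camera's features by name without hard-coding vendor-specific register layouts.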

Equipment deployed on networks experiences a certain amount of delay, or latency, between the time it packetizes and sends data, and when the data arrives at the destination and is reassembled. This latency can be problematic in networks handling real-time video streams.

Network hardware can also cause problems with streamed images. The principal difficulty arises if network traffic exceeds the peak capacity of equipment such as Ethernet switches or NIC cards. In this case, the Ethernet hardware may discard packets and, thus, garble the image data. Experts advocate balancing network traffic to avoid such scenarios by, for example, putting only one camera on each Ethernet subnet.

There are also different means available to minimize the time a controller must spend reassembling data packets sent by a camera. For example, it is possible to write an optimized GigE Vision driver that intercepts packets sent by a GigE Vision camera before they are processed by Winsock.dll, the dynamic-link library MS Windows ordinarily uses to handle regular Ethernet traffic. This approach cuts the time the central processor spends manipulating image traffic. Even so, the controller CPU still spends significant time processing images. For example, it must still perform operations such as stripping off headers from each packet of image information.

To avoid this CPU-intensive image manipulation altogether, experts expect eventually to see specialized hardware for image reconstruction. Such hardware might also take over image-enhancement functions to further reduce the amount of network bandwidth necessary.

Inside GigE Vision

A simple GigE Vision system consists of a GigE Vision-compatible camera connected to a PC over a 1000BaseT Ethernet line. The only hardware at the PC needed for this connection is an ordinary Gigabit Ethernet NIC card. The GigE Vision hardware on the camera consists of a video buffer memory, logic for packetizing the image and managing the Ethernet connection, and a processor to supervise operations and interact with camera I/O such as trigger signals. (Alternatively, ordinary cameras can work over Gigabit Ethernet lines through use of special add-on interfaces.)

At the PC, software depacketizes image data and handles difficulties that may arise if camera signals get garbled while passing through the network. The PC can also interact with cameras by issuing commands defined by the GigE Vision protocol.

Developers of software for GigE Vision can approach application development in two different ways. The most straightforward way of devising GigE Vision applications is to work through Winsock.dll, the dynamic link library for MS Windows that provides a common API for network applications. The problem is that the software overhead involved may make some applications too slow. A way around this difficulty is to write applications that interact with a lower level of the network protocol. Such software usually works with network protocol software called ndis.sys, which provides an interface between device drivers and protocol drivers.

When more than one camera is on a Gigabit Ethernet, the usual recommendation is to divide the installation into subnets with no more than one camera per subnet. This scheme minimizes the chances of collisions between image data coming from different cameras.
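The one-camera-per-subnet recommendation is easy to verify programmatically when planning an installation. A short sketch using Python's standard ipaddress module (addresses are examples) counts cameras per subnet and flags any that are shared:

```python
import ipaddress

# Checking the one-camera-per-subnet recommendation: each camera address
# should fall in a different subnet. Addresses below are examples only.

subnets = [ipaddress.ip_network("192.168.10.0/24"),
           ipaddress.ip_network("192.168.11.0/24")]
cameras = [ipaddress.ip_address("192.168.10.2"),
           ipaddress.ip_address("192.168.11.2")]

def cameras_per_subnet(nets, cams):
    """Count how many camera addresses land in each subnet."""
    return {str(net): sum(cam in net for cam in cams) for net in nets}

counts = cameras_per_subnet(subnets, cameras)
print(counts)
```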

Finally, it is useful to contrast a GigE Vision setup with that of an analog camera. Analog cameras typically send RGB and camera I/O signals back to a frame grabber card installed in a PC. The frame grabber stores one or more images and interacts with I/O such as image acquisition triggers. A point to note is that cabling between the camera and the frame grabber tends to be custom made and necessarily short.

Automated Imaging Association,

Dalsa Coreco,
