In many cases, the network is the business. This is true not only from an information perspective but also from a production and control perspective. Uptime is certainly one measure of network success, while downtime costs companies millions of dollars a year. However, simply being up and running isn’t enough.
Achieving 24/7 network uptime is no small feat, to be sure, but it does not guarantee that uptime and productivity are being maximized. Traditionally, downtime is the foe we’re trained to avoid. However, “slowtime” — the time your network is running at less than full speed — adds up to hundreds of hours a year as a chronic, and usually unseen, productivity killer.
To provide high-speed, near-deterministic performance (which may be mandatory in some industrial or control networks), network traffic must be minimized. Minimizing traffic requires optimization of the entire communications network and is accomplished by following proper installation guidelines, selecting the proper materials, reducing the influence of external interference, and enhancing the electrical properties of the cabling infrastructure.
While network topology variations, increased wire speeds, and the proper selection of electronics can minimize collisions and their effects, it is also important to minimize the number of errors caused by either deficiencies in the communications infrastructure or the operating environment itself.
Green means “go,” but how fast?
It’s no secret that Ethernet, the most commonly used access method in the world, is rapidly infiltrating plant floors and industrial facilities. However, deploying Ethernet into the plant floor requires the next level of communications and cable system performance.
At each end of a network connection is a visual indicator or green light that tells whether the device is “up.” Either you are on or off, connected or not. What these lights don’t do is indicate whether the network is running at top speed. They merely indicate whether the road to your hub, switch, or node is accessible.
Passive equipment (anything that does not generate or read a signal), including cabling systems, plays an extremely important role in determining how well a network performs. In fact, Anixter Inc., Skokie, Ill., has found that an unmatched or poor-quality network cabling system can be the single biggest contributor to network inefficiency and is often the cause of the infamous “file server not found” and other nasty messages.
In an unmatched system, components are designed and built to different specifications by assorted vendors, and plugged together. If the components aren’t electrically matched, overall network performance can suffer dramatically. In some situations, cable from one manufacturer may perform less than half as well as cable from another, even though they both carry identical standards body ratings.
As bandwidth-hungry and critical applications like control, management information systems, video, and engineering or resource planning systems become necessary, so does optimum network performance. If the cabling doesn’t allow for the most efficient transfer of data, the limitations will become clear and frustration can mount. More importantly, if the plant network is mission-critical, less-than-optimum performance is simply unacceptable.
Whenever the forces and parameters that affect a network are not aligned precisely and working in sync, errors occur. As a result, data must be re-sent repeatedly. Network engineers may suggest replacing the electronics, but before that’s done, tests should be performed to ensure the cable itself is not the source of errors and that there is enough headroom to allow slightly damaged data to get through.
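The cost of those re-sent frames can be estimated with a back-of-the-envelope model. The sketch below is illustrative only — the frame size, link rate, and bit error rates are assumptions, not measurements from this article — and it assumes independent bit errors and a simple resend-on-error model:

```python
# Estimate effective throughput when errored frames must be re-sent.
# Assumes independent bit errors; real protocols differ in detail.

def frame_error_rate(ber: float, frame_bits: int) -> float:
    """Probability that at least one bit in a frame is corrupted."""
    return 1.0 - (1.0 - ber) ** frame_bits

def effective_throughput(link_mbps: float, ber: float, frame_bits: int) -> float:
    """Goodput after discounting frames lost to errors and re-sent."""
    return link_mbps * (1.0 - frame_error_rate(ber, frame_bits))

if __name__ == "__main__":
    FRAME_BITS = 1518 * 8  # maximum-size Ethernet frame
    for ber in (1e-10, 1e-8, 1e-6, 1e-5):
        fer = frame_error_rate(ber, FRAME_BITS)
        goodput = effective_throughput(100.0, ber, FRAME_BITS)
        print(f"BER {ber:.0e}: frame error rate {fer:.4%}, "
              f"goodput {goodput:.1f} of 100 Mbit/s")
```

Even a seemingly tiny bit error rate compounds quickly: at a BER of 1e-5, roughly one maximum-size frame in nine arrives damaged, which is exactly the kind of chronic slowtime that never shows up on a link light.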
When shopping for new cable, check that it has been tested to every applicable standard. Better yet, check that it exceeds the standards. Finding a system that will last as long as needed is difficult, but selecting one that has been tested with live data (not just for electrical characteristics) can help. Passive analog (MHz) examination of cable will reveal the theoretical size of an information pipeline. However, only active testing with live data (digital Mbits/sec) will produce results closer to real-world performance.
Quite obviously, many conditions can hinder a network’s performance, especially in industrial environments. The effects of temperature and electrical interference from motors, drives, transformers, contacts, etc. introduce another set of variables to be considered when selecting an industrial cabling system.
Recently completed tests show that some plant-floor operating conditions (noise, temperature, etc.) can negatively affect the performance of a cabling system and, consequently, increase error rates. In addition, deficiencies in a cabling system’s electrical properties, like impedance and common-mode noise rejection, can further add to error rates. As one example, the effects of worst-case impedance and capacitance unbalance, coupled with the effects of temperature, can degrade system performance by anywhere from 7 to 14 percent.
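To put such percentages in perspective, a quick calculation (the degradation figures simply mirror the 7 to 14 percent range above; continuous 24/7 operation is assumed) converts a sustained throughput loss into equivalent lost hours per year:

```python
# Convert a sustained throughput degradation into equivalent hours of
# full-speed operation lost per year of continuous (24/7) running.

HOURS_PER_YEAR = 24 * 365  # 8760

def equivalent_lost_hours(degradation_pct: float,
                          hours: float = HOURS_PER_YEAR) -> float:
    """Hours of full-speed capacity effectively lost to slowtime."""
    return hours * degradation_pct / 100.0

for pct in (7, 10, 14):
    print(f"{pct}% degradation ≈ {equivalent_lost_hours(pct):.0f} "
          f"equivalent lost hours per year")
```

A 7 percent degradation running around the clock works out to more than 600 equivalent lost hours a year — consistent with the “hundreds of hours” of slowtime described earlier.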
Some issues to consider, among others, when building a network vision plan:
• What applications are running today?
• What will be running tomorrow?
• Will the network support new high-speed control and information systems?
• Is there an increasing dependence on the network?
• How long is the cabling infrastructure expected to last?
While standards ratings are helpful, they are minimum requirements and ensure only minimal performance. To better meet the needs of the future, it’s important to specify cabling systems that test out above the current requirements of the network. To create and maintain a high-quality industrial network, find a cabling system that’s tested and certified well beyond the mediocre and one that addresses the unique requirements of industrial environments.
What can disrupt data transfer?
• Bends in the cable greater than 45° (usually at connection points or in the horizontal run)
• Stripping off too much jacket for termination, as outside sheathing is the only thing that holds the lay pattern in place
• Cracked or torn jacket
• Using patch cords that are different from the link cabling
• Not terminating shield; improper grounding
• Using the wrong NEC rating
• Mislabeling or no labeling of cables
• Pairs pulled out of connector blocks; poor connections
• Mixing 568A and 568B connections
• Using 25-pair non-power sum breakout cables
• Block-to-block connections made with unjacketed cross-connect pairs
• Wire breaks
• Using solid-conductor patch cords instead of stranded ones (stranded cords are more flexible and durable)
What causes slowtime?
• Temperature too high or low
• Humidity too high or low
• High RF interference
• Weak NICs
• Poor configuration/routing
• Slow PCs
• Increased network traffic
• Power users/bandwidth hogs
• Unmatched system
• Bad patch cords
• Cable stripped too much
• Cable ties too tight
• High attenuation (excessive signal loss)
• Poor near-end crosstalk
• Poor attenuation-to-crosstalk ratio
• Poor delay skew (data arrives at different times)
• Poor return loss (also known as “echo”)
• Poor equal-level far-end crosstalk numbers for cable, connecting hardware, and channel
• Splices in cable
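Several of the parameters in the list above combine into a single figure of merit. Attenuation-to-crosstalk ratio (ACR), for instance, is simply the margin, in decibels, between near-end crosstalk loss and attenuation at a given frequency. The sketch below uses illustrative values (not figures from any standard or test report) to show how that margin is read:

```python
# Attenuation-to-crosstalk ratio (ACR): NEXT loss minus attenuation,
# both in dB at the same frequency. Positive ACR means the received
# signal is still stronger than the coupled crosstalk noise.

def acr_db(next_loss_db: float, attenuation_db: float) -> float:
    """ACR in dB; higher is better, negative means crosstalk swamps the signal."""
    return next_loss_db - attenuation_db

# Illustrative single-frequency values for two hypothetical links:
links = {
    "healthy link":  {"next": 45.0, "atten": 20.0},
    "marginal link": {"next": 32.0, "atten": 30.0},
}

for name, m in links.items():
    print(f"{name}: ACR = {acr_db(m['next'], m['atten']):.1f} dB")
```

A link can pass its attenuation test and its crosstalk test individually yet still have only a sliver of ACR headroom — which is why testing above the minimum standards ratings matters.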