Improving data centre speed and efficiency
Cloud computing and bandwidth-intensive applications have made the data centre more important than ever, and managers want to squeeze every last bit of performance out of its architecture, even down to the connector level.
In today’s environment, data centres are gaining importance as organisations outsource access to their data through the cloud while supporting ever more bandwidth-intensive applications such as video. For data centre managers, that means squeezing every last bit of performance out of the architecture, right down to the connector level. Five key criteria need to be considered when choosing input/output (I/O) connectors that maximise speed and efficiency in data centres: flexibility, cost, thermal management, density and electrical performance. The same five criteria must also be optimised in the equipment’s backplane and power connectors.
Flexibility
The I/O connector should offer maximum flexibility in the choice of cable type needed for each application. For example, suppose there’s a rack of servers that all connect to a top-of-rack switch. Most of these connections are fairly short — typically three metres or less — so it’s less expensive to use copper cable. But some connections may be longer and require optical cable. By using a pluggable form factor connector such as SFP+, SFP28, QSFP+ or QSFP28, the manufacturer gives the data centre operator the ability to choose the right cable to meet specific needs.
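As a rough sketch of that decision rule, the snippet below picks a cable type from the link length. The three-metre copper threshold comes from the example above; the helper function and the per-lane rate parameter are purely illustrative, not part of any vendor specification.

```python
def choose_cable(link_length_m: float, lane_rate_gbps: int = 25) -> str:
    """Illustrative cable choice for a server-to-top-of-rack-switch link.

    Assumption: runs of about three metres or less are well served by
    passive copper; longer runs call for an optical cable assembly.
    """
    if link_length_m <= 3.0:
        # Short in-rack hop: passive copper is the cheaper option.
        return f"passive copper assembly, {lane_rate_gbps} Gbps per lane"
    # Longer reach: use an optical cable assembly instead.
    return f"optical cable assembly, {lane_rate_gbps} Gbps per lane"


if __name__ == "__main__":
    for length in (1.0, 3.0, 10.0):
        print(f"{length:>5.1f} m -> {choose_cable(length)}")
```

The point of the pluggable form factor is that this choice can be made per port, after the equipment has shipped.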
Cost
Based on industry trends, a typical server interconnect might run at 1 Gbps, but servers in more demanding applications now support 10 Gbps or even 40 Gbps. The 40 Gbps connections have been around for a couple of years, but the latest trend is to move to a 25 Gbps solution. The 40 Gbps approach implements four lanes of data at 10 Gbps each, so the manufacturer builds ‘intelligent’ equipment that takes the data, breaks it up over four lanes and then reassembles the stream into 40 Gbps. In contrast, 25 Gbps uses a single lane, so it carries less overhead, is simpler to implement in the server and the switch, and delivers more bandwidth per lane at a lower cost per gigabit.
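The lane arithmetic behind that comparison can be made concrete in a few lines. The per-lane rates come from the text above; treating lane count as a rough proxy for implementation overhead is an assumption made here for illustration only.

```python
# Aggregate bandwidth = lanes x per-lane rate. The 40 Gbps approach needs
# four 10 Gbps lanes plus the logic to split and reassemble the stream,
# whereas 25 Gbps runs over a single, faster lane.
options = {
    "40 Gbps (4 x 10 Gbps lanes)": {"lanes": 4, "lane_rate_gbps": 10},
    "25 Gbps (1 x 25 Gbps lane)":  {"lanes": 1, "lane_rate_gbps": 25},
}

for name, o in options.items():
    aggregate = o["lanes"] * o["lane_rate_gbps"]
    print(f"{name}: {aggregate} Gbps aggregate, "
          f"{o['lanes']} lane(s) to route, serialise and reassemble")
```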
Thermal management
When you take a copper cable assembly and replace it with an optical module, the signal is converted from electrical to optical, so the module is now dissipating power. This may be less critical on a server where there are only one or two interconnects, but it’s a significant factor on a switch where there might be up to 48 interconnects. Thermal management becomes critically important because now the equipment has 48 little heaters adding to the heat already generated from internal components.
With optical interconnects, manufacturers need to optimise for a new set of dynamics, and they need optical modules that dissipate less power and I/O connectors that can help to manage that thermal load.
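A back-of-the-envelope sketch shows why this matters. The per-module wattages below are assumed, illustrative figures rather than vendor specifications; only the 48-port count comes from the text.

```python
# Rough faceplate heat budget for a 48-port 1RU switch. A passive copper
# assembly dissipates almost nothing in the module itself, while an
# optical module converts the signal and dissipates real watts.
PORTS = 48
WATTS_PER_COPPER_PORT = 0.1    # assumed: passive copper, near-zero dissipation
WATTS_PER_OPTICAL_PORT = 3.5   # assumed: ballpark for a high-speed optical module

copper_heat = PORTS * WATTS_PER_COPPER_PORT
optical_heat = PORTS * WATTS_PER_OPTICAL_PORT

print(f"All copper:  {copper_heat:.0f} W of extra heat at the faceplate")
print(f"All optical: {optical_heat:.0f} W of extra heat at the faceplate")
```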
Density
On switches, connectors must be as small as possible to provide the highest I/O density while still accommodating optical modules with the thermal loads described above. Customers want 24, 48 or even more connections in a 1RU chassis. One way the industry has responded is with the new μQSFP (micro QSFP) connector: an industry consortium is now defining this standard to deliver not only higher density but also better thermal management, allowing up to 72 ports in a 1RU chassis.
Electrical performance
Although standards dictate the overall performance of an interconnect channel (the combined loss of the host board, connector, cable assembly and so on), connector manufacturers also differentiate their products by delivering enhanced signal integrity. A better-performing connector or cable assembly, for example, gives the equipment designer more design margin, enabling longer channel reaches or lower-cost PCB materials. Connectors with multiple 25 Gbps pairs are shipping today for 25, 100 and 400 Gbps applications, and connectors with 50 Gbps pairs are in development or starting to ship as well.
Backplane connectors
As equipment supports higher densities of I/O, its backplane must also support the increasing aggregate data rate. A line card with 24 or 48 100-Gigabit ports needs a backplane connector with the capacity to match, so equipment manufacturers need next-generation backplane connectors that support 10, 25, 50 Gbps and beyond per differential pair.
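A quick calculation along these lines shows how the aggregate adds up. The port counts come from the text; ignoring protocol overhead and any pairs reserved for control traffic is a simplifying assumption.

```python
# How many backplane differential pairs does a line card need?
# pairs = ceil(aggregate front-panel bandwidth / per-pair rate)
from math import ceil

for ports in (24, 48):
    aggregate_gbps = ports * 100          # 100-Gigabit front-panel ports
    for pair_rate in (10, 25, 50, 100):   # Gbps per differential pair
        pairs = ceil(aggregate_gbps / pair_rate)
        print(f"{ports} x 100G ports ({aggregate_gbps} Gbps): "
              f"{pairs} pairs at {pair_rate} Gbps/pair")
```

The same connector footprint supporting ever higher per-pair rates is what lets a chassis move from 10 to 25 to 50 Gbps line cards without being replaced.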
In fact, the backplane is the first thing equipment designers think about. They’re going to sell this equipment to large network providers who want that equipment to last for as many years as possible. If they can design a backplane chassis so it can support a first-generation line card at 10 Gbps, and a second-generation line card can plug into the same chassis at 25 Gbps, then 50 Gbps, then 100 Gbps, the same equipment can be retained in that data centre for a long time — only the line cards need to be replaced.
Power architectures
The equipment development engineer is also focused on the power delivery architecture. As discussed, higher bandwidth and higher I/O density lead to higher power requirements. Connector suppliers enable these architectures with higher-density power connector systems that minimise loss (voltage drop), whether power is delivered over a busbar, through the backplane or via cables.
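A simple Ohm’s-law sketch illustrates why lower-loss power connectors matter as loads grow. The 12 V distribution voltage, contact resistance and load figures below are illustrative assumptions only.

```python
# At a fixed distribution voltage, current scales with load, and both the
# voltage drop (I x R) and the heat wasted in the connector (I^2 x R)
# grow with it.
SUPPLY_V = 12.0
CONTACT_RESISTANCE_OHM = 0.001   # assumed: 1 milliohm total contact resistance

for load_watts in (300, 600, 1200):
    current = load_watts / SUPPLY_V                   # amps through the connector
    drop = current * CONTACT_RESISTANCE_OHM           # volts lost across the contacts
    wasted = current ** 2 * CONTACT_RESISTANCE_OHM    # watts dissipated as heat
    print(f"{load_watts:>5} W load: {current:.0f} A, "
          f"{drop * 1000:.0f} mV drop, {wasted:.1f} W lost in the connector")
```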
Connectors matter in data centre equipment designs. By using the above criteria, network equipment makers can have a significant impact on their products’ efficiency and performance. The newest generation of electrical connectors allows equipment developers to keep up with the challenging demands of our highly connected world.