Next-generation data centres
While emerging technologies like 40 and 100 gigabit ethernet, fibre channel over ethernet, IP convergence and server virtualisation offer real benefits for data centres, data centre managers need to understand how these technologies work, what they mean for the data centre, and which strategies and solutions will best support them, so that their infrastructures are ready and able. Compounding the onset of these technologies is the need to lower total cost of ownership.
High-speed links in data centres are vital for transmitting increasing amounts of information between more and more devices. Data transmission is growing so significantly that it’s expected to be six times larger in 2012 than it was in 2007. According to Cisco forecasts, overall IP traffic is expected to grow to over 45 billion gigabytes per month by 2012.
To keep pace with growing data transmission, data centres today are experiencing increased bandwidth and server requirements. Server bandwidth requirements are forecast to move from 10 to 40 Gbps in the next five years, and reach 100 Gbps within the next decade.
The need to reduce the total cost of ownership (TCO) is also critical. Operational costs alone can account for 50% of total costs, but increasing efficiencies can significantly reduce them. Having the right solutions in place today therefore enables upgrades tomorrow that significantly reduce TCO.
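The arithmetic behind that claim is simple. As a rough illustration (the function name and figures below are purely hypothetical, not from any vendor model), when operations account for half of TCO, every efficiency gain on the operational side flows straight through to total cost:

```python
def tco_after_efficiency(total_cost, operational_share, efficiency_gain):
    """Total cost of ownership after operational costs drop by efficiency_gain.

    operational_share: fraction of TCO that is operational (e.g. 0.50)
    efficiency_gain: fractional reduction in operational costs (e.g. 0.20)
    """
    operational = total_cost * operational_share
    return total_cost - operational * efficiency_gain

# With operations at 50% of TCO, a 20% efficiency gain cuts overall TCO by 10%
print(tco_after_efficiency(1_000_000, 0.50, 0.20))  # 900000.0
```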
Technological advancements like those outlined in the introduction also have implications for data centre infrastructure, including new cabling and connector solutions, higher fibre densities, higher bandwidth performance, and the need for enhanced reliability, flexibility and scalability. Fortunately, many solutions and strategies are available today that can help data centre managers prepare while simultaneously lowering TCO.
40 and 100 GbE
Standards for 10 GbE over fibre and copper already exist, with many data centres running these applications in their backbone cabling where large numbers of gigabit links aggregate. However, emerging server technologies and enhanced aggregation are calling for even faster connections. In response, IEEE is developing standard 802.3ba that will support data rates for 40 and 100 GbE. This standard addresses multimode and singlemode fibre, as well as very short distances over four lanes of shielded balanced copper cabling.
Copper
Transmitting 40 or 100 GbE over shielded copper cabling will require 10 Gbps over each lane (four lanes for 40 GbE and 10 lanes for 100 GbE). This will likely be limited to very short distances of approximately 10 m for equipment-to-equipment connections and will likely not be intended for backbone and horizontal cabling.
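The lane arithmetic above can be checked in a couple of lines. This sketch simply multiplies lane count by per-lane rate (the dictionary and variable names are illustrative, not taken from the standard):

```python
LANE_RATE_GBPS = 10  # each shielded copper lane carries 10 Gbps

copper_lanes = {"40GbE": 4, "100GbE": 10}  # lanes per interface

for name, lanes in copper_lanes.items():
    print(f"{name}: {lanes} lanes x {LANE_RATE_GBPS} Gbps = {lanes * LANE_RATE_GBPS} Gbps")
```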
Multimode fibre
To run 40 GbE over 100 m of multimode fibre, the standard requires parallel optics with eight multimode fibres (four transmitting and four receiving at 10 Gbps each) using an MPO-style connector - a high-density, multifibre connector that terminates up to 12 fibres in one connector. Because only eight fibres are required for 40 GbE, the other four fibres in the connector go unused.
Running 100 GbE over multimode fibre requires 20 fibres (10 transmitting and 10 receiving at 10 Gbps each) within a single 24-fibre MPO connector, or two 12-fibre MPO connectors, with four fibres unused.
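The fibre counts above follow directly from the lane structure: each lane needs one transmit fibre and one receive fibre. A small Python sketch (the function name is ours, not from the standard) works out fibres used, 12-fibre MPO connectors needed and fibres left spare for each rate:

```python
import math

MPO_FIBRES = 12  # fibres terminated by one MPO-style connector

def mpo_allocation(lanes, gbps_per_lane):
    """Fibre accounting for a parallel-optics link.

    Returns (fibres used, 12-fibre MPOs needed, unused fibres, aggregate Gbps)."""
    fibres_used = 2 * lanes  # one transmit and one receive fibre per lane
    connectors = math.ceil(fibres_used / MPO_FIBRES)
    unused = connectors * MPO_FIBRES - fibres_used
    return fibres_used, connectors, unused, lanes * gbps_per_lane

print(mpo_allocation(4, 10))   # 40 GbE:  (8, 1, 4, 40)   - 8 fibres, 1 MPO, 4 spare
print(mpo_allocation(10, 10))  # 100 GbE: (20, 2, 4, 100) - 20 fibres, 2 MPOs, 4 spare
```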
Within data centres, this means significantly more fibre per link than the two-fibre duplex connections used for 10 GbE - six times more for 40 GbE (one 12-fibre MPO) and 12 times more for 100 GbE (24 fibres).
MPO connectors will be required to support 40 and 100 GbE over multimode fibre. These connectors are typically factory pre-terminated to multifibre cables, purchased in predetermined lengths. This requires careful planning to ensure exact measurements, or the use of proper slack management. Data centres already deploying MPO connectors for better management and density will be better prepared for 40 and 100 GbE.
Running 40 and 100 GbE per the proposed IEEE standard will require a minimum of OM3 laser-optimised 50 µm multimode fibre. Reduced insertion loss and minimal delay skew will also be key considerations for 40 and 100 GbE. Installing high-performance fibre cable and components today is therefore vital to supporting 40 and 100 GbE tomorrow.
With up to 12 times the amount of fibre needed to support 40 and 100 GbE, managing fibre density is critical for future data centres. Issues like planning physical spaces, properly managing and routing large amounts of fibre in and above racks, and smaller diameter solutions that can save space and enable higher density should be considered.
Singlemode fibre
Running 40 GbE over singlemode fibre will require two fibres (one per direction), each carrying four wavelengths at 10 Gbps using wavelength division multiplexing (WDM) technology. Running 100 GbE will require two fibres, each carrying four wavelengths at 25 Gbps using WDM.
WDM combines multiple signals on a single fibre using different wavelengths. Multiple signals, each with their own wavelength, are transmitted on the fibre and combined by a multiplexer at the source end, then de-multiplexed at the destination. This provides a scalable way to increase the capacity of existing singlemode fibre infrastructure.
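Conceptually, a multiplexer is just a mapping from wavelength to signal, and the demultiplexer recovers each signal by filtering on its wavelength. This toy Python sketch (the function names and wavelengths shown are illustrative only) captures the idea:

```python
def multiplex(channels):
    """Combine several signals onto one 'fibre', keyed by wavelength (nm)."""
    fibre = {}
    for wavelength_nm, signal in channels:
        if wavelength_nm in fibre:
            raise ValueError(f"wavelength {wavelength_nm} nm already in use")
        fibre[wavelength_nm] = signal
    return fibre

def demultiplex(fibre, wavelength_nm):
    """Recover one signal at the destination by its wavelength."""
    return fibre[wavelength_nm]

# Four 10 Gbps channels sharing one fibre, as for 40 GbE (wavelengths illustrative)
fibre = multiplex([(1271, "ch0"), (1291, "ch1"), (1311, "ch2"), (1331, "ch3")])
print(demultiplex(fibre, 1311))  # ch2
```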
While WDM for 40 and 100 GbE over singlemode fibre will be ideal for long-reach (to 10 km) and extended-reach (to 30 km) distances, it will likely not be cost effective for shorter 100 m distances like those in campuses or data centres. However, as the standards are finalised and equipment is introduced, data centre managers would be wise to examine the cost differences between singlemode, multimode and copper cabling solutions for 40 and 100 GbE.
FCoE
Over the past decade, most data centre managers and storage equipment manufacturers have adopted fibre channel as a means of transmitting data for SANs. This highly reliable, low-latency technology allows simultaneous high-speed communications among servers and data storage systems via fibre. However, most data centres use ethernet for transmitting client-to-server or server-to-server traffic. To support both fibre channel and ethernet, data centre managers have had to deploy parallel infrastructures, which increases cost and management overhead.
FCoE is a new standard that aims to consolidate both SAN and ethernet onto one common network interface, enabling the use of the same cable for both purposes, improving server utilisation and cable management, and reducing port numbers and power consumption. FCoE works by encapsulating fibre channel frames within ethernet data packets.
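The encapsulation itself is straightforward to sketch. The simplified Python fragment below builds an ethernet frame whose EtherType (0x8906, the value assigned to FCoE) marks the payload as a fibre channel frame; a real FCoE frame also carries version bits, SOF/EOF delimiters and padding, all omitted here, and the MAC addresses and payload are made up for illustration:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType that identifies an FCoE payload

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a fibre channel frame in an ethernet frame (simplified sketch)."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return header + fc_frame

frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",   # illustrative MACs
                             b"\x0e\xfc\x00\x00\x00\x02",
                             b"fc-frame-bytes")              # stand-in FC frame
print(frame[12:14].hex())  # 8906 - the receiver uses this to hand the payload to FC
```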
To support FCoE, higher bandwidth solutions will be required, as well as the flexibility to support the new data centre configurations that will likely be deployed.
FCoE requires 10 GbE as a minimum, which means that anyone planning for FCoE must deploy cabling capable of supporting 10 GbE. FCoE will likely be deployed using top-of-rack switches that provide access to the existing ethernet and fibre channel SANs. This is different to the centralised approach most data centres currently use. While top-of-rack reduces cabling, it also requires flexibility and manageability because reconfigurations need to be made within each rack rather than at a centralised location.
IP convergence
Voice, data, video, security and building management systems have now become digital, allowing all forms of traffic to converge over a common infrastructure using IP technology. Voice and data are now commonly converged using VoIP, while other IP-based applications like video over IP, access control systems and building automation systems are beginning to be deployed.
As IP convergence and the number of networked devices continue to grow, data centres will see significant increases in the amount of cabling and equipment to support new applications, as well as the need for increased reliability. There will also be more cabling in horizontal pathways, so they must be properly sized to accommodate the extra cabling while also enabling adequate cable management and room for growth. Smaller-diameter cabling can go a long way in saving costly pathway space.
To support more applications like video surveillance and building automation, data centres will require more equipment, space and management. Because these are mission-critical systems, downtime can create life-safety risks and simply cannot be tolerated. The IP-converged network therefore requires extremely reliable components and design strategies, such as redundancy, that ensure availability.
Server virtualisation
Server virtualisation involves running multiple virtual operating systems on one physical server. It helps by reducing capital expenditure, maximising resources and space availability, improving server utilisation and reducing power and cooling.
Most enterprises deploying server virtualisation are consolidating applications onto one physical server at a ratio of 4:1, with some experts predicting the ratio to grow to as much as 20:1. With so many applications running on one physical server, the need for availability and bandwidth increases significantly.
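The consolidation ratios quoted above translate directly into physical-server counts. A quick sketch (the function name and the server count of 100 are illustrative):

```python
import math

def servers_after_consolidation(app_servers, ratio):
    """Physical hosts left after consolidating at 'ratio' applications per host."""
    return math.ceil(app_servers / ratio)

# 100 single-application servers at the ratios mentioned in the text
print(servers_after_consolidation(100, 4))   # 25 hosts at 4:1
print(servers_after_consolidation(100, 20))  # 5 hosts at 20:1
```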
Since downtime could limit access to several applications, redundancy is needed to maintain server availability, which requires a second set of cables running to a back-up network interface.
While server virtualisation theoretically reduces the number of servers and cables in data centres, the redundancy and greater bandwidth needed to support it actually require more cabling. Furthermore, the demand for capacity is outpacing the gains provided by virtualisation, so the number of servers and the amount of associated cabling continue to grow.
Lower TCO
While most new technologies aim at reducing TCO, the overall increase in data transmission and equipment is putting strain on data centre power, cooling and space. Today’s enterprises are struggling to find a balance between implementing new technologies and ensuring lower TCO through more efficient operations, reduced energy consumption and lower life-cycle costs.
The longer the data centre infrastructure can support changing technology needs, the lower the life-cycle cost of the components. Ensuring scalability and reliability across all components in the data centre is therefore paramount.
Strategies and solutions
With several new technologies on the horizon, data centre managers should consider the solutions and strategies available today that will best support them. The optimum cabling solution should support MPO connectivity, high-density cabling and connectivity, high bandwidth and performance, and enhanced reliability, flexibility and scalability - all of which ultimately lower TCO.
MPO solutions will be a must-have for 40 and 100 GbE. Thankfully, data centre managers have become increasingly comfortable purchasing predetermined lengths of multifibre cables pre-terminated with MPO connectors. These connectors should be factory tested in a clean environment to ensure precise performance for 40 and 100 GbE. They also ensure lower TCO because they offer significantly reduced labour costs versus field-termination or splicing and are fast and easy to install.
Some new technologies will require more fibre links in data centres, necessitating high-density solutions that properly manage high fibre counts and provide scalability to support more fibre cabling. Because MPO connectors terminate up to 12 fibres in one connector the same size as a single-fibre SC connector, they maximise space savings.
Smaller cable diameters can also help facilitate higher densities in cable management and pathways. Pre-terminated MPO cables are a small, round, loose-tube configuration that includes 12 fibres in a 3 mm jacket, slightly larger than traditional 2-fibre cables in 2 mm jackets. Round loose-tube cable is also easier to manage and route through pathways than traditional multifibre ribbon cables. Smaller cables help reduce cable blockage in cabinets, allowing improved airflow in and around equipment for optimum cooling and less energy consumption.
Several imminent technologies call for higher bandwidth cabling and precise performance. Because fibre cabling is backward-, not forward-compatible, it is critical to choose fibre today that will support future bandwidth requirements.
Proper cable management is required to maintain the reliability, flexibility and scalability of cabling and connections in data centres, and ultimately to lower TCO. Cabling and connectivity need to be deployed with proper bend-radius protection to reduce signal attenuation and maintain fibre performance, to improve cable routing and allow work on connectors and cables without affecting adjacent circuits or ports, and to provide physical protection for cables and patching. Without end-to-end cable management, cables can pile up in raceways, the minimum bend radius can be violated, connector access can become difficult and tracing cables can take hours - all of which impacts the ability to support and deploy new technologies.
Proper cable management lowers TCO by enabling proper flow of cool air into and out of the cabinet, improving equipment life cycle and reducing the need for additional cooling that increases energy consumption. Cable management that provides accessibility to cables and connectors also makes it easier to locate components during network reconfiguration, saving time and reducing operation costs.
Summary
Preparing data centres for imminent technologies is more cost effective than deploying piecemeal solutions later to accommodate them. IT professionals upgrading data centres should consider MPO solutions, high-density solutions, higher bandwidth cabling and cable management that ensure reliability, flexibility and scalability.
With lower TCO as a top concern, many of the solutions available today not only prepare for next-generation data centres but also enable more efficient operations, reduce power consumption and lower life-cycle costs.