Simple measures to reduce energy consumption in data centres
Thursday, 04 February, 2010
The world's continued thirst for energy is a recurring story in the news headlines every day - global warming forecasts rising temperatures, melting ice and population dislocations due to the accumulation of greenhouse gases in our atmosphere from the use of carbon-based energy. There are arguments for and against the dire predictions of global warming, yet one fact is undeniable - over the past 10-20 years, earth's inhabitants have collectively been consuming energy at a faster rate than ever before.
Nowhere is the increasing consumption of energy more apparent than in the data centre, where power consumption has doubled in the past five years and is expected to rise at an even steeper rate in 2010. One of the most significant culprits of steadily increasing power requirements is the proliferation of servers in data centres. For example, according to the Worldwide Server Power and Cooling Expense 2006-2010 Forecast by IDC, the average small-to-medium-sized server required 150 W of power in 1996. These servers are expected to require over 450 W by 2010.
Of course, increased power requirements have the by-product of creating more heat, which necessitates cooling. One survey of IT executives shows that 45% of data centre energy consumption goes to chiller/cooling towers and computer room air conditioners (CRACs).
According to IDC, in the year 2000, for every dollar spent on new servers, 21 cents was spent on power and cooling. By 2010, IDC predicts that every dollar spent on new servers will require 71 cents on power and cooling. This massive increase has led to the formation of industry consortiums such as The Green Grid in the USA and the Green Building Council of Australia that are specifically focusing efforts on lowering power consumption in buildings, including the data centre.
For some businesses, increased energy costs are merely considered a cost of doing business. Yet there is a point at which these costs dampen profits and limit investment needed to grow and modernise a business. Worse are shortages of electricity occurring in some areas around the world that prohibit businesses from expanding data centre operations to keep pace with their growing company.
There are many ways to promote conservation of electricity in the data centre. For example, server virtualisation allows multiple applications to run on a single server, which means fewer servers to power and cool. In practice, a data centre may be able to reduce its server count from 70 to 45, for example. Virtualisation exploits the fact that a server draws much the same power - and gives off much the same heat - whether it is 20% or 90% utilised, so consolidating workloads onto fewer, busier servers dramatically reduces power and cooling costs across the data centre.
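As a rough illustration of the arithmetic behind that 70-to-45 consolidation, the sketch below estimates the annual energy saved. The per-server wattage and the cooling share reuse the figures cited in this article; everything else is an assumption for the sake of the example, not vendor data.

```python
# Illustrative estimate of the savings from server consolidation.
# SERVER_POWER_W reuses the ~450 W per-server figure cited for 2010-era
# servers; COOLING_OVERHEAD reuses the 45% cooling share from the survey.

SERVER_POWER_W = 450
COOLING_OVERHEAD = 0.45

def annual_energy_kwh(num_servers: int) -> float:
    """Annual IT energy plus a proportional cooling load, in kWh."""
    it_kwh = num_servers * SERVER_POWER_W * 24 * 365 / 1000
    # Treat cooling as an extra load proportional to IT energy, so that
    # cooling ends up as 45% of the total (0.45 / 0.55 of the IT load).
    return it_kwh * (1 + COOLING_OVERHEAD / (1 - COOLING_OVERHEAD))

before = annual_energy_kwh(70)
after = annual_energy_kwh(45)
print(f"saving: {before - after:,.0f} kWh/year ({1 - after / before:.0%})")
```

With these assumptions the 70-to-45 reduction removes roughly 180,000 kWh a year, a saving of about 36% - simply the fraction of servers retired, since the model scales linearly.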
Yet there are many other ways to reduce power and cooling costs in the data centre - ways that are far simpler and less expensive to implement.
Airflow management in cabinets
New server platforms can support 800-1000+ optical fibre terminations or 600-1000+ copper cable terminations per chassis. The prospect of crowding too many cables into vertical managers poses a problem for thermal management in cabinets.
When air cannot circulate properly in the cabinet, data centre fans must move more air and cooling units must chill it further - both of which consume additional, unnecessary electricity.
For years, the IT industry has promoted the benefits of increased rack and cabinet density. Servers are smaller than ever and more can fit into the same space. The rationale has always been to make the best use of data centre floor space. Yet today the balance is shifting. New servers are consuming more energy than ever before, causing data centre and facility managers to weigh spiking operating costs, due to greater energy usage, against the capital cost of 'wasted' space in lower-density configurations in raised floor environments. Instead of just focusing on density, energy efficiency demands that data centre and facilities managers look at managed density.
Managed density recognises that there really is a limit to the number of cable terminations and servers that can safely and economically be housed in cabinets. A prime issue is potential blocking of airflow caused by too many cables within the cabinet. One solution is to limit the number of servers and cable terminations in a cabinet, especially in copper racks where cable diameter is larger. Another is to employ basic cable management within the cabinet, such as securing cables along the entire length of vertical cable managers to open airflow. Similarly, integrated slack management systems locate and organise patch cords so that maximum space is available for flow of cool air into and out of the cabinet.
Using smaller-diameter copper cable is another means to improve airflow within the cabinet. For many data centres, copper equipment terminations are still prevalent, especially with the ability to push 10 Gbps over category 6A cabling.
The choice of copper cabling can affect airflow because some cables have a much smaller outside diameter. With proper cable management and smaller-diameter cables, a fill ratio of 60% in vertical cable guides supports higher-density configurations without compromising airflow; higher server density becomes possible without added electricity use for fans and cooling equipment.
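The fill ratio itself is easy to check: total cable cross-sectional area divided by the cross-sectional area of the vertical manager. The sketch below uses hypothetical cable diameters and manager dimensions - not figures from any particular product - to show how a smaller outside diameter can bring the same cable count under the 60% guideline.

```python
import math

# Hypothetical dimensions for illustration only; check actual cable OD
# and vertical manager size against manufacturer specifications.

def fill_ratio(num_cables: int, cable_od_mm: float,
               duct_width_mm: float, duct_depth_mm: float) -> float:
    """Fraction of a vertical manager's cross-section occupied by cable."""
    cable_area = num_cables * math.pi * (cable_od_mm / 2) ** 2
    duct_area = duct_width_mm * duct_depth_mm
    return cable_area / duct_area

# 300 cables in an assumed 100 x 150 mm vertical manager:
# a 7.4 mm OD cable versus a 6.1 mm OD alternative.
for od in (7.4, 6.1):
    print(f"{od} mm OD, 300 cables: fill = {fill_ratio(300, od, 100, 150):.0%}")
```

Under these assumptions the 7.4 mm cable overfills the manager at about 86%, while the 6.1 mm cable comes in just under 60% - the same terminations, but with room left for air to move.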
Airflow management in the data centre
There are many simple solutions to improve overall airflow efficiency in the data centre that can be implemented immediately and without major changes in the design and layout of the data centre. In general, unrestricted airflow requires less power for cooling efforts.
Each incremental improvement results in less energy to cool equipment - reducing costs and limiting output of greenhouse gases from the power company. These simple solutions include:
- Plug unnecessary vents in raised floor perforated tiles;
- Plug other leakages in the raised floor by sealing cable cut-outs, sealing the spaces between the floor and walls and replacing missing tiles;
- Reduce air leakage by using gaskets to fit floor tiles more securely onto floor frames;
- Ensure that vented floor tiles are properly situated to reduce hot spots and wash cool air into equipment air intakes;
- Manage heat sources directly by situating small fans near the heat source of equipment;
- Use time-of-day lighting controls or motion sensors to dim the lights when no one is in the data centre; lights use electricity and generate added heat, which requires added cooling;
- Reduce overall data centre lighting requirements by using small, portable lights within each cabinet, which puts light where technicians need it;
- Turn off servers that are not in use.
There are also many avenues for improving data centre airflow that require more planning and execution. The most documented and discussed is the hot aisle/cold aisle configuration for cabinets.
This raised-floor design manages airflow and temperature by keeping the hot aisles hot and the cold aisles cold. Servers and other equipment are mounted in cabinets so that their air inlets face a cold aisle and their hot-air exhausts face only into a hot aisle. Cool air is pushed through perforated floor tiles into the cold aisles only; equipment exhausts hot air into the hot aisles.
Designing hot aisle/cold aisle presents its own set of challenges, including:
- Ensuring that cool air supply flow is adequate for the space;
- Sizing aisle widths for proper airflow;
- Positioning equipment so hot air does not re-circulate back into equipment cool air inlets;
- Adding or removing perforated floor tiles to match the air inlet requirements of servers and other active equipment;
- Accounting for aisle ends, ceiling height and above cabinet blockages in airflow calculations.
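The first of those checks - whether the cool air supply is adequate - can be estimated with the standard sensible-heat relation: required airflow equals the heat load divided by air's volumetric heat capacity (roughly 1.2 kJ per cubic metre per kelvin at room conditions) times the inlet-to-outlet temperature rise. The cabinet loads and the 10°C rise below are assumptions for illustration, not design values.

```python
# Sizing check for a cold aisle: volumetric airflow needed to absorb a
# heat load, using air's volumetric heat capacity of ~1.2 kJ/(m^3.K).

def required_airflow_m3s(heat_load_kw: float, delta_t_c: float = 10.0) -> float:
    """Cool air volume (m^3/s) needed for a given inlet-to-outlet rise."""
    return heat_load_kw / (1.2 * delta_t_c)

# A hypothetical cold aisle feeding ten cabinets at 4 kW each:
aisle_load_kw = 10 * 4
flow = required_airflow_m3s(aisle_load_kw)
print(f"{aisle_load_kw} kW aisle needs ~{flow:.1f} m^3/s "
      f"(~{flow * 2119:.0f} CFM) through its perforated tiles")
```

Dividing that total by the flow each perforated tile can deliver gives the number of tiles the aisle needs - which is exactly the add-or-remove-tiles exercise in the list above.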
Another ready means to improve cooling is removing blockages under the raised floor. The basic cable management technique of establishing clearly defined cable routing paths with raceway or cable trays under the floor keeps cables organised, using less space and avoiding the tangled mess of cables that can restrict airflow. Moving optical fibre cables into an overhead raceway as well as removing abandoned cable and other unnecessary objects from below the floor also improves airflow.
Dust and dirt are enemies of the data centre. Dust clogs equipment air inlets and clings to the inside of active equipment, forcing fans to move more air and adding to cooling costs. There is probably already an active program for cleaning above the raised floor; it is just as important to periodically clean below the raised floor to reduce dust and dirt in the air.
There are many other initiatives that can be implemented to improve airflow throughout the data centre and reduce energy costs. These include:
- Move air conditioning units closer to heat sources;
- During cooler months and in the cool of the evening, use fresh air instead of re-circulated air;
- Reduce hot spots by installing blanking panels to increase CRAC air return temperature;
- Consider using ducted returns.
According to power supply manufacturer APC, implementing the hot aisle/cold aisle configuration can reduce electrical power consumption by 5-12%. The same analysis showed that even simple measures, such as proper placement of perforated floor tiles, can reduce power consumption by as much as 6%. Even the smallest measures, applied together, can reduce power consumption significantly.
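Note that such percentages do not simply add. If each measure independently removed a fraction of the load that remains after the previous one, the combined effect would compound. A quick sketch, reusing the APC percentages above purely for illustration:

```python
# Combined effect of several independent percentage reductions:
# each measure removes a fraction of whatever load remains.

def combined_reduction(*fractions: float) -> float:
    """Overall reduction when each fraction applies to the remaining load."""
    remaining = 1.0
    for f in fractions:
        remaining *= 1 - f
    return 1 - remaining

# Hot aisle/cold aisle (12%) plus perforated tile placement (6%):
print(f"{combined_reduction(0.12, 0.06):.1%}")  # slightly under 18%
```

This is why a portfolio of modest measures is worthwhile even though no single one is dramatic: a 12% and a 6% reduction together still remove about 17% of the load.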
Green Star certification and the data centre
The Green Building Council of Australia has developed a rating system - Green Star - a comprehensive, national, voluntary environmental rating scheme that evaluates the environmental design and achievements of buildings. Green Star was developed for the property industry in order to:
- Establish a common language;
- Set a standard of measurement for green buildings;
- Promote integrated, whole-building design;
- Recognise environmental leadership;
- Identify building life-cycle impacts; and
- Raise awareness of green building benefits.
Green Star covers a number of categories that assess the environmental impact that is a direct consequence of a project’s site selection, design, construction and maintenance. The nine categories included within all Green Star rating tools are:
- Management;
- Indoor Environment Quality;
- Energy;
- Transport;
- Water;
- Materials;
- Land Use and Ecology;
- Emissions;
- Innovation.
These categories are divided into credits, each of which addresses an initiative that improves, or has the potential to improve, environmental performance. Points are awarded in each credit for actions that demonstrate that the project has met the overall objectives of Green Star. Once all claimed credits in each category are assessed, a percentage score is calculated and Green Star environmental weighting factors are then applied. Green Star environmental weighting factors vary across states and territories to reflect diverse environmental concerns across Australia.
The following Green Star certified ratings are available:
- 4 Star Green Star Certified Rating (score 45-59) signifies Best Practice
- 5 Star Green Star Certified Rating (score 60-74) signifies Australian Excellence
- 6 Star Green Star Certified Rating (score 75-100) signifies World Leadership
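The scoring flow described above - claimed credits assessed per category, percentage scores weighted and summed, and the total mapped to a rating band - can be sketched as follows. The rating bands are those published above; the category scores are invented for illustration, and real weighting factors vary by state and territory.

```python
# Minimal sketch of the Green Star scoring flow. The rating bands match
# the published thresholds; the weighted category scores are hypothetical.

def certified_rating(weighted_score: float) -> str:
    """Map an overall weighted score to the certified rating bands."""
    if weighted_score >= 75:
        return "6 Star (World Leadership)"
    if weighted_score >= 60:
        return "5 Star (Australian Excellence)"
    if weighted_score >= 45:
        return "4 Star (Best Practice)"
    return "Not certified"

# Hypothetical weighted scores across the nine categories:
category_scores = {"Management": 9.0, "Indoor Environment Quality": 10.5,
                   "Energy": 18.0, "Transport": 5.0, "Water": 8.5,
                   "Materials": 7.0, "Land Use and Ecology": 4.0,
                   "Emissions": 3.5, "Innovation": 2.0}
overall = sum(category_scores.values())
print(overall, certified_rating(overall))
```

With these invented figures the project lands at 67.5 points, a 5 Star rating; a few more Energy credits would be the most direct route towards the 75-point World Leadership band.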
While there is no direct correlation between the Green Star rating and passive network infrastructure solutions, certain product choices can prove critical to overall project certification.
Proper cable management, for example, is a highly effective way to decrease energy consumption in the data centre. Many of ADC Krone’s products are designed for minimum impact on the environment. Products such as Glide, RiserGuide and FibreGuide - to name just a few - can help improve passive airflow, thereby improving overall energy efficiency, one of the key elements in Green Star certification.
According to IBM, infrastructure upgrades can result in cooling cost savings of 15-40%. Data centre engineers and designers can leverage the product and design expertise of cabling infrastructure manufacturers, such as ADC Krone, to reap these types of immediate benefits, allowing them to focus on other areas critical to cost and energy savings, as well as pending site certification.
Conclusion
There are many ways to reduce cooling requirements for data centres. Improving cable management, stopping air leakages, removing cable dams under the floor and choosing smaller diameter cable to improve airflow are just a handful of the measures available to data centre planners and managers.
ADC Krone supports green data centres by designing and manufacturing solutions for passive airflow improvement that reduce cooling and power requirements. A well-planned infrastructure will make a tremendous difference for the environment.