Cutting back power consumption
Just as the industrial revolution did before it, the information age has transformed our world and the way we operate within it. It’s changed the way we access data and do business; the way we communicate, educate and consume information. It has given rise to entire industries and brought countless changes to employment and careers; jobs have been created and others rendered obsolete, often virtually overnight.
Communication methods and the equipment we relied on 20 years ago, such as faxes and dial-up modems, now seem as quaint as the quill and inkwell. The advent of mobile computing, the prevalence of smartphones and a move to cloud computing in more recent times have multiplied the effect and we now take for granted that digital files are permanent and can be retrieved at any time, from any location, in a matter of seconds.
For data centre owners and operators, the challenge is to cope with the constantly changing face of the industry. Not only must they factor in the impact of continual shifts in customer expectations and practices to adequately manage the mounting power consumption this demand creates, but they must also gaze into the crystal ball to ensure the data centre delivers its projected life expectancy (just under 20 years) and meets commercial targets.
There are three principal considerations for the development and ongoing operation of a data centre, the nebulous nature of which makes future-looking decisions all the more difficult.
Space
While the evolution of technology continues to shrink the physical size of hardware and we live in the era of virtual servers, there’s no doubt the landscape has changed considerably since the 1980s, when a 1 GB hard drive was the size of a jukebox. However, the sheer volume of data requiring storage ensures that space still dictates the direction for design and operations.
It’s hard to imagine total global capacity, but in February 2011, the University of Southern California released research which calculated current worldwide data storage at 295 exabytes, or 295 billion gigabytes. And it keeps growing; in a study conducted by IT research company IDC in June of the same year, it was predicted that the world will generate 50 times current data production levels by 2020. It’s all got to be stored somewhere. It’s growing at such a rapid rate that in the not-too-distant future we’ll hit a level that we haven’t even derived a term for yet ... but that’s another story.
Power consumption
There’s no denying data centres are power-hungry beasts: power to run the IT equipment itself, then power to run cooling and other environmental controls and ancillaries like lighting. Power consumption in a data centre is often measured using PUE, or power usage effectiveness: the ratio of total facility power, including cooling, lighting and so on, to the power used by the IT equipment alone.
POWER USAGE EFFECTIVENESS = total facility power / IT equipment power.
The theoretical ideal is a PUE of 1.0, meaning that every watt the facility draws is consumed by the IT hardware itself. Given the requirement for cooling and environmental controls to keep ambient conditions favourable for the IT equipment, it’s not uncommon to find a PUE closer to 2.0, where every watt of IT load carries another watt of overhead. Not uncommon, but not ideal either.
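To make the arithmetic concrete, the sketch below shows how PUE might be calculated from metered readings. The figures and meter categories are hypothetical, purely for illustration:

# Hypothetical power readings in kilowatts; in practice these would
# come from the facility's metering system.
it_load_kw = 500.0     # servers, storage and network equipment
cooling_kw = 380.0     # CRAC units, chillers, pumps
ancillary_kw = 70.0    # lighting, UPS losses and other overheads

total_facility_kw = it_load_kw + cooling_kw + ancillary_kw
pue = total_facility_kw / it_load_kw

print(f"PUE = {pue:.2f}")  # 950 / 500 gives a PUE of 1.90

Read the other way, a PUE of 1.90 means that for every watt delivered to the IT equipment, the facility draws nearly another watt in overheads.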
Cooling and environmental controls
Continual reliable operation is paramount in a data centre as any downtime can spell disaster. Hardware is susceptible to overheating if adequate cooling and ventilation aren’t in place and even a few degrees can make the difference between business as usual and catastrophic failure. If the installation is fortunate enough to escape immediate failure, it can still suffer delayed malfunction as fragile electronic componentry can break down weeks after an overheating incident.
Factor in loss of business, hardware replacement and employee underutilisation during downtime and it’s easy to see that the costs soon add up and why operators are so keen to avoid it.
How the big guys do it
Some of the world’s bigger data centre operators, including Google and Facebook, have been busy publishing information on their own centres’ energy-efficiency initiatives, temperature control and other cooling methods. Cynics might suggest that this transparency is a PR exercise, but if there are lessons to be learned, why not take heed?
Google suggests that most data centres are probably running cooler than they actually need to. They cite the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and IT equipment manufacturers as expert opinion and suggest a slight temperature increase will not only have no detrimental effect on equipment, but will deliver an immediate measurable energy saving.
The company also implements a design ethos comprising thermal modelling and airflow controls. The modelling identifies potential data centre ‘hotspots’, so that equipment can be physically laid out in a fashion that delivers even temperatures across the installation. Methods of airflow control that require no energy, such as plastic curtains and blanking panels, are utilised to ensure adequate segregation between hot and cool areas. Where possible, Google uses water for cooling, rather than chillers.
The Open Compute Project, which has made public Facebook’s so-called ‘secret data centre recipes’, is a bid to encourage data centre development that is more efficient from both a cost and power perspective. After 12 months redesigning their server specs, Facebook worked with manufacturers to achieve a 38% increase in efficiency and a product they maintain costs 24% less than the industry standard. They assert a PUE of 1.07 at the Prineville, Oregon data centre. Google claims between 1.06 and 1.12 across its centres, dependent on the interpretation of total facility power usage (it claims it uses a more stringent approach than others).
Design
So, while not every data centre is on a par with Google, Facebook or Amazon, lessons can be learned from the way the big guys address problems. The same basic design principles apply, and the problems they are facing today are the problems of the future for smaller-scale projects, particularly if you consider the projected lifespan of a data centre.
To assist with the design process, professional organisations such as ASHRAE make a wealth of information available to members including a comprehensive selection of publications specifically for the datacoms sector. These incorporate guides on best practice design for energy efficiency in data centres, power trends and cooling applications and real-time energy consumption measurements. See www.ashrae.org/bookstore for more.
Monitor, monitor, monitor
The importance of monitoring really can’t be overemphasised and, as the size and scope of data centres increase, visibility from a remote location is imperative. With a simple monitoring system in place, changes to environmental conditions that threaten system operation can be identified before a crisis unfolds, and checked via a web browser from any location.
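As an illustration of how simple such a system can be, here is a minimal sketch of threshold-based alerting in Python. The sensor endpoint, JSON format and alert mechanism are assumptions made for the example, not features of any particular product:

import json
import time
import urllib.request

SENSOR_URL = "http://10.0.0.50/api/temperature"  # hypothetical sensor endpoint
ALERT_THRESHOLD_C = 27.0  # upper end of ASHRAE's recommended inlet range
POLL_INTERVAL_S = 60

def read_temperature_c() -> float:
    """Fetch the current rack inlet temperature from the (assumed) sensor API."""
    with urllib.request.urlopen(SENSOR_URL, timeout=5) as response:
        return float(json.load(response)["celsius"])

while True:
    temperature = read_temperature_c()
    if temperature > ALERT_THRESHOLD_C:
        # A production system would send an email, SMS or SNMP trap here.
        print(f"ALERT: inlet temperature {temperature:.1f} degC "
              f"exceeds {ALERT_THRESHOLD_C} degC")
    time.sleep(POLL_INTERVAL_S)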
Power usage monitoring is also useful and can provide valuable design input, particularly for data centre upgrade projects. Many solutions offer everything from continual data logging and report generation, giving a snapshot of the situation as it stands, through to enabling corrective measures.
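A continual logger need not be complicated either. The sketch below, with stubbed meter readings and an illustrative CSV layout, records facility and IT power alongside a spot PUE figure that later reports can trend over time:

import csv
import time
from datetime import datetime, timezone

LOG_FILE = "power_log.csv"
SAMPLE_INTERVAL_S = 300  # one sample every five minutes

def read_meters() -> tuple[float, float]:
    """Return (facility_kw, it_kw); stubbed here, a real logger would poll the meters."""
    return 950.0, 500.0

with open(LOG_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        facility_kw, it_kw = read_meters()
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            facility_kw,
            it_kw,
            round(facility_kw / it_kw, 3),  # spot PUE for trend reporting
        ])
        f.flush()  # make each sample visible to reporting tools immediately
        time.sleep(SAMPLE_INTERVAL_S)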
Systems incorporating redundant power switching provide a reliable method of automatically switching equipment to a backup power source, ensuring critical network devices are always up and running.
There was once a time when the power draw of a data centre would be the least of a contractor’s concerns; as long as the install went according to plan, it was a job well done. These days, everyone on the project has an interest in keeping power costs down, as the crossover between roles blurs the lines of responsibility. At the very least, it makes sense to have an understanding of the factors that influence overall project power costs.