EU-funded researchers are developing ways to reduce data centres' environmental impact.
Data centres worldwide – many of them providing cloud storage and services – produce around half the volume of emissions of the global aviation industry and more than the total emissions of the Netherlands.
With tens of thousands of processors and storage units to keep cool, a data centre uses roughly as much energy on cooling its equipment as on powering it, making the infrastructure behind cloud services ripe for energy-efficiency improvements.
“For every kilowatt of energy consumed by the data centre, almost another kilowatt is dissipated as heat,” says Dr Massimo Bertoncini at Engineering Ingegneria Informatica in Italy.
“With ever-larger data centres being built around the world to meet rising demand for digital services, powering and cooling them is an increasingly significant environmental issue.”
Improving data-centre energy efficiency will not only help the environment but also makes financial sense for operators: running a large data centre can cost more than €10m a year for electricity alone.
“We've received a lot of interest from industry,” says Bertoncini. “First we are implementing this solution at our data centres and then will start to offer it to clients.”
Bertoncini coordinated a team of researchers who spent 30 months tackling the challenge under the “Green active management of energy in IT service centres” (GAMES) project, supported by €3m in funding from the European Commission.
Their work on energy efficiency has helped cut energy consumption at the data centres where it has been implemented so far by more than 20 per cent and is about to be applied commercially.
The GAMES consortium focussed on reducing the energy consumption of the IT infrastructure, taking the view that any improvement in energy efficiency at IT infrastructure level will automatically reduce energy consumption at the cooling and facility subsystem level by the same amount.
“For data centres to become more efficient, it is essential to know how energy is being consumed. Our focus was therefore to develop effective monitoring solutions that allow data centre performance and processes to be adapted in real time,” says Bertoncini.
The key to their approach was to investigate and deploy technologies and methodologies for measuring the energy consumption of IT infrastructure in more detail than previously possible, all the way down to individual server level.
Their solution is based on a mixed approach, combining real-time sensing and measurement with intelligent processing for inferring predictive energy consumption models.
The approach takes into account the trade-off between energy-efficiency optimisation and the needs of business – such as Service Level Agreements (SLAs) and Quality of Service (QoS) guarantees.
Data-centre efficiency is measured by Power Usage Effectiveness (PUE): the ratio of the total power used by the facility to the power delivered to its IT equipment. An ideal PUE would be 1, while the industry average is about 1.83 to 1.92.
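As a simple illustration of the metric (the figures below are examples, not measurements from the project), PUE can be computed as:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,830 kW in total, of which 1,000 kW reaches the IT gear,
# sits at the lower end of the average range quoted above.
print(round(pue(1830, 1000), 2))  # 1.83
```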
The GAMES team deployed and tested their energy monitoring and real-time adaptation technology at two large and already relatively energy-efficient data centres, located at Pont Saint Martin in Italy and Stuttgart in Germany, representing two very different types of data centre.
At Engineering's Pont Saint Martin site, used mostly for legacy application hosting services, the technology was able to improve PUE from 1.35 to 1.25 – a considerable energy saving.
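A back-of-the-envelope sketch of what that PUE drop means (assuming, for illustration, a constant IT load): the fraction of total facility power saved is the relative change in PUE.

```python
def facility_saving_fraction(pue_before: float, pue_after: float) -> float:
    """Fraction of total facility power saved for the same IT load,
    when PUE improves from pue_before to pue_after."""
    return (pue_before - pue_after) / pue_before

# Pont Saint Martin: PUE improved from 1.35 to 1.25
saving = facility_saving_fraction(1.35, 1.25)
print(f"{saving:.1%}")  # about 7.4% of total facility power
```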
At the Stuttgart site, a high performance computing centre operated by the University of Stuttgart, the GAMES system resulted in similar improvements despite the different technology and applications of the centre.
“We showed that this approach works across technologies and at different data centres designed for performing different tasks,” says Bertoncini. “It enables data centre operators to determine the best practices at each site to reduce power consumption without impacting performance.”
At one site it may make sense to lower the frequency of the running processors, while at another the optimal approach might be to transfer computational load between servers and run all of them at 80 per cent of capacity, rather than fewer at 100 per cent, says Bertoncini. Similarly, adaptive technology allows underused servers to be dynamically powered down when not needed, he adds.
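A minimal sketch of the consolidation idea Bertoncini describes, assuming a greedy packing heuristic and an illustrative 80 per cent utilisation cap (this is not the project's actual algorithm):

```python
def consolidate(loads, cap=0.8):
    """Greedily pack per-task CPU loads (expressed as fractions of one server)
    onto as few servers as possible, each capped at `cap` utilisation.
    Servers left idle can then be powered down."""
    servers = []  # current utilisation of each active server
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= cap:
                servers[i] += load  # fits on an already-active server
                break
        else:
            servers.append(load)  # no room anywhere: power up another server
    return servers

# Five tasks that would otherwise keep five servers lightly loaded
active = consolidate([0.5, 0.4, 0.3, 0.2, 0.1])
print(len(active), [round(u, 2) for u in active])  # 2 [0.8, 0.7]
```

With the load packed onto two servers, the remaining three could be powered down, trading a little headroom for a large standby saving.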
“There is always a trade-off between energy efficiency and performance. Essentially, the more performance required, the more energy will be used. The key is finding the right balance to provide the best service at the lowest energy cost,” says Bertoncini.
Another key outcome of the project was the study and categorising of families of applications exhibiting common energy-consumption behaviour patterns.
This categorisation, which the team has made publicly available, enabled them to associate each family of applications with a set of best practices and optimised hardware and software adaptation actions, achieving the best possible trade-off among SLAs, performance and energy consumption.