
Computer servers’ climate impact lessened by elegant algorithm
An elegant new algorithm developed by Danish researchers could significantly reduce the resource consumption of the world's computer servers.
One of the flipsides of the world's burgeoning internet use is its climate impact, driven by the massive amount of electricity consumed by computer servers. Studies have shown that global data centres consume more than 400 terawatt-hours of electricity annually, accounting for approximately 2 per cent of the world's total greenhouse gas emissions and currently equalling all emissions from global air traffic.
According to the Danish Council on Climate Change, a single large data centre consumes the equivalent of 4 per cent of Denmark's total electricity consumption. Furthermore, data centre electricity consumption is expected to double by 2025, resulting in increased emissions. Clearly, the green transition in IT is an urgent matter.
Professor Mikkel Thorup and two fellow researchers from the University of Copenhagen's Department of Computer Science have now perfected an algorithm that addresses a fundamental problem in computer systems, namely that some servers become overloaded while others have capacity to spare, and does so many times faster than current methods.
"We have found an algorithm that removes one of the major causes of overloaded servers once and for all. Our initial algorithm was a huge improvement over the way industry had been doing things, but this version is many times better and reduces resource usage to the greatest extent possible. Furthermore, it is free to use for all," said Thorup, who developed the algorithm alongside department colleagues Anders Aamand and Jakob Bæk Tejs Knudsen.
Previously, Thorup had been part of a group of researchers behind a similar algorithm that addressed aspects of the server load problem by producing a groundbreaking 'recipe' to streamline computer server workflows. The approach saved considerable energy and resources, and tech giants including Google and the online video platform Vimeo implemented the algorithm in their systems. Vimeo reported that the algorithm had reduced its bandwidth usage by a factor of eight.
The new, more refined algorithm addresses the problem of servers becoming overloaded as they receive more requests from clients than they have the capacity to handle. This happens as users pile in to watch a certain Vimeo video or Netflix film. As a result, systems often need to shift clients around many times to achieve a balanced distribution among servers.
The mathematical calculation required to achieve this balancing act is extraordinarily difficult, as up to a billion servers can be involved in the system. The system is also in constant flux, as new clients and servers join and leave. This leads to congestion and server breakdowns, as well as resource consumption that adds to the overall climate impact.
"As internet traffic soars explosively, the problem will continue to grow. Therefore, we need a scalable solution that doesn't depend on the number of servers involved. Our algorithm provides exactly such a solution," Thorup explained.
According to US IT firm Cisco, internet traffic is projected to triple between 2017 and 2022, with online video expected to account for 82 per cent of all internet traffic next year.
The new algorithm ensures that clients are distributed as evenly as possible among servers, by moving them around as little as possible and by retrieving content as locally as possible.
For example, to ensure that client distribution among servers is balanced so that no server is more than 10 per cent more burdened than the others, the old algorithm could deal with an update by moving a client one hundred times. The new algorithm reduces this to 10 moves, even when there are billions of clients and servers in the system. Mathematically stated: if the balance is to be kept within a factor of 1+1/X, the number of moves per update drops from roughly X² to X, which in general cannot be improved upon.
In the abstract of the research paper – available free online – the scientists describe the quandary of dynamic load balancing as the need to "distribute balls into bins in an environment where both balls and bins can be added and removed".
Elaborating on this, the abstract continues: "We want to minimise the maximum load of any bin, but we also want to minimise the number of balls and bins that are affected when adding or removing a ball or a bin. We want a hashing-style solution where we, given the ID of a ball, can find its bin efficiently."
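To make the balls-and-bins formulation concrete, the sketch below shows the general idea of consistent hashing with a cap on each bin's load, written in Python. It is illustrative only: the class name, the SHA-256 ring hash and the clockwise forwarding rule are this sketch's own choices rather than the paper's, and it handles additions only, whereas the researchers' result concerns keeping the number of moves small when balls and bins are also removed.

```python
import hashlib
import math
from bisect import bisect_left, insort


def _h(key: str) -> int:
    """Hash an ID to a point on the ring (an illustrative choice)."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)


class BoundedLoadRing:
    """Toy consistent hashing with a load cap: no server (bin) may hold
    more than ceil((1 + 1/X) * average) clients (balls), so no bin ends
    up more than the chosen slack above the average load.

    This sketches the general technique only. The paper's contribution
    is handling updates (balls and bins arriving and leaving) with few
    moves; this toy version supports additions only.
    """

    def __init__(self, x: float = 10.0):
        self.slack = 1.0 + 1.0 / x   # X = 10 -> at most 10% above average
        self.ring = []               # sorted (hash, bin_id) pairs
        self.load = {}               # bin_id -> number of balls held
        self.where = {}              # ball_id -> bin_id

    def _cap(self) -> int:
        """Per-bin capacity given the current ball and bin counts."""
        balls = len(self.where) + 1          # include the incoming ball
        bins = max(1, len(self.load))
        return max(1, math.ceil(self.slack * balls / bins))

    def add_bin(self, bin_id: str) -> None:
        insort(self.ring, (_h(bin_id), bin_id))
        self.load[bin_id] = 0

    def add_ball(self, ball_id: str) -> None:
        """Walk clockwise from the ball's hash to the first bin with
        spare capacity: the 'bounded load' forwarding rule."""
        cap = self._cap()
        i = bisect_left(self.ring, (_h(ball_id), ""))
        for step in range(len(self.ring)):
            _, bin_id = self.ring[(i + step) % len(self.ring)]
            if self.load[bin_id] < cap:
                self.load[bin_id] += 1
                self.where[ball_id] = bin_id
                return
        raise RuntimeError("every bin is at capacity")


# Usage: 5 servers, 1,000 clients, loads kept within 10% of the average.
ring = BoundedLoadRing(x=10.0)
for s in range(5):
    ring.add_bin(f"server-{s}")
for c in range(1000):
    ring.add_ball(f"client-{c}")
print(sorted(ring.load.values()))   # every load is at most 220 = 1.1 * 200
```

Forwarding overflow clockwise is what preserves the lookup property the abstract asks for: given a ball's ID, its bin can always be found by hashing the ID and walking forward until the ball is encountered.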
The researchers expect major IT companies to deploy their new algorithm immediately, given that many of them have already enthusiastically implemented the previous algorithm.
A free version of the researchers' article – 'Load Balancing with Dynamic Set of Balls and Bins' – is available to read online.
A 2016 post on the Vimeo Engineering Blog describes the implementation of Thorup's original algorithm.