Engineers boost cloud computing efficiency
Approach can improve performance by up to 20 per cent
Computer scientists at the University of California, San Diego, and Google have found an approach that allows the massive infrastructure powering cloud computing to run as much as 15 to 20 per cent more efficiently.
Computer scientists looked at a range of Google Web services, including Gmail and search, using a unique approach to develop their model.
Their first step was to gather live data from Google's warehouse-scale computers as they were running in real time.
Their second step was to conduct experiments with data in a controlled environment on an isolated server.
The two-step approach was key, said Lingjia Tang and Jason Mars, faculty members in the Department of Computer Science and Engineering at the Jacobs School of Engineering at UC San Diego.
"These problems can seem easy to solve when looking at just one server," said Mars. "But solutions do not scale up when you're looking at hundreds of thousands of servers."
The work is one example of the research Mars and Tang are pursuing at the Clarity (Cross-Layer Architecture and Runtimes) Lab at the Jacobs School, their newly formed research group.
"If we can bridge the current gap between hardware designs and the software stack and access this huge potential, it could improve the efficiency of web service companies and significantly reduce the energy footprint of these massive-scale data centres," Tang said.
Researchers gathered 65K samples of data every day over a three-month span from one of Google's clusters of servers, which was running Gmail.
When they analysed that data, they found that the application was running significantly better when it accessed data located nearby on the server, rather than in remote locations.
They also knew that the data they gathered was noisy because of other processes and applications running on the servers at the same time, so they used statistical tools to cut through the noise. However, more experiments were needed.
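The article does not say which statistical tools the researchers used, but the idea of cutting interference spikes out of noisy performance samples can be sketched with something as simple as a sliding median filter. The sample values and the filter choice below are illustrative assumptions, not the study's actual method:

```python
import statistics

def denoise(samples, window=5):
    """Smooth noisy performance samples with a sliding median filter.

    Brief spikes (e.g. interference from co-running jobs) are discarded,
    while the underlying level of the signal is preserved.
    """
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window // 2)
        hi = min(len(samples), i + window // 2 + 1)
        smoothed.append(statistics.median(samples[lo:hi]))
    return smoothed

# Hypothetical latency samples: the spikes (95, 88) model interference
# from other applications sharing the server.
noisy = [10, 11, 95, 10, 12, 11, 88, 10, 11, 10]
print(denoise(noisy))
```

With the median filter applied, the two spikes are replaced by values close to the surrounding baseline, which is why a median (rather than a mean) is a common choice here: a single outlier cannot drag it away from the typical value.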
Computer scientists then went on to test their findings on one isolated server, where they could control the conditions in which the applications were running.
During those experiments, they found that data location was important, but that competition for shared resources within a server, especially caches, also played a role.
"Where your data is versus where your apps are matters a lot," Mars said. "But it's not the only factor."
Servers are equipped with multiple processors, which in turn can have multiple cores.
A bank of random-access memory is assigned to each processor, so data stored in that local memory can be accessed quickly. However, if an application running on one processor needs data held in another processor's memory, the application is going to run more slowly, and this is where the researchers' model comes in.
"It's an issue of distance between execution and data," Tang said.
Based on these results, computer scientists developed a novel metric, called the NUMA (non-uniform memory access) score, that can determine how well random-access memory is allocated in warehouse-scale computers.
Optimising the NUMA score can lead to 15 to 20 per cent improvements in efficiency.
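The article does not define how the NUMA score is computed. One plausible shape for such a metric, sketched here purely as an assumption, is the fraction of an application's memory accesses that are served from local rather than remote RAM:

```python
# Hedged sketch of what a "NUMA score" could look like. The actual metric
# from the UC San Diego/Google study is not given in this article; this
# local-access fraction is an illustrative assumption.
def numa_score(local_accesses, remote_accesses):
    """Return the fraction of memory accesses served from local RAM.

    1.0 means every access was local (ideal placement); values near 0
    mean most data lived on a remote NUMA node.
    """
    total = local_accesses + remote_accesses
    return local_accesses / total if total else 1.0

print(numa_score(800, 200))  # mostly local placement
```

A scheduler that migrates data (or threads) to raise this kind of score would be reducing exactly the execution-to-data distance the researchers describe.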
Improvements in the use of shared resources could yield even bigger gains – a line of research Mars and Tang are pursuing in other work.
"What the Scottish independence referenda could mean for engineers and engineering on both sides of the border"
- What to Specialise in Electronics Engineering?? [03:02 am 03/04/14]
- Britain to have just one remaining coal pit by the end of 2015 [01:11 am 03/04/14]
- LV Generator Star point earthing - UK [08:35 pm 02/04/14]
- East West Rail - the Oxford to Bedford route [07:33 pm 02/04/14]
- Small nuclear power [06:06 pm 02/04/14]
The essential source of engineering products and suppliers.
Tune into our latest podcast