
Autonomic computing: the route to systematic well-being?

In 2001 an IBM manifesto called for computers that could regulate their own vital functions - just as the human body does. Nearly a decade on, are we any closer to autonomic computing? E&T finds out.

The human body has given computer scientists plenty of pointers on how computers can be managed, and physiologists’ discoveries about the workings of the body continue to inform the way systems are made more reliable and efficient. The discipline that seeks to learn most from our bodies’ ‘operating systems’ is autonomic computing - a concept that marries human physiology to computer technology in such a way that even the most complex IT infrastructures become fully manageable.

The impetus behind autonomic computing is the idea that a computer network could behave in much the same way as the autonomic nervous system in the human body. This is the part of human physiology that handles involuntary functions such as heartbeat, digestion, breathing, and urination. The body requires no conscious intervention from the brain for these functions - they operate of their own accord. Wouldn’t it be useful to configure computers to manage themselves in a similar way?

The autonomic computing concept originated in a 2001 manifesto from IBM. The thinking was simple. With the growing complexity of computing systems, why not let the computers sort out things for themselves, just as the human nervous system makes automatic responses? As the manifesto itself points out: “The information technology boom can only explode for so long before it collapses on itself in a jumble of wires, buttons, and knobs. IBM knows that increased processor power, storage capacity and network connectivity must be answerable to some kind of systemic authority if we expect to take advantage of its potential. The human body’s self-regulating nervous system presents an excellent model for creating the next generation of computing.”

For its time, this was a far-reaching vision, and, despite its travails, IBM retains the capability to come up with highly influential propositions. However, many knowledgeable computing professionals could be forgiven for never having heard of autonomic computing in the nine years since. So how has the concept delivered on the initial 2001 vision?

In some respects, it looks like the concept is still in the theoretical phase. The phrase ‘autonomic computing’ has not penetrated the IT lexicon in the same way as developments like cloud computing, service-oriented architecture (SOA), and virtualisation - although it does have a bearing on the latter, as we shall see.

However, Steve Furber, ICL professor of computer engineering in the School of Computer Science at the University of Manchester, insists that aspects of the technology have become intrinsic to the technology that now surrounds us.

“The Internet is a prime example of this,” says Furber. “It copes with disruption, and it re-routes traffic automatically. While this is not quite autonomic computing in the strictest sense, it does demonstrate how the concept is not as alien as it might seem.” Furber is working on a project that aims to replicate aspects of human brain function within computer networks.

Product deployment

While ‘autonomic computing’ may not have achieved the ubiquity of other new buzzwords, IBM itself has gone on to employ many autonomic techniques in its own enterprise products. According to Omer Rana, professor of performance engineering at Cardiff University, IBM’s thinking was to be welcomed. “That was really a vision to support these ideas in large-scale systems. No one single person can understand the complexity of such systems.”

And, while Rana accepts that IBM benefits from having coined the phrase, he thinks it has been a helpful description: “It’s a reasonable term,” he allows. “We can all associate with the autonomic nervous system.”

According to Matt Ellis, IBM’s vice president of autonomic computing, meanwhile, this is not because the concept has disappeared from view, but because so much of the thinking has now appeared in IBM’s products.

“We’re now working at bringing many of the concepts of autonomic computing to IBM products,” Ellis says. “We’ve embedded and integrated management, and are working closely with the IBM Smart Business team to bring forward ideas on how to implement dynamic infrastructure. This is one of the closest examples of the way that autonomic computing has entered the mainstream. [The] client sets up the Smart Business cube and management of that cube is all automated.”

Ellis believes there is a psychological element to this, and it represents one area where computing differs from the human body. The human autonomic nervous system works precisely because the body knows no other way of operating; that is not the case with computer systems, where a human manager has to allow the networks to manage themselves - and that requires a huge level of trust. As Ellis points out, the idea of automated updates does take some getting used to, although most managers are quite happy with the concept.

A crucial element of autonomic computing, though, is that it is not designed to eliminate human contact; it is designed to handle tasks that are too complex for human managers to deal with - or, at least, to deal with efficiently. Ellis says this is what happened with the DB2 relational database management system, one of the first products to use the technology, which gained a self-tuning memory manager back in 2002 - not long after the original manifesto. This enabled the database to respond to changes in workload by adjusting memory and buffer pools to improve performance. Other areas where autonomic computing has been incorporated into IBM’s portfolio include predictive analytics; IBM showed its keen interest in this area when it bought data-mining company SPSS in 2009.
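IBM has not published the self-tuning memory manager’s internals here, but the principle Ellis describes - monitoring the workload and reallocating memory between buffer pools - can be sketched as a simple feedback loop. The pool names, sizes and threshold below are illustrative assumptions, not DB2 parameters.

```python
# A minimal sketch in the spirit of a self-tuning memory manager:
# monitor buffer-pool hit ratios, then shift memory from the pool
# with the most headroom to the pool that is missing most often.
# Pool names, sizes and the 0.05 threshold are invented for the example.
import random

POOLS = {"data": 512, "index": 256, "sort": 256}  # sizes in MB
STEP_MB = 32                    # memory moved per tuning cycle
TOTAL_MB = sum(POOLS.values())

def hit_ratio(pool: str) -> float:
    """Stand-in for real instrumentation (logical vs physical reads)."""
    return random.uniform(0.7, 1.0)

def tune_once() -> None:
    ratios = {p: hit_ratio(p) for p in POOLS}
    worst = min(ratios, key=ratios.get)   # pool missing most often
    best = max(ratios, key=ratios.get)    # pool with most headroom
    if ratios[best] - ratios[worst] > 0.05 and POOLS[best] > STEP_MB:
        POOLS[best] -= STEP_MB
        POOLS[worst] += STEP_MB
        print(f"moved {STEP_MB}MB from '{best}' to '{worst}': {POOLS}")
    assert sum(POOLS.values()) == TOTAL_MB  # total budget is conserved

for _ in range(5):  # a real manager would run this loop continuously
    tune_once()
```

The point of the loop is that no administrator sets the pool sizes; the system converges on them itself, which is the essence of the self-optimising behaviour Ellis describes.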

Predictive analytics

Ellis says that IBM has introduced predictive analytics to the IBM Tivoli range to help deal with one of the biggest challenges now facing IT management - virtualisation. “The challenge is looking at the workload of virtual machines, and whether performance can be maintained. This is something that is easier with physical machines, but in virtualisation there is a variety of different factors to deal with - issues such as congestion, memory, I/O, and storage,” he says.

This is where elements of autonomic computing come into play: they are incorporated into the predictive analytics within Tivoli, where they play a part in capacity planning. In these virtualised environments, autonomic computing works by identifying trends within the system so that a failure can be flagged before it causes an overall system dropout - factors such as a poorly-performing server or a memory failure can have an impact on performance.
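Tivoli’s analytics are proprietary, but the idea Ellis outlines - spotting a trend before it becomes a failure - can be illustrated with a simple linear extrapolation. The metric, the sample values and the 24-hour alert horizon below are assumed purely for the sake of the sketch.

```python
# Illustrative capacity-planning check: fit a least-squares line to
# recent resource samples and estimate when the host breaches capacity.
# The metric, samples and alert horizon are invented for this example.

def hours_until_breach(samples: list, capacity: float):
    """Projected hours until 'capacity' is hit, or None if usage is flat/falling."""
    n = len(samples)
    if n < 2:
        return None
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den  # growth per sampling interval (here, per hour)
    if slope <= 0:
        return None    # no upward trend, nothing to flag
    return (capacity - samples[-1]) / slope

# Hourly memory use (GB) on a virtual-machine host with 64GB of RAM:
usage = [40.0, 41.5, 42.8, 44.1, 45.9, 47.2]
eta = hours_until_breach(usage, capacity=64.0)
if eta is not None and eta < 24:
    print(f"warning: projected to exhaust memory in ~{eta:.0f} hours")
```

A production system would use far richer models, but even this crude fit shows how a trend can trigger action well before anything actually fails.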

Other software products are also using elements of autonomic computing. Omer Rana says they are making their presence felt in data centres, where self-management has become a genuine technical requirement, particularly in areas like power management - a field that HP has developed further.

Data centre controls

Autonomic computing concepts have perhaps been applied most conspicuously in virtualised data centres. Rana offers an example of how the autonomic approach can work. “Consider a hypothetical data centre that serves a number of companies,” he says. “One of these companies needs to use seven servers, while another organisation uses three servers - the former pays £5 an hour per server, while the latter pays £7 an hour. Now suppose a third organisation wants to use this data centre, and is prepared to pay £100 an hour; am I able to manipulate the capacity to allow this third organisation?”

On the face of it, this appears to be a simple dilemma to resolve; any service provider tasked with maximising revenue will be happy to accept customers paying over the odds.

The problem facing the data centre provider is that it will be tied to particular service-level agreements (SLAs), and the penalties it faces for non-delivery of services could negate the extra revenue it makes. But Rana says that this is exactly the sort of calculation autonomic computing was designed to do.

“It all depends what the SLAs say - autonomics will help me answer the problem, and will work out whether migrating the new company’s workload would still make money even if the SLAs are violated. Given that most data centres don’t operate at full capacity, autonomic computing techniques can be used to solve this as a multi-priority optimisation problem.”
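Rana’s scenario boils down to arithmetic an autonomic manager could run for every incoming request: weigh the newcomer’s revenue against the rent lost and the penalties incurred by breaking existing SLAs. A minimal sketch follows; the 10-server capacity, the newcomer’s server count and the penalty figures are assumptions for illustration, since in practice they would come from the contracts themselves.

```python
# A toy version of the SLA trade-off Rana describes: is it worth
# evicting existing tenants to admit a newcomer paying £100/hour?
# The capacity and penalty figures are invented; real values would
# come from the SLA contracts themselves.
from itertools import combinations

CAPACITY = 10  # servers in the hypothetical data centre

tenants = [
    {"name": "A", "servers": 7, "rate": 5.0, "penalty": 20.0},  # rate is £/hr per server
    {"name": "B", "servers": 3, "rate": 7.0, "penalty": 15.0},  # penalty is £/hr if its SLA is broken
]
newcomer = {"servers": 4, "rate": 100.0}  # flat £/hr offer

def cheapest_eviction() -> float:
    """Brute-force the cheapest set of evictions that frees enough servers."""
    free = CAPACITY - sum(t["servers"] for t in tenants)
    needed = newcomer["servers"] - free
    if needed <= 0:
        return 0.0  # spare capacity already covers the newcomer
    best = float("inf")
    for r in range(1, len(tenants) + 1):
        for combo in combinations(tenants, r):
            if sum(t["servers"] for t in combo) >= needed:
                cost = sum(t["rate"] * t["servers"] + t["penalty"] for t in combo)
                best = min(best, cost)  # lost rent plus SLA penalties
    return best

gain = newcomer["rate"] - cheapest_eviction()
print("admit newcomer" if gain > 0 else "decline newcomer", f"(net {gain:+.2f} £/hr)")
```

With more tenants the brute-force search would give way to a proper optimiser, but the decision logic - revenue in, penalties out, admit only if the balance is positive - is exactly the calculation Rana describes.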

But, like Ellis, Rana sees a problem with removing humans from the loop entirely: there is a psychological element involved too. For example, one of the challenges facing data centre managers is how to incorporate factors such as server density and power consumption; for Rana, there are levels of complexity here that would be an ideal fit for autonomic computing.
