HP has stirred up strong feelings by proposing that IT budget models should change in line with technological developments; E&T reports.
In the relatively brief span of its existence, IT has proven itself to be a cyclical business: the current trend towards re-centralisation raises organisational issues, some of whose solutions may be found in the lessons of IT history from the 1970s and 1980s. Parallel to the trend towards server and storage virtualisation using large processing units, there is a drive toward utility computing, which itself harks back to the time-sharing services of the central systems of yore. Utility computing is most prevalent in smaller enterprises: its adoption is driven by the recognition that IT is not a core competence in most cases, and by a corresponding desire to avoid the logistical - and financial - complexities wrought by wide-scale virtualisation.
This requirement has not been overlooked by the major vendors of virtualisation technology and of utility services, with HP (Hewlett-Packard) in particular having made it almost a cause célèbre to apprise its customers of the potential benefits of updating their whole approach to IT financing to exploit the growing opportunities for flexible procurement and so-called ‘pay-as-you-go computing’.
Sweating IT assets
Introducing new procurement models on top of technological innovation does not necessarily represent a radically new proposition for seasoned IT vendors - it has proved an effective way of overcoming customer resistance to adopting latest-generation offerings, where customers claim to be contentedly ‘sweating’ their existing technology assets. But in a tough economic market, where the way companies procure is subject to more stringent scrutiny in the name of regulatory compliance, vendors are having to invest almost as much R&D into the way they sell IT as into IT itself.
“Most companies are running on yearly budget allocation taken from the revenue of the business unit rather than doing cost allocation based on each unit’s usage,” explains Jean-Marc Chevrot, HP’s ‘CTO for adaptive infrastructure as a service, enterprise services’. According to Chevrot: “Companies need to align their budget planning based on consumption of IT rather than the annual revenues.”
HP is highlighting two related problems here. First, IT budgets have traditionally been set annually - or often for longer - on the basis of estimated consumption of resources, in particular server capacity, storage, and network bandwidth. Second, the trend towards distributed computing that prevailed through the 1990s and the first half of the 2000s has meant that each business unit or department has been allocated its own physical resources, dedicated largely to its own needs.
Price = Performance?
This model is largely perceived to have worked reasonably well, but HP now contends that it must change, with IT being allocated coherently across the whole enterprise and decisions taken by the CIO in conjunction with other directors. Only then, HP claims, can the full statistical power of virtualisation to make optimal use of resources be harnessed, slashing the over-provisioning that has occurred in the past at a departmental level to ensure that peak demand can be met not just immediately, but until the next allocation of resources, which might be a year (or more) away.
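HP's ‘statistical power’ argument can be made concrete with a toy calculation (the department count, load figures, and random model below are invented purely for illustration, not drawn from HP's material): departments that each size for their own peak collectively buy far more capacity than a shared pool sized for the peak of the combined load, because those peaks rarely coincide.

```python
import random

random.seed(42)
DEPARTMENTS = 5
HOURS = 24 * 30  # one month of hourly samples

# Hypothetical hourly CPU demand per department (arbitrary units).
loads = [[random.uniform(10, 100) for _ in range(HOURS)]
         for _ in range(DEPARTMENTS)]

# Siloed model: each department provisions for its own observed peak.
siloed_capacity = sum(max(dept) for dept in loads)

# Pooled model: one shared pool sized for the peak of the *summed* load.
pooled_capacity = max(sum(hour) for hour in zip(*loads))

print(f"siloed: {siloed_capacity:.0f}")
print(f"pooled: {pooled_capacity:.0f}")
print(f"saving: {1 - pooled_capacity / siloed_capacity:.0%}")
```

Because the five departments' demand spikes are uncorrelated in this sketch, the pooled peak comes out well below the sum of the individual peaks - the gap is exactly the over-provisioning HP is pointing at.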
From a purely technological perspective, this notion has force. Most IT managers will have experience of situations where certain departments or applications within an enterprise have, for whatever reason, argued a case for ‘special needs’: such needs usually come down to a demand for extra redundant capacity in terms of processing, memory and storage provision, and/or for segregation from other applications/systems which, it is deemed, may interfere with their smooth running. The message in colloquial terms is: virtualisation? Absolutely - but not in my server space.
In keeping with this strategic contention, HP beefed up its virtualisation product portfolio with the introduction of its Converged Infrastructure strategy and associated products at the end of last year (2009). Its avowed objective is to consolidate servers, storage, and network connections within a common fabric so that all of them - bandwidth included - can be allocated on the fly (i.e., provisioned only as and when needed) - a kind of ‘internal Cloud’, if you like.
Obviously, Converged Infrastructure reflects HP’s own vested interests; but it is not only for this reason that its precepts will be called into question, or criticised on more objective grounds. Many enterprises simply do not agree with HP’s assessment of the need for a new approach to IT provisioning, with some suggesting it runs ahead of the realpolitik of IT management and deployment.
“We know, looking with analyst Gartner across all sectors, that - typically - only 20 per cent of the enterprise IT estate is virtualised,” says John Serle, past president of SOCITM (Society of Information Technology Management), the professional association for ICT managers in the public sector, and editor of its annual ‘IT Trends’ publication.
Serle attributes the relatively slow adoption of virtualisation in part to the fact that enterprises want to recoup their investment in existing systems before replacing them with newer models incorporating virtualisation, whether at the server or storage level. “Virtualisation is not a ‘silver bullet’,” Serle declares. “It does not run all your applications, and is not so cheap that you can afford to throw away all that you have got.”
Serle agrees, however, that enterprises will move increasingly towards virtualisation for those applications that are retained on in-house systems. There is also evidence that smaller enterprises are moving faster towards virtualisation, at both the server and desktop levels, and changing their approach to procurement in line with that, according to Dana Loof, executive vice president for global marketing at Pano Logic, a provider of desktop virtualisation solutions based on VMware software.
“Mid-sized organisations are more likely to exploit the benefits of ‘zero client computing’,” argues Loof. “They are much more willing to drastically change their computing models.” Pano favours ‘zero clients’, which sound somewhat like the dumb terminals of yore: lacking any storage or processing resource, they are dumber even than thin-client terminals, with all software executed centrally (see ‘Thin clients’ fat challenge’, E&T, Vol 4 #21).
This all points to a need for a cultural change, centralising the management as well as the deployment of IT, according to Steve Palmer, CIO at innovative UK local authority the London Borough of Hillingdon, and current SOCITM president. “The biggest challenges for me are the cultural changes of putting ICT at the heart of the business, aligning it with the business, and making sure that it is truly enabling,” he says.
A lot of this comes down to having a strong CIO driving change, Palmer believes.
There is then a need for a reappraisal of how resources such as servers and storage are procured and deployed right across the IT management hierarchy, according to Clive Longbottom, service director for business process analysis at UK IT research and analysis firm Quocirca. He reckons that although enterprises are capable of accepting the consequences of technological renewal and redirection, advancing thought processes often prove more of a challenge.
“In many ways, it is mainly a change of mindset that we need,” Longbottom avers.
“With extant IT projects, to give an example, everyone sits down, does the request for information - RFI - and the functional outlines, and so on, and then procures enough resources to meet the needs of the project; but as soon as virtualisation comes into it, we move away from a ‘one server per application’ mentality.”
Instead, Longbottom believes, “it becomes a matter of planning for base load, typical load, and peak load, and then ensuring that typical load is accounted for as the basis for resources available, and that base load and peak load are allowed for through ‘trading’ virtual resources in the data centre”.
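As a rough sketch of the three-tier planning Longbottom describes (the percentile thresholds, usage figures, and `plan_capacity` helper below are invented assumptions, not from Quocirca): size the shared pool for typical load, and treat the gap up to peak as burst capacity to be ‘traded’ with other workloads in the data centre.

```python
def plan_capacity(samples, typical_pct=0.50, peak_pct=0.95):
    """Return (base, typical, peak) load estimates from usage samples,
    taken as percentiles of the observed distribution."""
    ordered = sorted(samples)
    base = ordered[0]                                    # minimum ever seen
    typical = ordered[int(typical_pct * (len(ordered) - 1))]  # median-ish
    peak = ordered[int(peak_pct * (len(ordered) - 1))]        # near-worst case
    return base, typical, peak

# Hourly CPU demand for one application over a day (arbitrary invented units).
usage = [12, 15, 14, 18, 22, 35, 60, 72, 70, 68, 66, 71,
         69, 73, 75, 74, 70, 65, 50, 40, 30, 25, 20, 16]

base, typical, peak = plan_capacity(usage)
# Provision the pool for typical load; the burst up to peak is met by
# borrowing idle capacity from other workloads rather than buying hardware.
print(f"base={base} typical={typical} peak={peak} burst={peak - typical}")
```

The design point is that only the ‘typical’ figure translates into owned resources; base and peak set the bounds for what can safely be lent out or borrowed.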
Businesses will then be able to improve their utilisation, although Longbottom believes it is unrealistic for smaller enterprises to achieve the 80 per cent rates typically achieved on IBM mainframe platforms, where the broad principles of virtualisation have long been established.
Conversely, most Microsoft Windows-based data centres, where each server is dedicated to just one application, are running at below 15 per cent utilisation, so there is still plenty of scope for improvement.
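A back-of-envelope calculation shows what those utilisation figures imply for server counts. The 15 per cent starting point comes from the article; the 100-server estate and the modest 60 per cent post-virtualisation target are illustrative assumptions (deliberately well short of the mainframe's 80 per cent).

```python
import math

def consolidation(servers, current_util, target_util):
    """Hosts needed after pooling, assuming the total useful work
    stays the same and is simply packed onto fewer machines."""
    work = servers * current_util        # aggregate useful load
    return math.ceil(work / target_util)  # hosts needed at target utilisation

before = 100  # hypothetical one-app-per-server Windows estate
after = consolidation(before, current_util=0.15, target_util=0.60)
print(f"{before} hosts at 15% -> {after} hosts at 60%")
```

Even without reaching mainframe-class utilisation, the arithmetic points to a roughly four-to-one consolidation - which is the ‘scope for improvement’ in concrete terms.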
A similar change in mindset is required for the adoption of utility computing, although in this case we have been here before, as SOCITM’s John Serle points out. The problem is that, this time round, planning consumption of IT as a utility could be even harder because of the volatility of Web-based applications.
“Utility computing is a reinvention of old bureau-type services,” says Serle. Back in the 1970s, businesses were often poor at predicting demand for such services, with costs running out of control.
Postulating IT payout is a perilous pastime; attempting to actively influence its future shape is possibly even more pitfall-prone. Yet it seems probable that fresh thinking on such issues will encourage more procurement of IT as a service, and a growing market in the 2010s for the ‘software boutique’, whereby a variety of applications become available ‘on tap’ as a service. This will transfer the problem of virtualisation to specialist providers, who understand how to balance high utilisation with the need to smooth over those pesky peaks in demand.