The 'big iron' is back

Mainframes have been around for 60 years, yet the latest generation still offers advantages over other high-end computer platforms for many applications. So how does the mainframe manage to survive in a computing world increasingly defined by ultra-portable laptops and high-density blade servers?

The engineers who built early computers like ENIAC and UNIVAC couldn't have known that their intellectual progeny would have offspring of their own: they might have been amazed to learn that the most recent 'spiritual' descendant of their work - the IBM z10 - would be born some 60 years later. Yet the line between the first general-purpose electronic computer and 2008's mainframes is clear.

ENIAC was a monolith. Its sea of vacuum tubes hummed busily, drawing on 150kW of power, as it spat out artillery trajectories for the US Army. But it was a seminal machine. Its builders went on to found their own company, which was purchased by Remington Rand and sold its machines under the UNIVAC brand. Remington Rand merged with Sperry, and the combined firm later merged with Burroughs to become Unisys.

That company is still selling mainframes today; but in a world of high-performance PC servers and clustered computing, what role is there for the mainframe? Come to that, what now constitutes a mainframe, anyway?

Given the surge in mid-range computing power over the past decade, the question is most apposite. Symmetric multi-processing machines have been on the market for years, but haven't been accorded the mainframe moniker.

"There is no regular definition, but mainframes are definitely something about large, shared-everything symmetric multiprocessors," opines Doug Neilson, senior consultant in IBM's eServer group. More importantly than that, a mainframe is something that he'd consider running a customer's entire application base on; but then, he works for IBM.

Yet organisations can stake their entire application base on a single platform because mainframes are, above all, built to be reliable. Redundant architectures are a mainstay of the mainframe, which is built so that if one component dies, another keeps doing its job.

Lockstepping - in which each instruction is executed twice on separate processing units, with the results continuously compared so that any discrepancy is caught and the faulty unit sidelined - is common practice. Mainframe developers and designers leave little to chance.
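To make the idea concrete, here is a minimal sketch of the lockstep principle - purely illustrative, since real lockstepping happens in silicon, instruction by instruction; every name and figure below is invented.

```python
# Hypothetical sketch of lockstep execution: run the same operation on two
# redundant units and accept the result only when they agree. Real mainframes
# do this in hardware, instruction by instruction, with spares standing by.

def lockstep_execute(op, args, unit_a, unit_b, retries=1):
    """Run `op` on both units; trust the result only if the units agree."""
    for _ in range(retries + 1):
        result_a = unit_a(op, args)
        result_b = unit_b(op, args)
        if result_a == result_b:
            return result_a  # units agree: the result is trusted
        # A mismatch means one unit has faulted; retry here (real hardware
        # would swap in a spare processor and raise an alert instead).
    raise RuntimeError("lockstep mismatch persisted: unit failure suspected")

# Two 'processors' that evaluate the same instruction independently.
unit_a = lambda op, args: op(*args)
unit_b = lambda op, args: op(*args)

print(lockstep_execute(lambda x, y: x + y, (2, 3), unit_a, unit_b))  # -> 5
```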

Mainframes have always maintained a position as the workhorses of the back-office computing world. Neilson's 'shared everything' phrase is no empty slogan. Since the early days, mainframes have been monolithic machines. Later ones sported numerous high-end processors, but all of them shared the same memory space, essentially making each machine one big computer rather than lots of little ones linked together.

The predilection in the 1990s and early 2000s for connecting together lots of inexpensive Intel processors stood in stark contrast to this, and led to a dichotomy characterised as 'build up' versus 'build out'.

Build-out architectures encompass clustered servers, connected via high-speed switched links, and blade computing (one or more processors on a single board, connected to many other boards via a single high-speed backplane). Unlike mainframes and supercomputers, such systems generally do not share memory. Many successful systems have been created on the build-out concept, including vast, CPU-straining environments such as Linden Lab's virtual world, Second Life.
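The distinction is easy to see in miniature. The following toy sketch - hypothetical and wildly simplified - contrasts 'build up' workers sharing one memory space with 'build out' workers that own their own memory and exchange messages:

```python
# Toy contrast between the two models, with invented workloads. 'Build up'
# (shared everything): workers are threads over one memory space. 'Build out'
# (shared nothing): workers are processes that communicate by message passing.

import threading
from multiprocessing import Process, Queue

shared_results = []  # one memory space, directly visible to every thread

def build_up_worker(chunk):
    shared_results.append(sum(chunk))  # direct access to shared memory

def build_out_worker(chunk, queue):
    queue.put(sum(chunk))  # no shared state: results travel as messages

if __name__ == "__main__":
    data = [list(range(1000)) for _ in range(4)]

    # Shared everything: all threads read and write the same structures.
    threads = [threading.Thread(target=build_up_worker, args=(c,)) for c in data]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("build up:", sum(shared_results))

    # Shared nothing: each process owns its memory; a queue plays the role
    # of the high-speed interconnect between blades or cluster nodes.
    queue = Queue()
    processes = [Process(target=build_out_worker, args=(c, queue)) for c in data]
    for p in processes:
        p.start()
    print("build out:", sum(queue.get() for _ in processes))
    for p in processes:
        p.join()
```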

IBM sells both species of computing platform, and Neilson is quick to point out the benefits of each. "There are some workflow types that are very suited to blades that are not at all suited to mainframes," he says. "Particularly numerically-intensive computing - number crunching. Also a lot of Web serving, email, and infrastructural applications are very appropriate for shared-nothing blade environments."

Mainframes, on the other hand, are particularly good at large, real-time online transaction processing applications and database processing. Running a network of cash machines? A large, multinational collection of point-of-sale terminals? A mainframe may well be the natural back-end choice.

"Over the last few years it's been pretty healthy," says Barrie Heptonstall, technical sales and services director at IBM UK, of its 40-year-old mainframe business. "A lot has been financial services, and that has been a very successful industry in the last few years. There has been quite a lot of consolidation in that market. Because they all run on mainframes, that has generated a lot of business for IBM."

New tricks

This hasn't stopped build-out platforms overshadowing the mainframe as a mixture of lower prices, higher-speed networking technologies and distributed software propelled them into the foreground. 

But do not imagine that the mainframe is becoming outmoded, warns John McKenny, VP of marketing for the mainframe service management business at BMC, which sells software to manage mainframe and other data centre environments. And it is the data centre - that hotbed of computing activity - that is proving an influential factor in the mainframe's longevity.

"The mainframe hasn't gone away: over the last ten years there has been a tremendous amount of investigation around moving off the mainframe platform," he says. "But we see the applications and companies remaining on the mainframe staying very bullish about the platform."

Revenues for IBM's System z series (which has the majority share of the mainframe market) grew for ten consecutive quarters until the second half of last year, when sales dipped. Consequently, after revenue growth of 7.8 per cent in 2005-6, the company saw mainframe revenues fall 11.2 per cent.

However, the company points out that this is a technology-cycle issue: its last mainframe release, the z9, was unveiled in September 2005, replacing the previous high-end model, the z990. In February 2008, IBM unveiled the z10, its latest behemoth. The z10 houses 64 processors, which IBM reckons is equivalent to 1,500 Intel servers. Roughly twice as fast as its predecessor (and with 70 per cent more capacity, according to its maker), the z10 is expected to sustain what has thus far been a vibrant business for 'Big Blue', as IBM has long been nicknamed.

Power efficiency 

Another selling point for mainframes is an environmental one, claims Neilson. Power constraints have been pressuring data centre managers for some time now.

One US data centre manager that E&T spoke to said that difficulty in drawing enough power from the local substation to feed all of the equipment in the building was forcing the company's hand, pushing it to consider a second data centre, miles away, to complement the first. Data centres are expensive undertakings, and such an investment incurs enormous cost. If the applications and computing requirements support it, mainframes can provide a more efficient alternative, Neilson suggests.

One of the big drawbacks of build-out architectures is the additional hardware needed, and the floor space that it often takes up. Centralised architectures can solve that problem. "This big hot dinosaur that was the mainframe turns out to be very power efficient," he says. "We're finding that when you install a mainframe it will typically offer an 85-90 per cent energy saving, and the same saving in floor space." The company is marketing such gains relentlessly as part of its broader Project Big Green programme, launched in November 2007.
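The arithmetic behind such claims is simple enough to sketch - with invented power figures, since real savings depend entirely on the workload being consolidated:

```python
# Back-of-envelope consolidation arithmetic with invented power figures;
# real savings depend entirely on the workload being consolidated.

servers_replaced = 1500   # IBM's claimed z10 equivalence (figure from above)
watts_per_server = 300    # hypothetical average draw per distributed server
mainframe_watts = 55_000  # hypothetical draw for a fully loaded mainframe

fleet_watts = servers_replaced * watts_per_server
saving = 1 - mainframe_watts / fleet_watts
print(f"fleet: {fleet_watts / 1000:.0f}kW, mainframe: "
      f"{mainframe_watts / 1000:.0f}kW, saving: {saving:.0%}")
# -> a saving of roughly 88 per cent, in the range Neilson quotes
```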

Nevertheless, with finance being a big market for IBM and its competitors, the company may need to find revenues elsewhere in a post-credit-crunch environment. It is unlikely that we have seen the last of the consolidation among investment firms hit by financial pressures - and customer consolidation, at least, is good for IBM's business.

Belt-tightening in that sector is likely to be considerably more severe in a recessionary economy; so are there new users or new applications lurking in the blue yonder?

"The really critical measure for us is not so much how established customers are doing with their legacy applications," insists Heptonstall. "The real question is what are those customers doing with new applications, and what about smaller customers that you might not think would traditionally have been concerned with mainframes in their data centres? How do we make the product more attractive for customers to do things with it?"

Those are three important questions for all mainframe vendors in 2008.

Mainframe management skills

Emerging markets are a growth area for new customers, says BMC's John McKenny. "A lot of that is related to some of the outsourcing you see in emerging markets," he explains. "We are seeing strong growth in China. The large banks there are very bullish. And the Indian market is starting to pick up. They definitely have a focus on outsourcing. We're seeing some in the former eastern bloc [countries] as well."

Such emerging markets may also be a useful base of skills for mainframe installation and administration: what's the betting that most university students are focusing on .Net or Java development, rather than burning the midnight oil to better understand the vagaries of z/OS administration or, heaven forbid, COBOL application migration and maintenance?

"One of the dark clouds in Europe and North America is the fact that the workforce for mainframes is aging, and there haven't been many people trained to manage mainframes," admits Marcel Hartog, EMEA mainframe director for mainframe database software vendor CA. "In Eastern Europe, AsiaPac, and China, they still train people in that - and those people are cheaper."

When it comes to new applications, McKenny sees most customers moving to the z series because they want to scale up an existing SAP implementation. But service-oriented architectures (SOA) could play a part in such developments, while also enabling companies to expose their existing legacy applications in more agile ways. SOA (which involves using XML interfaces to expose application components as business services) has become increasingly linked to mainframe use, explains McKenny, but it is far from an easy process.

"It can be a difficult job - it's really about identifying and defining the key business services that a company wants to monitor closely," he believes. "The method of doing that is to map the applications to the underlying IT infrastructure components that support them."

Sexing-up mainframe software

IBM is working in other ways to make the mainframe a more attractive platform for applications developers. In the run-up to Y2K, the world was scrambling to fix its legacy COBOL applications on OS/390 and other platforms; but IBM was also tinkering with running Linux on the z series. It made the project a formal part of its mainframe strategy at the turn of the decade, a year before piling $1bn into Linux development across the board and cementing the operating system's strategic position within the product portfolio.

Now, the company is fleshing out its range of offerings. In addition to Linux, it has been porting more of its WebSphere Java server suite to the mainframe, and it has also been collaborating on a port of Sun's OpenSolaris (which became an open-source system in 2005) for the z series.

Like Linux, OpenSolaris does not run with the same privileges as IBM's own z/OS (OS/390 as was). Instead, it runs within IBM's virtualised z/VM environment; z/VM is the mainframe's equivalent of a hypervisor - a software abstraction layer that sits underneath the operating system.

"z/VM has been around for 40 years, but like z/OS it has evolved over time," says a Neilson indignant over the marketing successes of upstart young virtualisation vendors such as VMware, which captured the market's imagination - and much of the mindshare around virtualisation concepts - when it floated successfully last year (2007). "We've been doing virtualisation since 1964," he reminds (the company spent $5bn launching into the mainframe market that year).

The application of utility computing to mainframe systems can also have a significant impact on price, argues Bob Tillotson, head of worldwide sales for Unisys' Clearpath division. The company provides a metered pricing scheme for mainframe use, meaning that customers can provision the necessary computing power without paying for it until it kicks in at peak demand periods. "The hardware is there - and you can run it at peak if you want, but you don't have to," he says. 

"The metering approach means that you're only paying for the MIPS that you're actually using."

Such innovative provisioning schemes can also be found at IBM (and they apply to non-mainframe platforms too); they are nevertheless a sign that there's life in the old mainframe yet.

The humming of vacuum tubes has long since been replaced by the gentle chuntering of hard drives, but the target market - large, affluent companies with complex, high-volume processing needs - remains much the same.

Twenty-something years after it became fashionable to predict its demise, the mainframe is still here, clinging on relentlessly and finding fresh reasons not to be retired.
