Big Blue at 100: what IBM has done for us

On the eve of its 100th birthday, we look at how IBM's computing breakthroughs of 50 years ago are still reverberating in 2011.

This June IBM celebrates the 100th anniversary of its incorporation. It was founded in 1911 as the Computing Tabulating Recording Corporation – following a merger of the Computing Scale Company of America and the International Time Recording Company with the Tabulating Machine Company – and renamed International Business Machines 13 years later.

Whatever one's view of IBM's overall influence, there's no denying the fact that a high-tech vendor marking its centenary is an unusual achievement. The occasion also reminds us of the estimable contributions to computer technology – and thereby to technology in general – that 'Big Blue' has made over the last century.

Stretch objectives

Ahead of the centennial, April 2011 marks the 50th anniversary of the company starting to pick over its expensive 'Stretch' computer project, launched and then withdrawn between 1960 and 1961. Stretch was designed to revolutionise computing. Its result, the IBM 7030 supercomputer, was meant to deliver 10MIPS – 100 times faster than its predecessor, the 704. In practice, it struggled to achieve 1MIPS and the 7030 was taken off the market.

In April 1961, IBM bosses in Poughkeepsie wanted to find out what went awry, and sent managers out to interview the Stretch project leaders. Concerned that the project's work would be buried – president Tom Watson Jr had told the Wall Street Journal that he was disappointed with Stretch, and announced a hefty discount for the 7030's few customers – senior engineer Harwood Kolsky wrote a series of memos arguing not for the computer itself but for the technology it used.

The manoeuvres worked. Technologies developed for Stretch moved not only into the even more expensive, but highly successful, System/360 – launched on 7 April 1964 – but also into the ambitious Project X that led to the groundbreaking System/360 Model 91, which Nasa would use to simulate deep-space missions.

Together, these projects yielded a broad range of innovations that underpin modern computing – some so advanced that they did not fully reach the IT mainstream until up to 30 years after their initial development.

So, among IBM's many groundbreaking projects, it seems appropriate to focus on the achievements of the 7030-System/360 initiatives. What has IBM ever done for us? Well, in computing terms, quite a lot...

Byte the hand that feeds it

First up is the eight-bit byte. Want to know why a byte should have eight bits and not two, four or six? It started on the System/360.

According to Werner Buchholz, who credited 'The Mythical Man-Month' author and IBM fellow Fred Brooks with the invention of the name 'byte', the idea of the byte dates back to the Stretch project. In those days, though, a byte could mean any small package of bits: from one to eight. Although the System/360 took over many of the concepts developed for Stretch, the byte was fixed at eight bits for cost reasons.

Similarly, instead of offering addressing down to the level of individual bits, IBM's computers – and most that followed until the advent of reduced instruction-set computer (RISC) design – accessed main memory by the byte.
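
Byte addressing is easy to see in a minimal Python sketch (purely illustrative, not IBM code): memory is modelled as a flat array of eight-bit cells, and a 32-bit word is assembled from four consecutive byte addresses, big-endian as on the System/360. The sizes and addresses are arbitrary.

    # Minimal sketch of byte-addressable memory: each address names one
    # eight-bit cell, and wider values are built from consecutive bytes.
    memory = bytearray(16)              # 16 byte-sized cells, addresses 0..15

    def store_word(addr, value):
        """Store a 32-bit value as four bytes, big-endian as on the System/360."""
        for i in range(4):
            memory[addr + i] = (value >> (8 * (3 - i))) & 0xFF

    def load_word(addr):
        """Reassemble a 32-bit value from four consecutive byte addresses."""
        value = 0
        for i in range(4):
            value = (value << 8) | memory[addr + i]
        return value

    store_word(4, 0xDEADBEEF)
    assert load_word(4) == 0xDEADBEEF   # the word spans byte addresses 4, 5, 6 and 7
    print(hex(memory[4]))               # 0xde: the most significant byte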

Incidentally, why 'byte'? Writing four years after the 1956 memo in which the term first appeared, Buchholz explained: "The term is coined from 'bite', but re-spelled to avoid accidental mutation to 'bit'."

Virtual memory

IBM did not invent the concept of virtual memory – that honour goes to the Manchester-based Atlas team – but the company did the most to promote its use. In 1969, IBM researcher David Sayre led a team that showed data paged automatically on and off disk was managed more efficiently by the machine than by skilled programmers. IBM had already built the first commercial computer to implement the concept – originally developed as a special order for the University of Michigan, to support users on a time-sharing system.

The System/360 Model 67's architecture drew on concepts developed by academics at MIT, using a hardware unit to translate between virtual and real memory addresses – the forerunner of the modern memory management unit (MMU) found on all but low-end 32bit microprocessors.

The Model 67 could page a user's memory out to disk, making it possible to run larger applications – albeit more slowly – than would ordinarily fit into the machine's main memory. The IBM architecture supported both paging and segmentation, an approach originally used by the Burroughs B5000 in 1961. The two would be combined two decades later on Intel's processors for the PC, although few developers choose to use the segmentation features nowadays.
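
As a rough sketch of what such a translation unit does – not the Model 67's actual dynamic address translation hardware, and with an invented page size and page table – a virtual address is split into a page number and an offset, and a table maps each page to a physical frame or notes that it is out on disk:

    # Toy paged-address translation in the spirit of an MMU. The page size
    # and page table below are invented for illustration.
    PAGE_SIZE = 4096

    page_table = {0: 7, 1: 3, 2: None}   # virtual page -> physical frame (None = paged out)

    def translate(virtual_addr):
        page, offset = divmod(virtual_addr, PAGE_SIZE)
        frame = page_table.get(page)
        if frame is None:
            # Page fault: the operating system would fetch the page from disk,
            # update the table and retry the access.
            raise RuntimeError(f"page fault at virtual address {virtual_addr:#x}")
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x1234)))        # virtual page 1, offset 0x234 -> frame 3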

Instruction boosting

Instruction pipelining is another neat technique held over from IBM's Stretch project. Instructions take a while to complete: they have to fetch data from memory, work on it, and then write the result to memory. You can speed things up by keeping a small scratchpad store, such as a register bank, on-chip; but you still have a lot of electrical delays built into the process.

Computer scientist John Cocke, who reused the idea when developing reduced instruction set computing some 20 years later, worked with Harwood Kolsky during Stretch development to hide the inevitable latency of shunting data around a computer. They named the technique 'pipelining'.

Each instruction is broken down into a series of simple steps, each of which can usually be completed within a single clock cycle. By keeping multiple instructions in flight at any one time, pipelining increases parallelism and performance.
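
The effect is easiest to see in a toy timing diagram. The Python sketch below uses a generic five-stage pipeline (the stage names and instructions are illustrative, not Stretch's): once the pipeline is full, one instruction completes every cycle, even though each instruction still takes five cycles from start to finish.

    # Toy timing diagram for a five-stage pipeline; stage names are generic.
    STAGES = ["fetch", "decode", "execute", "memory", "writeback"]

    def schedule(instructions):
        # Instruction i enters the pipeline on cycle i and advances one stage per cycle.
        for cycle in range(len(instructions) + len(STAGES) - 1):
            in_flight = []
            for i, instr in enumerate(instructions):
                stage = cycle - i
                if 0 <= stage < len(STAGES):
                    in_flight.append(f"{instr}:{STAGES[stage]}")
            print(f"cycle {cycle}: " + ", ".join(in_flight))

    schedule(["ADD", "LOAD", "MUL", "STORE"])   # four instructions finish in 8 cycles, not 20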

According to Fred Brooks, Stretch had an 11-stage pipeline. The System/360 Model 91 surged past this with an unusually deep 20-stage pipeline, although most other machines opted for shallower designs, because branches incur a large performance penalty on deep pipelines – something that Stretch tried to reduce through branch prediction.

Out-of-order execution

Like Stretch before it, the IBM System/360 Model 91 was an expensive and risky undertaking. Just 14 were made, and its architectural legacy would not become apparent until almost 30 years later when the race between Intel's x86 and the RISC architectures became intense.

Computer architects realised that one of the snags holding back performance was that compilers did not have enough information to deliver instructions in the optimal order – they could not know when the data for a following instruction would become available. Only the runtime environment could provide that information. Step forward out-of-order execution.

Working on the Model 91, Robert Tomasulo came up with an answer: use that runtime information to reschedule instructions and execute them out of order. The CDC 6600 could already finish instructions out of order, thanks to a technique called scoreboarding, so that a very long operation such as a floating-point divide would not hold up simpler operations.

Tomasulo's algorithm greatly extended the ability of a computer to re-sequence instructions and it became the template for microprocessor architectures such as IBM's POWER processors and the Intel Pentium Pro and AMD K5.
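
A much-simplified sketch of the underlying idea – leaving out Tomasulo's reservation stations, register renaming and common data bus entirely – is to let each instruction issue as soon as its source operands are ready, so that an independent instruction later in program order can overtake one stalled behind a slow operation. The instruction mix and latencies below are invented for illustration.

    # Much-simplified sketch of data-driven scheduling: an instruction issues as
    # soon as its source registers are ready. Reservation stations, register
    # renaming and the common data bus are deliberately omitted.
    program = [
        ("DIV", "r1", ["r2", "r3"], 10),   # op, destination, sources, latency in cycles
        ("ADD", "r4", ["r1", "r5"], 1),    # depends on the slow divide
        ("SUB", "r6", ["r7", "r8"], 1),    # independent of both
    ]

    ready_at = {}                          # register -> cycle its value becomes available
    for op, dest, srcs, latency in program:
        issue = max(ready_at.get(s, 0) for s in srcs)   # wait only for real dependences
        ready_at[dest] = issue + latency
        print(f"{op} issues at cycle {issue}, completes at cycle {issue + latency}")
    # DIV issues at cycle 0 and ADD must wait until cycle 10, but SUB issues at
    # cycle 0 even though it appears after ADD in program order.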

In those processors, out-of-order execution was used more extensively than in the Model 91, where it was restricted to the floating-point unit, much like scoreboarding in the CDC machine. It was not until the microprocessor wars that Tomasulo's approach was used widely: faster memory provided a cheaper speed boost for mainframes that followed the Model 91. Tomasulo received the Eckert-Mauchly Award a couple of years after out-of-order microprocessors were launched.

Speculative execution

Branches are problematic for any high-performance computer. They force the machine to empty its instruction pipeline so that it can load a new stream from the branch destination. Branches also prevent the computer from being able to reorder instructions optimally.

It's not safe simply to pull in instructions that the compiler has placed after the branch, in case they are not meant to run; but computer architects have long known that branches that point backwards tend to be taken – they are often the branches that sit at the end of loops.

Engineers working on the Stretch project decided to look at the possibility of getting the machine to run instructions speculatively by having it make predictions about branches. If it got a prediction wrong, it would back up and run the instructions from the correct path instead. Thus was born speculative execution.

The recovery takes time, but if you speculate correctly often enough the performance gains can be dramatic. There was a hitch, though: speculative execution was expensive to implement, and you needed to be right far more often than wrong – a problem for Stretch's simple prediction algorithm.
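
A back-of-envelope model shows the trade-off. Assume – the cycle counts are invented, not Stretch's – that waiting for a branch to resolve costs three cycles of stall, while discarding a mispredicted path costs an 11-cycle flush: speculation only pays off once the predictor is right comfortably more often than it is wrong.

    # Back-of-envelope model: without speculation every branch stalls the pipeline
    # until it resolves; with it, a correct guess costs nothing but a wrong guess
    # forces a flush. The cycle counts are invented, not Stretch's.
    RESOLVE_STALL = 3       # cycles lost per branch if the machine simply waits
    FLUSH_PENALTY = 11      # cycles lost when a mispredicted path is discarded

    def cycles_saved(branches, accuracy):
        baseline = branches * RESOLVE_STALL              # cost of always waiting
        misprediction_cost = branches * (1 - accuracy) * FLUSH_PENALTY
        return baseline - misprediction_cost

    for accuracy in (0.50, 0.75, 0.95):
        print(f"{accuracy:.0%} correct: {cycles_saved(1_000_000, accuracy):+,.0f} cycles")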

After IBM cancelled Stretch, the company put speculative execution on the back-burner until the development of the 3090 in the mid-1980s. All but low-end 32bit processors now have some form of branch prediction.

Cache memory

British computer scientist Maurice Wilkes wrote the first paper on cache memory in 1964, but the first commercial machine to put the idea into practice was IBM's System/360 Model 85, launched four years later. The deceptively simple idea of keeping frequently used data in a small, fast local memory outperformed much more exotic performance enhancements such as out-of-order execution.

As with virtual memory, researchers found that a hardware-managed cache would easily provide more performance than simply giving programmers and compilers access to larger register files. Compilers that could make efficient use of large register files would not appear for at least another ten years – and even then, large register files suffer from diminishing returns. As a result, cache memory quickly became a standard piece of high-performance processor design, forming a key part of PC design from the late 1980s onward.
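
The principle can be sketched with a toy direct-mapped cache (the sizes below are arbitrary, not the Model 85's): each memory block maps to exactly one cache line, and repeated accesses to nearby addresses are served from the fast local copy rather than main memory.

    # Toy direct-mapped cache: each memory block can live in exactly one line,
    # selected by the low-order bits of its block address. Sizes are illustrative.
    LINE_SIZE = 64          # bytes per cache line
    NUM_LINES = 256         # lines in the cache

    cache = [None] * NUM_LINES      # each entry holds the tag of the resident block
    hits = misses = 0

    def access(address):
        global hits, misses
        block = address // LINE_SIZE
        index = block % NUM_LINES   # the one line this block may occupy
        tag = block // NUM_LINES    # identifies which block is currently resident
        if cache[index] == tag:
            hits += 1               # served from fast local memory
        else:
            misses += 1             # would be fetched from main memory
            cache[index] = tag

    for addr in (0, 8, 16, 4096, 0, 8):   # repeated, nearby addresses mostly hit
        access(addr)
    print(f"hits={hits}, misses={misses}")    # hits=4, misses=2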

Distributed computing

The current fashion for server virtualisation – enabling users to run different operating systems side-by-side on the same computer – makes it easy to overlook the fact that the technology had its beginnings in the late 1960s, when IBM introduced virtual memory on the System/360 Model 67.

A low-level operating system called CP/CMS, developed at IBM's Cambridge Scientific Center close to MIT in Massachusetts – Cambridge originally provided the 'C' in CMS – made it possible to time-share not just user applications but any System/360 software environment.

In the early 1970s, when IBM launched its first System/370 machines that could support virtual memory, CP/CMS was re-engineered to become VM/370, the first mainstream hypervisor and virtualisation environment. It could host IBM's primary operating systems (such as OS/360), and even another copy of VM, which was how programmers tested new versions.
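
The essential trick – trapping a guest's privileged operations and emulating them while multiplexing several guests on one machine – can be caricatured in a few lines of Python. This is a conceptual sketch only; the class and guest names are invented and bear no relation to how CP/CMS or VM/370 were actually built.

    # Conceptual sketch of a control program multiplexing guests and emulating
    # their privileged requests. Names and structure are invented; this is not
    # how CP/CMS or VM/370 were implemented.
    class ControlProgram:
        def __init__(self):
            self.guests = []

        def add_guest(self, name, program):
            self.guests.append((name, program))   # each guest is a generator

        def run(self):
            active = list(self.guests)
            while active:
                still_running = []
                for name, prog in active:
                    try:
                        request = next(prog)      # run the guest until it 'traps'
                        print(f"[CP] emulating {request!r} for guest {name}")
                        still_running.append((name, prog))
                    except StopIteration:
                        print(f"[CP] guest {name} has finished")
                active = still_running

    def guest_os(workload):
        # A 'guest' that periodically issues privileged I/O requests.
        for item in workload:
            yield ("write", item)

    cp = ControlProgram()
    cp.add_guest("guest-1", guest_os(["editing", "compiling"]))
    cp.add_guest("guest-2", guest_os(["payroll"]))
    cp.run()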

Intelligent channel I/O

From the late 1950s onward, when IBM introduced the IBM 709, mainframes were effectively distributed computer systems. The central processor did not have to do everything itself: a lot of work could be offloaded to dedicated I/O (input/output) processors, or 'channels' in IBM parlance. Control Data later used the more descriptive term 'peripheral processors'.

One of the first applications for the 801 RISC processor devised by John Cocke was as a way of boosting the performance of channel processors in the System/370 line during the late 1970s. The architecture would then form the basis of the RS/6000 and AS/400 machines, as well as the CPU core of the 9370 'mini-mainframe'.

The intelligent channel processor concept lives on in microprocessor-based systems as the direct memory access (DMA) controllers found in I/O interfaces such as SATA and FireWire.
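
The division of labour can be sketched as follows: the central processor hands a transfer description to a 'channel' – here a background thread standing in for a dedicated I/O processor or DMA engine – and carries on with useful work while the copy proceeds, waiting only for the completion signal. The code is purely illustrative.

    # Toy model of offloaded I/O: the processor hands a copy job to a 'channel'
    # (a background thread standing in for a dedicated I/O processor or DMA
    # engine) and keeps computing until the completion signal arrives.
    import threading

    def channel_transfer(source, destination, done):
        destination[:] = source            # the 'device' moves the data itself
        done.set()                         # completion signal (akin to an interrupt)

    source = bytearray(b"payload" * 1000)
    destination = bytearray(len(source))
    done = threading.Event()

    threading.Thread(target=channel_transfer, args=(source, destination, done)).start()

    busy_work = sum(range(100_000))        # the CPU does useful work in the meantime
    done.wait()                            # then waits for the channel to finish
    assert destination == source
    print("transfer complete; CPU result:", busy_work)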

Unbundling

Before US government agencies, concerned about IBM's dominance of the computing marketplace, could force it into a potentially disastrous move, the company made a momentous commercial decision in late 1968: to separate hardware and software in its deals. In a move that profoundly changed the make-up of the computer industry, 'unbundling' made it possible for a much wider range of companies to compete for large corporations' IT spend.

The move would also allow IBM to produce a much wider range of software, something that was not lost on managers such as then-marketing vice president Archie McGill. Burton Grad, a manager in IBM's data processing division at the time, later recalled that customers initially responded poorly to the move, believing that the hardware price reductions did not compensate for an effective increase in software costs.

The ramifications of the change were not seen for several years; however, the cloners of IBM's mainframes – or 'plug-compatible mainframe (PCM) manufacturers' – benefitted hugely from unbundling.

The so-called 'clone' machines were not like the PC clones that plagued IBM's unexpectedly successful entry into the personal computer market, which were practically component-for-component copies of the PC-XT and PC-AT machines (with the exception of the internals of the BIOS firmware). The PCMs were compatible in form and function but often quite different micro-architecturally; even so, they helped keep the mainframe architecture dominant by supporting an ecosystem of hardware manufacturers.

Ultimately, it was the decline of the mainframe market in favour of smaller, cheaper machines, rather than IBM's attempts to unseat its rivals, that led to the steady withdrawal of competitors from the marketplace. The launch of IBM's 64bit systems at the start of the millennium forced out the remaining large suppliers.

PC cloners continue to prosper, however, while there is no IBM machine in that market today: the former personal computer hardware division is now part of Lenovo.
