
Technology’s Sonic boom

Image credit: Paramount Pictures

Although just a little blue hedgehog, Sonic has inspired a generation of gamers since 1991 and in 2020 became a film star. Along the way, gaming technology has had a profound effect on the wider development of computing.

Sega’s launch of its Mega Drive console in the late 1980s was a near-textbook example of how to get ahead in what had for years been the most cut-throat part of the technology sector. Earlier that decade, Sega had been a leader in arcade games, but that business was hit hard as players discovered they were just as happy to play at home on simpler computers hooked up to a TV as they were feeding coins to a more capable machine in a busy mall.

There was some consolation in witnessing from the sidelines the slump in home computer sales that took many would-be competitors out of the market in 1985. But Sega was also forced to watch as Nintendo proved to be the company that reaped the biggest rewards of the ensuing recovery. One factor in Nintendo’s success lay in recognisable characters: Italian plumber Mario continues to be a major money spinner for the company 35 years on. Though way behind, Sega’s management was undeterred and set out to beat Nintendo at its own game, a strategy that included creating a blue hedgehog mascot to front its flagship games.

The first step was one that many followed: to make a technological leap. Sega decided it needed to make the jump from 8-bit processors to one of the more powerful architectures that were then the domain of much more expensive personal computers and workstations. To overcome the cost increase that would normally entail, Sega managed to negotiate a dramatic cut in the price of the Motorola-designed 68000 processor it wanted to bring over from its arcade systems by promising big follow-on orders to Japanese supplier Hitachi.

In other areas, Sega took advantage of the promise of mass-market success to influence the design of the graphics controllers produced by another supplier, Yamaha. This would help give one of the early games its unusual look and feel compared to the other sideways scrolling games of the era. The modifications were simple, but Sega’s management would try to capitalise on them in marketing the console as far more advanced than its competitors.

The innovation lay in the way Sonic and other characters could run over curved surfaces and around complete loops rather than being confined to straight lines and jumps over square blocks, something that helped set the game apart and, in the process, launch an entire franchise. That and the other hardware modifications were tiny, but they were a clear demonstration of how games hardware and software could come together to make audiences feel they were getting something new.

When Sony’s Playstation arrived less than five years after Sonic’s debut, it signalled an acceleration in the technological warfare that characterised the games market and would ultimately change how computing is done. Rather than tweaking memory controllers, the company aimed to muscle its way into the console market with much more extensive modifications, taking full advantage of the economies of scale made possible by customised silicon and of the staff of chip designers it could call on to deliver it at a breakthrough price. At the E3 games conference in 1995, Sony Computer Entertainment America president Steve Race underlined that by simply reading out the price in dollars, “two-ninety-nine”, in what must be the shortest ever launch speech. Sega hurriedly added extra processors to its follow-on Saturn console to try to compete, but Sony’s gambit wound up being one template that the games industry would follow.

In place of the two or three semi-custom parts that optimised 2D graphics, Sony worked with LSI Logic to develop system-on-chip (SoC) devices that would integrate a customised microprocessor with hardware acceleration for 3D graphics based on techniques previously confined to high-end engineering workstations and flight simulators.

It did not take long for the PC industry to try to wrest control from the console market. id Software’s first-person shooter ‘Quake’ helped drive sales of more capable 3D add-in cards, which were needed to run the game’s more realistic graphics at smooth frame rates. The card of choice at the time was the Voodoo, made by 3Dfx, which took a similarly hardware-centric approach to acceleration. But even this was short-lived, partly thanks to another release from 1995: the movie ‘Toy Story’.

Pixar’s RenderMan 3D engine, used on that movie, did not have to run in real time, so it could avoid the need for special-purpose hardware. The company’s animators used almost 300 Sun workstations, each one running at 100MHz – only about three times faster than the MIPS R3000 in the Playstation – to render scenes frame by frame over a period of a month and a half. By writing the rendering engine in software, Pixar could take advantage of much more advanced lighting and shading techniques than were available on fixed-function accelerators.

As it sought to emulate Sony’s success with the original Playstation by delivering another jump in realism, Microsoft took the same route. The Xbox design team, which included 3D specialist nVidia, was able to take advantage of a near ten-fold improvement in clock speeds and silicon density in the five years since the release of ‘Toy Story’ to make software rendering engines viable. To try to keep costs down, the first generation of user-programmable GPUs split the pipeline in half: a much simpler processor designed for adding textures and transparency at the pixel level complemented a front-end geometry-oriented processor that could handle the same kind of arithmetic as an Intel Pentium.
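
As a rough illustration of that split – a minimal software sketch only, with invented function names, not the Xbox hardware or any real shader language – the geometry stage does matrix arithmetic on vertices while the pixel stage does simpler per-pixel texturing and blending work:

```python
import numpy as np

def geometry_stage(vertices, mvp):
    """Front-end 'vertex' work: 4x4 matrix transforms, the kind of
    floating-point arithmetic a Pentium-class CPU could also handle."""
    v = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
    clip = v @ mvp.T                                         # model-view-projection
    return clip[:, :3] / clip[:, 3:4]                        # perspective divide

def pixel_stage(base_colour, texel, alpha):
    """Back-end 'pixel' work: much simpler per-fragment jobs such as
    texturing and transparency (alpha blending)."""
    return alpha * texel + (1.0 - alpha) * base_colour

# Toy usage: push one triangle through both halves of the pipeline.
triangle = np.array([[0.0, 0.0, -2.0], [1.0, 0.0, -2.0], [0.0, 1.0, -2.0]])
projected = geometry_stage(triangle, np.eye(4))          # identity stands in for a camera
shaded = pixel_stage(np.array([0.2, 0.2, 0.8]),          # Sonic-blue base colour
                     np.array([1.0, 1.0, 1.0]),          # white texture sample
                     alpha=0.5)
print(projected, shaded)
```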

This split did not last long as 3D accelerator designers found they could improve overall efficiency by bringing everything into massively parallel engines with a unified pipeline. Although increases in clock speeds stalled by the mid-2000s because of concerns over power consumption, nVidia and its competitors continued to add transistors to exploit the parallelism inherent in graphics. In less than a decade performance soared from a few billion floating point operations per second (flops) of arithmetic power to 300Gflops. Today, performance is measured in teraflops, driven by the economies of scale that come with a mass market like computer games.

As floating-point operations on vectors are so commonplace in engineering, nVidia and others concluded that GPUs could be put to many more uses than rapid-fire motion in a first-person shooter. In the summer of 2008, the Cell processor, originally designed for Sony’s third-generation games console, became the main processing engine in the first supercomputer to sustain a performance of 1 petaflop. A couple of years later, the Tianhe-1A at the National Supercomputing Centre in Tianjin, China, hit 2.5 petaflops. It was armed with 7,000 nVidia GPUs and 14,000 Intel Xeons; according to nVidia, it would have needed 50,000 Xeons to achieve the same performance without the GPUs.

Home users got in on the supercomputing act. Stanford University’s Folding@Home project first took advantage of the compute capability of PC GPUs as well as PS3 Cell processors in the mid-2000s. By 2012, GPUs were responsible for almost 90 per cent of the Folding@Home compute throughput. Later on, home users would find their processors hijacked by cryptocurrency miners using JavaScript malware planted on web pages.

However, much more influential on the subsequent development of the GPU was the way in which the technology hit the market at an opportune moment in the history of artificial intelligence (AI). In 2009, researchers at Stanford again found that GPUs provided the answer to the problem of building production-scale neural networks. Their GPU-powered algorithm was 70 times faster than one written for a dual-core PC. As deep learning took hold, the big social-media and search companies eagerly fitted their cloud servers with GPUs, some of which the chipmakers had begun to optimise for scientific computing rather than games because the numbers sold justified the extra expense of creating that new silicon. The same trend is now taking place in mobile.
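
The core workload behind those speed-ups is dense matrix arithmetic. The sketch below (using NumPy on the CPU purely for illustration; the layer sizes are arbitrary and not taken from the Stanford work) shows a single fully connected neural-network layer, the kind of operation whose thousands of independent multiply-adds map naturally onto a GPU’s parallel cores:

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((256, 1024))    # 256 input examples, 1,024 features each
weights = rng.standard_normal((1024, 512))  # one fully connected layer
bias = np.zeros(512)

# One forward pass is essentially a big matrix multiply plus a non-linearity.
# Every output element can be computed independently, which is exactly the
# kind of parallelism a GPU's shader/compute cores are built to exploit.
activations = np.maximum(batch @ weights + bias, 0.0)   # ReLU(x·W + b)
print(activations.shape)                                # (256, 512)
```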

Though GPUs can speed up many of the matrix-based arithmetic operations that deep learning needs, they are not necessarily the most power-efficient platforms. As a result, mobile SoC design teams have been designing dedicated AI processors, though they will often live on the same piece of silicon as the GPU, says Rys Sommefeldt, senior product director for PowerVR and high-performance graphics silicon at Imagination Technologies. Though GPUs can run AI, the trend is now one of architectural divergence, he says: “GPUs will be complementary to AI devices.”

At the same time, games developers on both PC and mobile platforms are calling for further increases in the levels of realism they can achieve. In the past couple of years, mobile games have progressed from the relatively simple graphics of ‘Angry Birds’ and ‘Candy Crush’ to 3D shooter console favourites like ‘Fortnite’.

As mobile and home-based games platforms become more alike and sophisticated, Imagination and nVidia, among others, expect to see the commercialisation of a technique that was once used to demonstrate the massive parallelism possible with Inmos Transputers in the 1990s: ray tracing. “The difference in quality you get with ray tracing is pretty dramatic,” says Imagination’s chief marketing officer David Harold.

‘There is a huge amount of raw computational power that goes into things like photographic enhancement that would not be there had it not been for games.’

David Harold, Imagination

Although those early demonstrations on Transputers were software-based, much like RenderMan, Sommefeldt says the need to handle the operations in real time makes some level of hardware support a necessity. It is not practical simply to add many more shader cores, although Intel did try to take that approach a decade ago with a GPU-ised version of its x86 architecture, tagged Larrabee. “You can do ray tracing in software. You can do it in general GPU-compute pipelines. But it’s not as power efficient and area efficient,” says Sommefeldt.
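
To give a sense of what that dedicated hardware is accelerating – a deliberately tiny software sketch with an invented scene, not any vendor’s pipeline – the heart of ray tracing is firing a ray through each pixel and testing it against the scene geometry:

```python
import numpy as np

def hit_sphere(origin, direction, centre, radius):
    """Ray-sphere intersection via the quadratic formula; returns the nearest
    positive hit distance or None. Repeated millions of times per frame, this
    is the sort of test ray-tracing hardware exists to speed up."""
    oc = origin - centre
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c              # direction is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Trace one primary ray per pixel of a tiny 8x8 'image' towards a single sphere.
width = height = 8
centre, radius = np.array([0.0, 0.0, -3.0]), 1.0
for y in range(height):
    row = ''
    for x in range(width):
        # Map the pixel to a viewing direction through a simple pinhole camera.
        u = (x + 0.5) / width - 0.5
        v = (y + 0.5) / height - 0.5
        direction = np.array([u, v, -1.0])
        direction /= np.linalg.norm(direction)
        hit = hit_sphere(np.zeros(3), direction, centre, radius)
        row += '#' if hit is not None else '.'
    print(row)
```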

Although ray tracing and other advanced rendering tricks may do little to improve AI performance, these technologies provide more impetus for spreading games technology into new markets. Not only are gaming-oriented hardware accelerators being redeployed in industrial applications for VR, but the software used to build 3D games has also expanded out from first-person shooters into areas such as medicine and industrial design.

The UnrealEngine physics simulator and rendering platform was originally created in the late 1990s by Epic Games for the first-person shooter of the same name. Unreal now forms the basis of a training simulator for brain surgery developed at the University of Tokyo and another for orthopaedic operations built by Vancouver start-up Precision OS.

Ray tracing has already been used to create mock-ups, though mostly for still images and pre-rendered videos. One of the purchases that nVidia made more than ten years ago when it started exploring the technology was Berlin-based Mental Images, a specialist in architectural mock-ups rendered using ray tracing.

Today, hardware acceleration makes it possible for engineers and clients to don VR goggles and walk around realistic digital prototypes. These need not just be digital versions of the fibreglass mock-ups that fill the concept-car slots at trade shows. Yao Zhai, automotive product manager at Unity, a competitor to Epic’s UnrealEngine group, says the design of the software makes it possible to take in live sensor data and simulate the behaviour of a vehicle and the environment around it as it drives down the road.

“Unity believes that real-time 3D technologies can significantly improve the development lifecycle of many industrial products and applications. We see tremendous potential growth in all these areas,” says Zhai.

Augmented reality (AR), again driven by 3D acceleration, provides a further set of applications beyond the consumer space. Charlotte Coles, technology analyst at IDTechEx, says AR has limited potential in consumer products, at least for the moment, but is showing greater promise in business: “Training, simulation and increases in efficiency are just some of the reasons companies are adopting AR and VR technology in their businesses.

“Both AR and VR hardware companies are promoting enterprise applications because the enterprise market currently has a direct use, for example in training or remote assistance, for the technology.”

One prototype developed by Cambridge Consultants based on Microsoft’s HoloLens headset was conceived as a way of giving surgeons a form of X-ray vision while performing keyhole operations. Using live sensor data or models constructed with the help of actual X-rays and magnetic resonance imagery, the surgeon can use the superimposed graphics to better understand the position of blood vessels, bones and organs under the skin without having to make the large incisions they would need traditionally.

The concept of the head-up display is beginning to give AR a boost in consumer applications. Harold says this year’s CES show had a number of examples of these kinds of displays being used in car designs, overlaying information about the road ahead on the windscreen. “Very few people pushing it used the phrase AR, though,” he says.

The pace of development in automotive itself may represent a shift in the balance of power in terms of where silicon makers decide what needs acceleration. Although the games companies did for a while pull the chip industry in their direction, Malcolm Penn, president of industry analysts Future Horizons, points out that the design investment is coming from specialist chipmakers such as AMD and nVidia rather than the console makers. “If you look back at the original Playstation, Sony designed all those chips. Now they are out of the semiconductor market. I can’t see how they would design a PS5,” Penn notes, adding that the core designs themselves change infrequently. The PS4 and Xbox One both launched in 2013 and there is no indication when a PS5 might arrive, though Microsoft has indicated its Scarlett next-generation Xbox would arrive this year.

Although AMD and nVidia are clearly continuing to push the density of their GPUs in concert with Moore’s Law, they are now addressing multiple needs, having seized the opportunities presented by AI in sectors such as robotics. Penn sees automotive as a sector that is now likely to exercise more control over the direction of the semiconductor business, and he expects car companies to move into chip design directly to maximise their advantage, taking a leaf out of Apple’s playbook. The console makers lack the incentive and scale to push harder on chip architecture.

“It’s a question of scale. You do need scale to design a chip yourself and enough work to give the designers. That is not happening if you are designing a brand-new games console every five to ten years. The difference in automotive is that they are now upgrading the car design every year. The electronics are more like a smartphone now and the volume is there to be able to justify doing that.”

The experience with digital games shows the power of having a mass market drive technological development. Harold says: “Games are games and they push their own boundaries, but once they have been pushed, whether in a computer, a tablet or a phone, the advance is there for everybody. You see applications taking advantage of performance that they would not have access to otherwise. There is a huge amount of raw computational power that goes into things like photographic enhancement that would not be there had it not been for games.”

Sommefeldt adds: “A major reason why CPU, GPU, storage vendors push things forward is because gamers demand extra. Many performance advances were there to service gamers first and foremost; that’s clear on both PC and mobile.”

In areas such as visual rendering, IDTechEx’s Coles sees the gaming sector’s input as vital. “It will probably be the gaming industry which pushes the realistic limits of AR and VR,” she says. These improved techniques will then move swiftly into other areas.

As the chip industry pushes through the half-trillion-dollar barrier on the back of products that range from smarter cars to phones, games players and their suppliers may do a lot less of the running. But the mass-market power of games, led by companies eager to gain a technological leap over the competition at every turn, has done its job.
