Electronic film for a telescope

Cheap technologies muscle into niche markets

When humble transistors first appeared they were too expensive for consumer use. Now silicon technology developed for the consumer business has been finding its way into specialised niches once the exclusive preserve of pricey proprietary components.

It might be hard to credit now, but the early history of the transistor did not actually have much to do with the mass market. Up to the 1980s, the story of the electronics industry was one of technologies developed for niche industries trickling out into the wider consumer market.

Early versions of the transistor did find their way into the consumer market, but not on the massive scale to which we have become accustomed. For his keynote speech at the Sophia Antipolis Microelectronics conference in October, ARM CTO Mike Muller searched for an example of an early transistorised product, and found one that celebrated its diamond jubilee this year.

"It was a hearing aid. There were two valves on the input stage, but on the output stage was a transistor - it was used to save power," Muller explained. But, for the following 20 to 30 years, the main beneficiaries of the rapid development in silicon technology were in high-end computers and military systems. The reason? Cost.

Six years after the introduction of the world's first transistorised hearing aid, the founders of Fairchild Semiconductor were deeply concerned as to whether they could fulfil an order for silicon transistors from the IBM group developing an airborne computer for the aborted B-70 bomber programme, even at $150 apiece. The so-called 'Traitorous Eight' who left transistor co-inventor Bill Shockley's eponymous company to found Fairchild's chip-making division believed silicon was a superior material to the germanium used by competitors at the time; but the technology proved difficult to master and Fairchild almost failed to deliver.

Several years later, Fairchild realised there was a burgeoning consumer market for silicon transistor-based products; but not at $150 each. Transistor prices dropped quickly - down to around $20 by the start of the 1960s. Yet that was still too much.

Fairchild co-founder Bob Noyce decided to take a chance that would set in motion the deflationary nature of the chip-making industry. Malcolm Penn, president of market analyst firm Future Horizons, says that when asked whether the company could supply transistors at $1.50 apiece, "Noyce replied, 'just take the order Jerry, we'll figure out how later'". It did not take long for costs to fall below the selling price, and for the same gamble to be repeated with integrated circuits, which put multiple transistors onto one die.

Chronicling Noyce's life many years later, author Leslie Berlin wrote: "In effect, Noyce was betting Fairchild's bottom line against two hunches. He suspected that if integrated circuits could make their way into the market, customers would prefer them to discrete components. He also calculated that as Fairchild built more and more circuits, experience curves and economies of scale would enable the company eventually to build the circuits for so little that it would be possible to make a profit even on the seemingly ridiculously low price."

The semiconductor learning curve is arguably the most dramatic demonstration of how practising at very high volumes pushes up manufacturing quality and production yield. In turn, costs plummet, opening the way for mainstream technologies to move out of their intended target markets and into niches previously dominated by more expensive, and arguably more suitable, technologies that lack the volume needed to drive down cost or push up quality.
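Berlin's 'experience curves' can be put in rough numbers. The short C sketch below assumes, purely for illustration, a $150 first-unit cost (echoing Fairchild's early transistor price) and a 25 per cent cost reduction for every doubling of cumulative volume; both figures are assumptions for the example, not data from the article.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative experience curve: unit cost falls by a fixed fraction
 * every time cumulative production volume doubles.
 * cost(n) = first_unit_cost * n^(log2(1 - reduction_per_doubling))
 * The $150 starting point echoes Fairchild's early transistor price;
 * the 25% learning rate is an assumed, illustrative figure. */
int main(void)
{
    const double first_unit_cost = 150.0;  /* dollars */
    const double learning_rate   = 0.25;   /* assumed 25% drop per doubling */
    const double b = log2(1.0 - learning_rate);  /* exponent, negative */

    for (double n = 1.0; n <= 1e9; n *= 1000.0) {
        double cost = first_unit_cost * pow(n, b);
        printf("after %.0e units: $%.2f per unit\n", n, cost);
    }
    return 0;
}
```

Under those assumed numbers the model takes the unit cost from $150 down to a few cents by the time a billion units have shipped - the dynamic that let Noyce quote $1.50 and still expect an eventual profit.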

At the end of the 1960s, Noyce and several of his colleagues, fed up with the way the parent company handled its semiconductor subsidiary, left to start Intel. Early in the following decade, the company developed the first products that would begin to reverse the long-standing flow of high technology from well-funded niche industries into the consumer space.

The most obvious example is the standard microprocessor, originally developed for a Busicom desktop calculator. Intel worked on the basis that sharing logic circuits, and putting them under software control, consumed far less silicon area than trying to hardwire the different operations a calculator needs to implement.
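The saving Intel was banking on can be seen in a toy model: rather than wiring a separate circuit for each operation, a single shared arithmetic unit is steered by a stored program. The C sketch below is purely illustrative - its opcodes and structure are invented for the example and have nothing to do with the 4004's real instruction set.

```c
#include <stdio.h>

/* Toy calculator core: one shared arithmetic unit, selected by an
 * opcode fetched from a stored program, instead of four separate
 * hardwired circuits. Opcodes here are invented for illustration. */
enum op { ADD, SUB, MUL, DIV };

struct instr { enum op op; double operand; };

int main(void)
{
    /* "Program": compute ((8 + 4) - 2) * 5 / 3 step by step. */
    const struct instr program[] = {
        { ADD, 4 }, { SUB, 2 }, { MUL, 5 }, { DIV, 3 }
    };
    double acc = 8.0;  /* accumulator holds the running result */

    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++) {
        switch (program[i].op) {            /* one shared ALU ... */
        case ADD: acc += program[i].operand; break;
        case SUB: acc -= program[i].operand; break;
        case MUL: acc *= program[i].operand; break;
        case DIV: acc /= program[i].operand; break;
        }                                   /* ... reused for every operation */
    }
    printf("result: %g\n", acc);  /* prints 16.6667 */
    return 0;
}
```

Adding another operation costs a few bytes of program storage rather than another block of dedicated logic, which is why the shared, software-controlled approach used so much less silicon.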

It took decades for many hardware engineers to accept that a microprocessor could be more efficient than dedicated logic; these days it is hard to find an electronic product that does not contain one or more. Once relegated to the status of I/O controllers for servers, microprocessors now form the heart of practically every modern computer, from the smallest to the largest.

See CCD, see the future

A second mainstream product fared less well, but trickled its way up into some of the most sensitive cameras and telescopes ever made. Invented at Bell Labs in the late 1960s, the charge-coupled device (CCD) was seen by Intel's Gordon Moore in the mid-1970s as the main driver for his eponymous law. At the time, the CCD represented the best chance of creating a high-capacity solid-state memory at low cost.

However, CCD memories were highly susceptible to radiation - stray alpha particles and cosmic-ray strikes could easily flip the state of a cell. On top of that, because data had to be read and written serially, rather than randomly as in other memory devices, CCDs were difficult to use. Within a few years, the CCD was finished as a memory.
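The awkwardness of serial access is easy to picture: charge packets sit in a chain of cells, and reading the last one means clocking out every packet in front of it first. The C sketch below models that with an invented eight-cell register and made-up contents; a DRAM-style array is shown alongside for contrast.

```c
#include <stdio.h>

#define CELLS 8

/* Toy CCD shift register: charge packets can only be read at one end,
 * so reaching cell k means shifting out all k+1 packets in order.
 * A DRAM-style array, by contrast, addresses any cell directly. */
static int ccd[CELLS]  = { 3, 1, 4, 1, 5, 9, 2, 6 };  /* invented values */
static int dram[CELLS] = { 3, 1, 4, 1, 5, 9, 2, 6 };

/* Read the packet originally stored in cell `k` by clocking the chain. */
static int ccd_read(int k)
{
    int out = 0;
    for (int step = 0; step <= k; step++) {
        out = ccd[0];                        /* packet reaches the output node */
        for (int i = 0; i < CELLS - 1; i++)  /* everything else moves up one */
            ccd[i] = ccd[i + 1];
        ccd[CELLS - 1] = 0;                  /* an empty cell enters at the far end */
    }
    return out;                              /* k+1 clock cycles spent */
}

int main(void)
{
    printf("CCD read of cell 5:  %d (after 6 shifts)\n", ccd_read(5));
    printf("DRAM read of cell 5: %d (one access)\n", dram[5]);
    return 0;
}
```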

That very sensitivity to incoming radiation marked out a new life for the CCD as 'electronic film'. The first digital cameras were based on CCDs. Although CMOS (complementary metal-oxide-semiconductor) imagers - based on the technology behind the DRAM that supplanted the CCD in the 1970s - now prevail in consumer cameras, the CCD has found a niche in scientific instruments because it typically generates less internal noise than its CMOS counterpart. Most astronomical cameras are built around CCDs, often arrayed into very large sensors. One of the biggest should be launched in 2018 by the European Space Agency (ESA).

ESA commissioned UK-based specialist e2v to produce more than 100 CCD imagers that will cover an area of almost a square metre on the PLATO probe - a mission that will attempt to detect exoplanets as they cross in front of their host stars. Where it took failure in the mainstream to open doors for the CCD, technologies trickling up today rely very much on mass-market volume to carry them into high-end niches.

"The tables are being turned with commodity technology driving high-end technology," Tony King-Smith, vice president of marketing at UK-based graphics-processing specialist Imagination Technologies, explained at the International Electronics Forum held in Bratislava, Slovakia, in October 2012. "Supercomputing in a mobile phone? We are getting there very quickly."

Processing performance

King-Smith sees the graphics processing unit (GPU) as the driver. Over the past decade, GPUs have moved away from being fixed-function devices, designed purely to draw and paint 3D objects as large collections of triangles that, when assembled, create a realistic image. Most GPUs are now highly programmable and, although they lack some of the attributes of a full microprocessor, they can run large subsets of languages such as C. The target application for GPUs has driven up their aggregate performance dramatically.

"To get the throughput needed for 3D, the GPU has to be a parallel processor," says King-Smith. "The rate of increase in processor power of GPUs is dwarfing that of CPUs."

Those increases in processing performance are helping to expand GPUs from their old home in gamers' PCs to phones and other devices. "GPUs are going into set-top boxes. Every CPU [chip] will have a high-performance GPU on them," King-Smith adds. "We are entering an era of mass-market parallel processing. It's not supercomputing yet, but it is providing hundreds of gigaflops."

GPUs have historically been difficult to program, relying on specialised low-level languages designed primarily for 3D graphics. The CUDA language - a version of C developed by GPU supplier Nvidia - opened up the GPU to a much wider range of applications with the result that the chips are used in supercomputers for weather forecasting, surveying for oil and gas, and many other specialist uses.

The issue with CUDA is that it is limited to Nvidia GPUs. So, the GPU makers and others are pinning their hopes on a less silicon-specific language, OpenCL, as the driver for GPU-powered computing.

"It offers an industry-standard way of programming the GPUs," says King-Smith. "We are at a major inflexion point. It's starting with mobile devices. But the technologies from mobile devices are finding their way into advanced devices. Cloud computing will rely on mobile technologies because they are so power efficient."

GPUs are not the only devices that stand to benefit from OpenCL. Companies such as Altera, which make field-programmable gate arrays (FPGAs) - devices whose hardware can be almost entirely reconfigured on the fly - expect OpenCL to expand the market for their devices into high-end computing.

Adacsys, a startup founded by Erik Hochapfel, is using Altera's devices to make specialised computers for the finance and aerospace industries that will be programmed using OpenCL. Like King-Smith, Hochapfel expects a number of these systems to be deployed in the Cloud to offer "application acceleration as a service. It means there will be no need for any hardware in-house".
