The next decade will see the 50-year odyssey of silicon scaling draw to a close. But it’s not the end of the road for cheaper electronics.
Last summer, Len Jelinek, director and chief analyst at iSuppli, stuck his neck out and called the end of Moore’s Law as an economic driver in 2014. Pause for a moment over that claim. It was not that Moore’s Law would necessarily end in 2014, but that the economic imperative for scaling silicon dimensions further would come to a screeching halt. Development per se need not stop. The problem is that rising cost could easily outweigh the advantages of going beyond the 20nm or 18nm node.
It should probably be called the Moore-Noyce Law because it was Bob Noyce, Moore’s colleague at Fairchild and then Intel, who came up with the pricing model that meant Moore’s Law became the key to predicting a market sector driven by deflation. But if that pricing model - which has typically meant close to a halving in production cost with each generation shift - fails, then reasons for pushing ahead on scaling also break down.
Intel technologists are more bullish about the future than Jelinek. Senior Intel engineer Kelin Kuhn says the company can see a way to 15nm and beyond. The company has a high-volume business that depends on continued scaling - and a continued demand for compute horsepower - but Intel has a public roadmap that extends out to the middle of the decade, with no stop sign at 2015. By that point, Intel expects to introduce an 11nm process.
Five years ago, NEC developed a transistor that was just 5nm long and which switched from conducting to conducting only slightly less, so high was the leakage. But it had a performance curve that confirmed transistor behaviour, demonstrating that a 5nm transistor could be made, even if it might not be desirable.
Jelinek’s argument was made on the basis of cost: “The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20nm, to 18nm nodes. At those nodes, the industry will get to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production. That is, their costs will be so high, that the value of their lifetime productivity can never justify it.”
Non-recurring engineering and wafer costs are already pricing users out of the most advanced nodes. For RF specialist Elonics, 130nm provides the best fit because the company’s analogue circuits will not shrink even with a halving of the process geometry. And, as CEO and founder David Srodzinski points out, the wafer cost of a 45nm process is three times that of the process the company uses today. Unless you can take advantage of smaller circuit sizes - something that is only more or less guaranteed for digital circuitry - a move to 45nm is a very costly option. This increase in wafer cost is one of the reasons why Jelinek sees a gradual falling away of chipmakers as process development moves ahead.
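A back-of-the-envelope sketch shows why the move makes no sense for a chip whose area will not shrink. All the figures below are illustrative assumptions rather than Elonics’ real numbers; only the threefold wafer-cost ratio comes from the article.

```python
# Illustrative cost-per-die comparison for a 130nm-to-45nm move.
# The 3x wafer-cost ratio is from the article; everything else
# (wafer area, die size, ideal shrink) is an assumption.

WAFER_AREA = 70_000.0     # usable area of a 300mm wafer, mm^2 (approx)
COST_130NM = 1.0          # normalised wafer cost at 130nm
COST_45NM = 3.0           # roughly three times higher, per Srodzinski

DIE_130NM = 10.0                  # hypothetical die area at 130nm, mm^2
SHRINK = (45 / 130) ** 2          # ideal area scaling for digital logic

def cost_per_die(wafer_cost, die_area):
    # Ignores edge losses and yield - good enough to show the trend.
    return wafer_cost / (WAFER_AREA / die_area)

digital_ratio = (cost_per_die(COST_45NM, DIE_130NM * SHRINK)
                 / cost_per_die(COST_130NM, DIE_130NM))
analogue_ratio = (cost_per_die(COST_45NM, DIE_130NM)
                  / cost_per_die(COST_130NM, DIE_130NM))

print(f"digital die, 45nm/130nm cost ratio:  {digital_ratio:.2f}")
print(f"analogue die, 45nm/130nm cost ratio: {analogue_ratio:.2f}")
```

If the die shrinks with the process, the tripled wafer cost is more than recovered; if it does not, every die simply costs three times as much.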
Many system-on-chip (SoC) designs have now reached the point where the decision to integrate is no longer easy. Very often, it is worth having two chips in a design because that offers the flexibility to adapt to changing market circumstances. For example, with a split applications and baseband processor, a phone manufacturer has more options. And the baseband processor may not benefit from integrating the surrounding analogue circuitry because, even with a shift to a more advanced process, the chip winds up getting bigger.
Conversely, as long as there is demand for ever greater compute power - and if that power can be provided by deploying more and more processor cores - companies such as Intel are going to keep making those parts, using process scaling for as long as they can to bring the cost down.
The end of Moore’s Law has been forecast so often that the claims are treated only slightly more seriously than predictions of the world’s imminent end. In the 1980s, a bunch of researchers reckoned 1µm was the limit. The story is told in much the same way teachers recount how Victorians feared that riding in trains at more than 30mph would tear their heads off.
However, one important point about Moore’s Law is that it isn’t very detailed. When Moore plotted some graphs and extrapolated in the mid-1960s, there wasn’t any such thing as a ‘process node’. There was no International Technology Roadmap for Semiconductors (ITRS) to tell you what the half-pitch measurement for the first metal layer was expected to be.
Moore explained later that the actual reduction in size of the transistors and circuits was responsible for just one-third of the increase in chip function per dollar every two years. The rate started off as a doubling every year, but had levelled out to a doubling every two years by the time Moore gave a detailed analysis of the graph in the mid-1970s. The other two-thirds came from an increase in chip size and improvements in design techniques. Although the electronics industry was one of the first to employ computers for design, most layout was done by hand on sheets of plastic even ten years into the Moore’s Law period.
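The decomposition is easiest to see multiplicatively. Treating the one-third/two-thirds split as a split on a logarithmic scale - an interpretive assumption, since Moore did not spell it out this way - the contributions look like this:

```python
# Moore's mid-1970s decomposition, treated multiplicatively: a 2x gain
# per two years, with one-third of the (log-scale) gain from shrinkage
# and two-thirds from bigger dice and better design. The log-scale
# split is an illustrative assumption.

total_gain = 2.0                      # components per chip, per two years
shrink_gain = total_gain ** (1 / 3)   # one-third of the log-scale gain
other_gain = total_gain ** (2 / 3)    # die size + design techniques

print(f"from shrinkage alone: {shrink_gain:.2f}x")   # about 1.26x
print(f"from everything else: {other_gain:.2f}x")    # about 1.59x
```

Shrinkage alone would have delivered only about a 1.26x gain per generation; the two factors together multiply back to the familiar doubling.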
What has happened since is that the one-third down to shrinkage in two dimensions now accounts for the bulk of the biennial improvement in density. So, it’s easy to equate Moore’s Law with process technology. But there is no reason for things to stay that way. The original graph only plots two things: ‘number of components per function’ versus time. There is no declaration of how any of that is actually to be achieved.
As a result, as long as the industry keeps delivering cost reductions, no matter how they are achieved, no-one is going to look that closely at whether they are adhering to Moore’s 30-year-old definitions. Inventor Ray Kurzweil has used that to extrapolate the curve out in both directions, into the dim and distant past of the thermionic valve and the punched card, and forward into as-yet untried technologies - and very few are as promising as silicon right now. From Kurzweil’s point of view, all that matters is the exponential progression.
One strong candidate for extending the life of silicon beyond the end of 2D scaling is to move into the third dimension (see box ‘Build high’).
The key question is: how much will scaling have slowed by 2020? A number of technologists believe that there will be a hiatus after the introduction of 22nm processes for the simple reason that the shift to 18nm will demand infrastructural changes, such as the replacement of 193nm lithography with extreme ultraviolet (see box ‘Billionaires’ club’).
“The industry could stick at 22nm for some time,” says Jen-Hsun Huang, CEO of nVidia.
One step beyond
A hiatus does not mean process development will stop, just that what engineers do to improve density will change. Instead of simply making the transistors ever smaller, companies may work on tweaking devices for performance or allow for improvements in density through better design. “We could call it 22F, F for fast,” quips Sani Nassif, manager of tools and technology at IBM’s Austin research lab.
Although it is a reasonably safe bet that 2D scaling in silicon will be running out of steam by the end of the decade, it is possible that progress will have slowed to the point that the smallest practical silicon transistors are yet to be made.
It only takes an extension from a two- to a three-year cycle for 2020’s leading-edge process to have gate lengths substantially longer than NEC’s experimental 5nm device. But new, stacked memory technologies are likely to have arrived that keep density apparently on-track, despite a slowing in progress in 2D.
Moore’s Law is dead. Long live Moore’s Law.