Shrinking without shrinking

Moore's Law is dead. Long live Moore's Law.

The International Electron Devices Meeting (IEDM) - which normally takes place in San Francisco each December, but which shifted to a fully online event in 2020 - provides a convenient place to examine how the chipmaking industry is likely to deliver on its longstanding promise to pack more stuff into a silicon die as time goes on. The move to a virtual conference coincided with some increased clarity as to the direction in which the industry is going to move.

Since the early 2000s, scaling according to Moore's Law has not been nearly as straightforward as it was in the previous 40 years, so the idea that it would slow, if not stall, is far from surprising. Up to that change some 15 years ago, which brought a sudden stall in the climb of microprocessor clock speeds, Dennard scaling was the norm. It stipulated that, as long as you could draw increasingly fine lines on the surface of a chip, making a transistor smaller would not just let more be packed alongside it; the transistors would also get faster.
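The Dennard relations can be sketched numerically. The following is a minimal illustration using the textbook scaling rules (the specific factors are standard results, not figures from this article): shrink all linear dimensions and the supply voltage by 1/k, and the headline quantities change as shown.

```python
# Illustrative sketch of classic Dennard scaling (textbook relations,
# assumed here, not quoted from the article): every linear dimension
# and the supply voltage shrink by a factor 1/k.
def dennard_scale(k):
    """Return how key quantities change for a linear shrink of 1/k."""
    return {
        "gate_delay": 1 / k,          # transistors switch faster...
        "frequency": k,               # ...so clocks can run k times faster
        "transistor_density": k**2,   # k**2 more devices per unit area
        "power_per_device": 1 / k**2, # each device draws less power
        "power_density": 1.0,         # the crucial constant: no extra heat per mm2
    }

s = dennard_scale(1.4)  # roughly one classic node step (a ~0.7x linear shrink)
print(s["transistor_density"])  # ~1.96: about twice the devices per area
print(s["power_density"])       # 1.0: power density unchanged
```

The last line is the point of the whole scheme: density and speed improved together while power density stayed flat, which is exactly the property that broke down around 2004.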

Beyond 2004, things got messier because the physics of small objects became more troublesome. Process designers compensated for a slowdown in performance gains by introducing strain into the silicon lattice, which helps electrons move more quickly. They also needed to handle the stray electric fields that would keep a transistor on when it should be off, which ushered in more complex shapes such as the finFET. Soon, that will be replaced by stacks of nanoribbons.

Even with these increasingly expensive tricks, there is a limit to how short you can make the channel of a transistor before it just becomes a resistor. Although the numbers of nanometres used to describe successive processes, such as 22nm, 14nm or 10nm, have fallen in line with Moore's Law, the actual on-chip dimensions are a lot larger.

The node names have not been associated with the size of the transistor itself for some time. To push speed in the 1990s, Intel and others drove the channel length down faster than the node names shrank, so the node name came instead to be related to the pitch of the lowest, densest layer of interconnect: normally, you took that pitch and halved it. Even here, the metal half-pitch is still far larger than the node's name suggests. For example, the effective channel length for a transistor on the 7nm node is around 12nm, while the metal half-pitch that is so important to device density is of the order of 25nm to 30nm. Even by the so-called 1nm node, this dimension is unlikely to fall far below 20nm, partly because the lithography technology cannot support it but also because it would push electrical resistance to impractical levels.

Yet, analysis by Synopsys and others at IEDM pointed to scaling continuing to progress through 2030, though it is not plain sailing. How so, when the paths to reducing device area have been progressively blocked off? The answer lies in the third dimension. This is not just about the ability to stack chips on top of each other in order to deliver more functions in a specific volume. Quite a lot of restructuring is now going on inside the chip to reduce the effective area of each transistor.

Research institute Imec has come up with several ideas. One is the 'forksheet', which brings together the two complementary types of transistor used in logic cells into a single fork-like structure. That cuts effective area by as much as 30 per cent in one go.

Having to provide power and ground lines to a logic cell takes up valuable area that could be used for routing logic signals. Here, too, the third dimension provides an answer: at the 2018 VLSI Symposia, Imec proposed burying many of these rails beneath the silicon surface. Imec's next step was the CFET, a two-storey structure that goes further than the forksheet by stacking one transistor type directly on top of the other. Now you've saved half the area, more or less.

At the 2020 IEDM, Intel's engineers described their take on a CFET-type structure based on nanosheets. The idea is catching on. According to calculations by Synopsys, the CFET structure does a lot for the static memory cells used for on-chip memory, though it takes some circuit-design tweaks to make it work: the most compact structure calls for additions such as dummy transistors to function properly. Such a change would mean SRAM density improvements could keep pace with those of logic circuits instead of lagging behind, as memory has for close to a decade.

Increased cooperation between circuit and process designers seems likely to keep scaling more or less on track at its long-term rate of a doubling every two years, the Synopsys team claimed. There are costs to all this: the cost per wafer will climb by an average of 13 per cent per node over the coming decade, because of the complexity of these new processes and the eye-watering cost of some of the equipment needed to deploy them.

Even so, thanks to the density improvements, chipmakers would realise a 30 per cent reduction in cost per transistor, per node, over the same timeframe. On top of this, chips that can justify a transistor density of a billion per square millimetre cost billions to design, making it even more of a game only for those with the deepest pockets.
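Those two figures can be reconciled with a back-of-envelope check. The sketch below takes the quoted numbers (wafer cost up ~13 per cent per node, transistor cost down ~30 per cent per node) and works out the per-node density gain they jointly imply; the density figure is my inference, not something stated in the article, and the five-node decade is an assumption based on the two-year cadence.

```python
# Back-of-envelope check of the figures quoted above. The implied density
# gain is inferred from the two quoted numbers, not stated in the article.
wafer_cost_growth = 1.13      # wafer cost rises ~13% per node
transistor_cost_ratio = 0.70  # cost per transistor falls ~30% per node

# Density gain per node needed for both statements to hold at once:
# cost_per_transistor = wafer_cost / transistors_per_wafer
implied_density_gain = wafer_cost_growth / transistor_cost_ratio
print(round(implied_density_gain, 2))   # ~1.61x more transistors per area, per node

# Compounded over five nodes (assuming one node every two years for a decade):
print(round(wafer_cost_growth ** 5, 2))      # wafer cost ~1.84x higher
print(round(transistor_cost_ratio ** 5, 2))  # cost per transistor ~0.17x, an ~83% drop
```

In other words, the quoted numbers are consistent with each node delivering roughly a 1.6x density improvement, short of a full doubling but enough to keep shrinking the cost per transistor despite ever-pricier wafers.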

Taking all that into account, Moore’s Law seems viable for yet another decade.
