
Moore’s Law 2017: an uphill battle

“I don't think anyone could confidently tell you that they have a plan for 15 more years of Moore's law,” says Greg Yeric, director of future silicon technology for ARM Research.

Moore’s Law, first hypothesised in 1965 by Intel co-founder Gordon Moore, states that the number of transistors in a dense integrated circuit doubles approximately every two years. Shrinking transistors allows more of them to fit within the same area, yielding a faster processor with lower power requirements.
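To make the arithmetic concrete, here is a minimal sketch of the doubling rule (the starting count and timespan are purely illustrative, not tied to any real chip):

```python
def transistors(n0, years, doubling_period=2.0):
    """Project a transistor count under Moore's Law:
    the count doubles every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# Illustrative: a 1-billion-transistor chip, projected ten years out
print(f"{transistors(1e9, 10):.2e}")  # 3.20e+10, i.e. 32x more transistors
```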

Although the law was adhered to rigidly for half a century, in 2015 Intel admitted that the pace of advancement had started to slow down. Its eighth-generation Core CPUs, codenamed Coffee Lake, are set to launch in the second half of 2017 and will once again be built on the same 14-nanometre (nm) process used three generations prior for its Broadwell chips, originally released in 2014.

What this means for the future of Moore’s Law is currently undetermined. But while processor manufacturers may be facing some roadblocks ahead, they are far from giving up.

“I do think we are approaching the limits of conventional scaling with silicon,” said Yeric, who works for ARM, the Cambridge-headquartered company that designs the architecture that powers the vast majority of processors used in mobile phones and tablets today.

“If you just did this math, you could convince yourself that cost per transistor scaling will soon stop and you’d get pretty pessimistic about the industry’s future.

“The ‘end of Moore’s Law’ is more of a ‘tailing off of the historic Moore’s Law curve’. How far off of 50 per cent transistor cost every two years do we go until it is ‘the end’?”
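To see what that “math” might look like, here is a rough sketch comparing the historic 50 per cent cost reduction per node with two “tailing off” scenarios; all figures are illustrative, not industry data:

```python
# Cost per transistor over five two-year nodes, normalised to 1.0.
# 0.5x per node is the historic Moore's Law rate; the larger factors
# model the "tailing off" of the curve that Yeric describes.
for factor in (0.5, 0.7, 0.9):
    trajectory = [factor ** node for node in range(6)]
    print(f"{factor}x per node:", ["%.2f" % c for c in trajectory])
```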

[Image: Intel's timeline since 2008]

Back to basics

A processor starts as a silicon wafer topped with a photoresist layer that breaks down when exposed to ultraviolet light. A solvent is then applied to remove the exposed areas and create trenches in the silicon underneath.

So the light pattern beamed onto the top layer effectively shapes the structure of the underlying silicon, a nanoscale circuit diagram composed of billions of transistors and their interlinking pathways.

The trenches formed are then doped with ions that conduct electricity, allowing current to be directed through the transistors, each of which sits in either an on or an off state.

This on/off paradigm effectively creates the binary system that is the basis for all computer code.
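As a toy illustration of that binary basis, eight transistor states read together form one byte:

```python
# Eight transistors, each off (0) or on (1), interpreted as one byte
states = [0, 1, 0, 0, 0, 0, 0, 1]
value = int("".join(str(s) for s in states), 2)
print(value, chr(value))  # 65 A - the ASCII code for the letter 'A'
```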

[Image: Engineers at a fab assessing a wafer]

Roadblocks

The main difficulty currently facing processor manufacturers is the continued miniaturisation of the light pattern that is applied to the photoresist layer.

In the factory, this layer is exposed to precisely patterned laser light. The most advanced processors available today use a 10nm process to form the circuit pattern, but the minimum wavelength of the light produced by the lasers used in traditional processor fabrication plants (fabs) is 193nm, and has been for around the last 20 years.

Creating processors at a 10nm scale using light with a wavelength almost 20 times larger has become an increasingly complex and expensive process, while the insatiable global appetite for the latest computing technology has pushed traditional fabrication plants to their limits.

Patterning

In order to pattern 10nm features with 193nm light, manufacturers use a process called multiple patterning.

Effectively this means burning multiple grid-like patterns onto the photoresist layer to produce thinner lanes than would be possible with just one pass.

Yeric compared the process to staring at a light bulb through the gaps in your fingers. With one hand the beam can be filtered to such a degree that only a crack of light can be seen through it. Placing the other hand on top miniaturises this beam even further. The more patterns placed on top of one another, the finer the channels that can be carved onto the chip.

“The problem is we've had to go from double patterning in two layers, to triple patterning in a dozen layers and now we're doing quadruple patterning at 10nm and 7nm and that is just cost that you can't get back,” Yeric said. “This is not a sustainable paradigm.”
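A back-of-envelope sketch captures how the passes pile up as the target pitch shrinks; the ~80nm single-exposure limit assumed here for 193nm immersion tools is illustrative:

```python
import math

SINGLE_EXPOSURE_PITCH_NM = 80  # assumed limit for one 193nm exposure

def exposures_needed(target_pitch_nm):
    """Passes needed to reach a target pitch by interleaving patterns."""
    return math.ceil(SINGLE_EXPOSURE_PITCH_NM / target_pitch_nm)

for pitch in (80, 40, 27, 20):  # coarse node -> 10nm/7nm-class pitches
    print(f"{pitch}nm pitch: {exposures_needed(pitch)} exposure(s)")
```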

Each additional patterning step adds cost, time and opportunities for error, so continued refinement of this process will eventually make chips prohibitively expensive, less reliable and too slow to manufacture. An alternative therefore has to be found.

A thinner laser

In order to reduce reliance on the increasingly expensive patterning process, a light source with a far shorter wavelength has been developed that can apply the pattern in just one pass, eschewing the need for multiple patterning (for now). The “saviour” for the industry, as Yeric describes it, is extreme ultraviolet lithography (EUV).

This technology produces light with a wavelength of around 13nm, but it has faced a long, torturous development process ever since early versions were first demonstrated in the 1990s.
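The jump matters because the smallest printable feature scales with wavelength. Lithographers commonly estimate it with the Rayleigh criterion, resolution = k1 × λ / NA, where λ is the wavelength, NA the numerical aperture of the optics and k1 a process-dependent factor. A sketch with typical textbook parameter values (not figures from the article):

```python
def rayleigh_resolution_nm(wavelength_nm, numerical_aperture, k1):
    """Minimum printable feature size per the Rayleigh criterion."""
    return k1 * wavelength_nm / numerical_aperture

# Illustrative parameters for 193nm immersion vs EUV optics
print(f"{rayleigh_resolution_nm(193, 1.35, 0.30):.0f}nm")   # ~43nm
print(f"{rayleigh_resolution_nm(13.5, 0.33, 0.40):.0f}nm")  # ~16nm
```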


“Making this 13nm light is one of the most complicated engineering feats of modern technology,” Yeric explains. “I'm talking about everything including space exploration. The technology for EUV is mind-bogglingly complex.”

His further explanation makes it easy to see why.

“First of all, 13nm light doesn't go through anything, it gets absorbed by anything, even glass. You can make the photosensitive layers that absorb the light but you have to do it all in a vacuum because the light gets absorbed by the air. So you have to make a gigantic vacuum and that’s expensive.

“To make the light, in the vacuum, you melt tin into tiny little droplets and drop them in very quick succession. You then use a laser to vaporise the droplets which produces the light beam at the right width.”

But this process scatters the light in random directions, so a series of atomically perfect mirrors, known as Bragg reflectors, is needed to focus it correctly. Each is built from alternating layers of material just nanometres thick.

“Each one of those mirrors is expensive. You have to use special physics to work with 13nm light, they have to be perfect. The specifications of those are so much tighter than, say, the Hubble Telescope mirrors.”

“The problem with this,” Yeric explains, “is that it’s such a brute force tactic that the laser itself needs to be around 25kW.”

In effect, the inherent inefficiencies in the process are so great that the vast majority of the light produced by the laser is lost. For perspective, some of the most powerful steel-cutting lasers on the market today are just 5kW.
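A back-of-envelope calculation makes the point; the usable output figure below is an assumption for illustration, not a number from the article:

```python
drive_laser_w = 25_000  # the ~25kW drive laser Yeric describes
usable_euv_w = 250      # assumed usable EUV power reaching the wafer
efficiency = usable_euv_w / drive_laser_w
print(f"{efficiency:.1%} usable")  # 1.0% - roughly 99% is lost on the way
```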

“So just the laser itself has required a huge engineering development; that’s been one of the delays of EUV technology. When you add it all up, it's $100m per EUV tool. A factory needs more than one; these big factories need many of these things so they can put enough wafers through to make their money back.”

When considering the cost and complexity of constructing an EUV machine, it’s clear why the technology has taken such a long time to be incorporated into mainstream manufacturing processes.

In February, Intel announced plans to equip its Fab 42 in Arizona for 7nm manufacturing. The plant has been under construction since 2011 but is not yet operational.

Although the building itself was completed in 2013, Intel shelved its original plan to use it for 14nm production, opting instead to target 10nm and ultimately 7nm chips.

Intel has not announced which technology it intends to use in the factory, but it is spending up to $7bn to equip it and has implied that it could adopt EUV at the 7nm node. The new process would also give Moore’s Law a window in which further improvements can be predicted.

“EUV coming through on its roadmap would be a huge enabler to a nice-looking 5nm node and then the ability to scale past 5nm,” Yeric believes.

EUV could eventually open the door to 3.5nm transistors, but even this technology will ultimately hit the same roadblocks: in about five to seven years it too will require multiple patterning, and Moore’s Law will stutter once again.

Alternative options

But creating smaller transistors is not the only way to increase processor performance.

For decades, improvements in CPU performance were largely achieved by shrinking the area of the integrated circuit. Cramming an ever larger number of transistors in a smaller area brought automatic speed and efficiency improvements.

Clock rates, effectively a measure of how fast a processor can work, increased by three orders of magnitude as a result, from several megahertz in the 1980s to several gigahertz in the early 2000s.

But as progress in this area became more difficult, alternatives were sought to improve performance without resorting to expensive and difficult miniaturisation. In 2005 Intel shipped its first dual-core processor, a significant shift away from the well-trodden single-core paradigm of the preceding 40 years.

Multi-core chips effectively join two or more processors together so that they can work in parallel on different sections of the same workload. While this requires programmers to write parallel code to exploit a processor’s full capacity, it allowed Intel and other manufacturers to put more transistors on a chip while minimising R&D investment.

Nowadays most processors, even those used in low-power mobile devices such as smartphones, include two, four or even eight cores running in parallel.
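To show what that parallel approach means for programmers in practice, here is a minimal sketch that splits one workload across four cores using Python's standard library:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each core works on its own section of the same workload."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    n, cores = 10_000_000, 4
    chunks = [range(i, n, cores) for i in range(cores)]  # four sections
    with ProcessPoolExecutor(max_workers=cores) as pool:
        total = sum(pool.map(partial_sum, chunks))  # runs in parallel
    print(total)
```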

3DIC

3D integrated circuitry (3DIC) is an alternative approach to processor design that can deliver speed improvements without a smaller manufacturing process, much as the multi-core boom did in the mid-2000s.

3DIC is effectively the stacking of parts of a processor on top of one another rather than spreading them across a 2D plane. This can have the effect of making the chip physically smaller and is used in devices where physical space is at a premium, such as the Apple Watch.

But it can also allow the chip to relay information between its memory and its processing cores more efficiently, by shortening the distance signals have to travel and thereby reducing the resistance and capacitance inherent in longer wires.

Effectively, the further an electric signal has to travel, the more energy it loses to resistance; that energy is given off as heat, which can cause errors in the processor’s calculations. Heat is the enemy of processors, as it forces them to slow down their operations. Supercomputers, for example, are often aggressively cooled so that they can run as fast and efficiently as possible.
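The penalty of long wires can be captured with a simple distributed-RC delay model: resistance and capacitance each grow linearly with wire length, so the delay grows with its square. The per-millimetre values below are illustrative, not from any real process:

```python
R_PER_MM = 1_000.0   # wire resistance, ohms per mm (illustrative)
C_PER_MM = 0.2e-12   # wire capacitance, farads per mm (illustrative)

def wire_delay_ps(length_mm):
    """Distributed-RC delay estimate: t ~ 0.38 * R * C."""
    return 0.38 * (R_PER_MM * length_mm) * (C_PER_MM * length_mm) * 1e12

for length_mm in (1, 2, 4):
    print(f"{length_mm}mm wire: ~{wire_delay_ps(length_mm):.0f}ps")
# Halving the wire length cuts the delay by 4x, not 2x - hence stacking
```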

“There are high-performance products that you can buy today – one example is from Nvidia and there’s one that AMD announced recently – that are using 3DIC to increase the performance of their chip,” Yeric said.

“If you look at a regular monolithic chip, you have a processor that has to go to the memory and get some memory and that's going across wires in a two-dimensional fashion and those wires get fairly long, and as they get longer the chips get slower as resistance and capacitance builds up.”

“So what you see in these products now is that they’re taking the memory and stacking it in order to reduce the overall distance that the signals are travelling.”

This allows the chips to run faster within a given power budget; PC sockets, for example, can supply only a limited amount of power. 3DIC maximises performance under these constraints.

What does the future hold?

3DIC and multi-core processors deliver increases in speed, improvements in power efficiency, or both, by taking a route that sidesteps transistor miniaturisation.

Directed self-assembly (DSA) is one avenue being explored extensively in research labs as a way to keep Moore’s Law going through the traditional process of miniaturisation.

As Yeric explains, the process works like this: “Currently, photoresist polymers land on the [silicon] wafer in strings and the light breaks them up. DSA takes these strings and engineers them.

“They are made of two ends that are joined in the middle. The two ends are like oil and water and don't like to be around each other.

“You make a bunch of strings that are a certain length, one half is red, one half is blue, they will automatically unfold and be straight trying to get away from each other. If you then chuck a bunch of those strings onto the wafer, they will automatically self-assemble into a red-red-blue-blue pattern, which is sized based on how long you have cut the string.”

This would ultimately create a pattern like the one made by the multiple-patterning process, but would cut down on time and complexity, and could improve the reliability of chips created in a single step rather than many.

“So it’s not free, you need one patterning step,” Yeric says, “but it’s not as difficult a patterning step as without the DSA. So there is the potential of a cost saving there, where you can reduce multiple patterning and you can just do one pattern that’s not that hard, and it will work out in your favour.”

DSA is being heavily researched and is producing some interesting results, but Yeric predicts that it will not reach production use for at least four to five years.
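The stripe period really is set by how long the strings are cut: block-copolymer theory puts it at roughly L0 ∝ a·N^(2/3)·χ^(1/6), where N is the number of monomers per string, a the monomer size and χ the “oil and water” repulsion between the two halves. A sketch with illustrative parameter values:

```python
def stripe_period_nm(n_monomers, monomer_nm=0.6, chi=0.04):
    """Approximate DSA stripe period (strong-segregation scaling:
    L0 ~ a * N^(2/3) * chi^(1/6)); all parameters are illustrative."""
    return monomer_nm * n_monomers ** (2 / 3) * chi ** (1 / 6)

for n in (200, 500, 1000):  # longer strings -> wider stripes
    print(f"N={n}: ~{stripe_period_nm(n):.0f}nm period")
```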

The battle to keep Moore’s Law afloat is not yet lost, but the fight will get harder from here.

“There is no shortage of new approaches in the design realm that could realise more performance,” Yeric believes. “It’s just that past progress in Moore’s Law has been achieved by implementing the easy ones. Now we are faced with more difficult choices.”
