Farewell to flatland

Chipmakers are reaching the limits of flat electronic circuits - the only way is up.

Moore's law, as we know it, is living on borrowed time. Technologies that have made it possible to pack more than a billion transistors onto one centimetre-square piece of silicon are reaching the point where they cannot scale much further.

The memory that makes up the cache in today's microprocessors is one example. Some believe that static random-access memory (SRAM) cells cannot shrink any further. The problem is that the margins are now so slim that small changes in transistor behaviour can stop a proportion of the SRAM cells from working. Most solutions to the variability incur overhead: they make the cell bigger. One option is to go to a more reliable eight-transistor design, but that will be a third bigger than today's cells. That is not good news when you are trying to reduce the cell's size with each process generation.

To deal with the problem at the 45nm node, companies such as Intel and TSMC made the cell wider. Historically, SRAM cells were more or less square. Now they are very fat and squat. It turns out that, if you can make the transistors wide, albeit very short, they work more reliably. But reshaping the cell only gets you so far and the size of the cell is important.

SRAM makes up about half the area of a high-end microprocessor. And, in some cases, it accounts for a lot more. Stop shrinking the SRAM and you pretty much stop shrinking the processor. Does it mean Moore's Law is getting close to its end?

There is a direction the industry could take.

"I think Moore's Law was a little vague in its original formulation," says Ian Phillips, principal staff engineer at ARM. The alternative to making everything take up less area is to start building up - move the electronic circuit into the third dimension. "It is likely that we will see 3D integration and other types of integration in the future," he adds.

"It is an evolution that is going to happen," agrees Mike Shapiro, chief engineer for 3D technologies at IBM and publicity chair for the International Interconnect Technology Conference (IITC).

The chips are... up?

In one area, 3D is already here and shipping in high volume. If you have a flash-based MP3 player with more than a couple of gigabytes of storage in it, the chances are that it uses memory chips arranged in a stack inside.

"Today, there is essentially 3D on the market - with wire-bonded stacks," says Shapiro.

The move to stacked complete chips on top of each other started in the cellphone market. Pressure to reduce the amount of printed-circuit board (PCB) space led memory makers to stop providing each memory device individually but put them on top of each other. They could combine different types of memory in the same plastic package and sell it as one unit. One common offering was to put non-volatile flash together with dynamic random access memory (DRAM). The flash would hold the software code and user data while the phone was switched off. When it booted, much of the software would be copied into the DRAM and run from there because the volatile memory offered faster accesses.

Since then, memory makers have stacked more and more flash devices on top of each other to cope with the demand for media storage. The Apple iPhone, for example, uses four chips stacked on top of each other to be able to provide space for 8GB of music and video. The technique has proved so successful that the iPhone uses quite a few different stacked packages. Even the baseband processor that handles the phone calls sits in a stack.

One factor that has led to the rapid take-up of stacked chips in phones and media players is that it is relatively simple to do. The equipment used to stack the chips and link them together is based on the same kit that has been putting silicon dice into plastic packages for the last 40 years.

Another advantage is that there is no need to redesign the chips themselves to cope with stacking. Manufacturers use the same parts on their own or in stacks. This has provided the flash makers with much more flexibility than they had in the past to supply a broad range of memory densities for different products. Unless you check, the only clue as to whether a flash part is a stack is the price. If an 8GB part is about twice the price of a 4GB device, the chances are that it is a stack. If that price drops suddenly, it marks a shift to a single-die product due, in most cases, to the manufacturer's move to the next process node.

"I think the trend of using wirebond stacks will continue for devices with lower I/O counts," Shapiro predicts.

The reason for wirebond stacks being restricted to devices that do not need many I/O connections between chips is that you only have the perimeter of the chip available for making the connections. There is no way to get inside the sandwich using conventional techniques. Luckily, memory devices do not need large I/O counts. Processors and complex system-on-chip (SoC) devices, on the other hand, do. They need to be able to hook up to a variety of different types of memory, other processors and all the analogue I/O chips. Pretty soon, you run out of space on the perimeter for all those connections.
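The gap between the two approaches is easy to quantify: perimeter connections grow linearly with the die edge, while area connections grow with its square. A rough sketch of that scaling, using assumed example figures for die size and pad pitch rather than any real product's numbers:

```python
# Illustrative only: compares the I/O pads available around a die's edge
# (wirebond-style) with those available across its whole face (flip-chip
# or through-silicon-via style). The 10mm die and 100µm pitch are assumed
# example values, not taken from any specific device.

def perimeter_pads(die_edge_mm: float, pitch_mm: float) -> int:
    """Pads that fit around the four edges of a square die."""
    return 4 * round(die_edge_mm / pitch_mm)

def area_pads(die_edge_mm: float, pitch_mm: float) -> int:
    """Pads that fit if the entire die face can carry connections."""
    per_side = round(die_edge_mm / pitch_mm)
    return per_side * per_side

edge, pitch = 10.0, 0.1   # 10mm die, 100µm pad pitch (assumed)
print(perimeter_pads(edge, pitch))  # 400 pads around the edge
print(area_pads(edge, pitch))       # 10,000 pads across the face
```

At these assumed figures the face offers 25 times as many connection points as the perimeter, which is why a processor with a large I/O budget quickly outgrows wirebonding.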

What you can do is put the processor at the bottom of the stack so that you can cover its entire surface with I/O pads: this is the essence of the flip-chip package. These pads connect to the PCB. You can then stack other devices on top, such as memories, using wirebonds, just as long as they do not need wide buses. Memory technologies such as Rambus' XDR allow you to obtain high datarates with narrow buses. But, ultimately, for stacked chips to proceed much further, you need another way of making connections through the stack.

Through-silicon via

The through-silicon via effectively turns the die into one layer of a PCB. By forming contacts through the silicon die on which the transistors and other circuits sit, you can provide direct connections to the chips sitting above and below. Through-silicon vias are not new: power transistors have been making use of the technique for years to provide much better isolation than is possible when you put all of the contacts on the same surface. However, the holes those devices need are much bigger than the ones planned for interconnecting complex digital devices. But it is a technique that companies are beginning to employ.

One example is a family of image sensors that Toshiba launched in the autumn. As with stacked memories, a big target market is the cellphone. For the new modules, Toshiba stacked the image sensor on top of a processing chip to produce a smaller camera module. To form the contacts between the two chips, the company drilled vias through the image sensor.

Because you can distribute through-silicon vias across the entire surface of the die, you can have very wide buses running between chips in a stack. "You are looking at higher bandwidth connections between chips," says Shapiro.
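The bandwidth argument comes down to width times per-pin data rate. A back-of-envelope sketch, with purely assumed figures for both buses (neither corresponds to a specific published interface), shows how a wide, slower through-silicon-via bus can out-carry a narrow, fast perimeter bus:

```python
# Back-of-envelope bus bandwidth: bytes/s = width_in_bits * rate_per_pin / 8.
# The widths and per-pin rates below are assumed illustrative numbers,
# not the specifications of any real memory interface.

def bandwidth_gbytes(width_bits: int, rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a parallel bus."""
    return width_bits * rate_gbps_per_pin / 8

narrow_fast = bandwidth_gbytes(16, 3.2)   # narrow, high-speed perimeter bus
wide_slow = bandwidth_gbytes(512, 0.8)    # wide, modest-speed TSV bus
print(narrow_fast)  # ~6.4 GB/s
print(wide_slow)    # ~51.2 GB/s
```

Even clocked at a quarter of the per-pin rate, the 512-bit bus delivers eight times the bandwidth, because width is no longer rationed by the die perimeter.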

The big problem with making through-silicon vias is not creating the holes, but filling them with conducting material (see box, ‘Digging holes'). Issues with the filling processes place a minimum size on the holes themselves. The holes will interfere with the layout and reduce chip density. In deep-submicron processes, the hole for a through-silicon via is potentially as big as a bond pad for a conventionally wire-bonded chip. But it's in the middle of the die.

According to Scott Pozder, a researcher with Freescale Semiconductor, the minimum practical diameter for a through-silicon via for bonded wafers is around 1µm, and likely to be somewhat larger, with a pitch of around 3µm. These vias are way larger than those that link transistors together on the surface of a regular planar die: on a 65nm process, those conventional vias are no bigger than 100nm on the densest layers, although they do get bigger as you move up the metal stack.

And manufacturers have the issue of deciding whether to stack at the wafer or the chip level. The wafer option looks simpler and cheaper at first glance. But you run the risk of putting good chips on top of duds. Every wafer will contain, with any luck, only a small proportion of failed chips. Stacking wafers on top of each other means you multiply the chances of building stacks that do not work. Even at 90 per cent yield per wafer, a four-deep stack will mean you end up with a third of your stacks being duds.
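The arithmetic behind that figure is simple compounding: a stack works only if every layer in it works, so the yields multiply. A quick sketch:

```python
# Yield of a wafer-level stack: each layer is good independently with
# probability `layer_yield`, so a k-deep stack works only if all k layers
# are good. At 90% per layer, four layers gives roughly 66% good stacks,
# i.e. about a third are duds, as the article notes.

def stack_yield(layer_yield: float, layers: int) -> float:
    """Fraction of stacks in which every layer is a working die."""
    return layer_yield ** layers

good = stack_yield(0.9, 4)
print(round(good, 4))        # ~0.6561, so ~34% of stacks fail
```

Testing dice before stacking sidesteps this compounding, which is the case for chip-level assembly made in the next section.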

Hidden costs

Making the stacks at the chip level means you can first test all of your chips and then stack them. But it is potentially more time-consuming and, therefore, expensive because you have many more manufacturing operations to perform.

One option is to have the vias shaped so that they assist in a self-assembly process. Potentially, you can modify the surface of each chip so that they only lock together in the right orientation and in the right place.

To improve density, if you can make the layers ultrathin, you can bring the width of the via down to the size you see in the metal stack of a regular planar chip. This was the approach taken by Anna Topol's group at IBM a couple of years ago. The company has developed a number of sophisticated techniques that make it possible to build layers on separate wafers, then take the micron-thick films and place them on top of each other to form a complete 3D chip (see boxout, ‘Captured on film').

Topol claimed at the International Electron Device Meeting in the winter of 2005 that they had made 3D circuits with vias as small as 140nm across. With these structures, it was possible to place sub-circuits on top of each other.

Cost is a significant factor in this type of process. As layer transfer is in its infancy, it is hard to gauge how expensive it could be. But, as it calls for two wafers to begin with, it seems likely that the materials cost would be at least double that of a conventional planar structure. That cost would be partially outweighed by the density improvement of stacking sub-circuits on top of each other.

There is an alternative, Samsung has found. Instead of taking a ready-made silicon substrate in the form of a wafer and going through the process of thinning it down, placing it on a separate ‘handle' wafer, attaching it to another wafer and then forming the connections, Samsung engineers decided to grow the silicon surface in place.

The first steps are just like any other silicon process. In the S3 process, the engineers form a regular array of transistors and an initial layer of interconnect to wire them together. They polish the surface flat, as with any other modern semiconductor process. But then, instead of putting on another layer of metal interconnect, they grow a layer of silicon several micrometres thick. They then form a second layer of transistors, cut holes in the surface to wire up the lower layer and then continue putting on the metal layers that will form the complete array of circuits.

Samsung has built both experimental SRAM and flash memory devices using S3. Although it is potentially cheaper than any of the other techniques, growing layers of transistors in situ is not entirely straightforward. You have the problem that the processes used to form transistors involve high temperatures - interconnect layers suffer badly at those temperatures.

The high temperatures are used to anneal the silicon surface after it has been damaged by the processes needed to bury dopant atoms deep beneath the surface of the silicon. One option is to use lasers to cook only parts of the second silicon substrate so that the layers underneath are not affected. But this is more expensive than just putting the wafer in an oven and cooking it - today's method.

The other option, favoured by Samsung, is to use lower-temperature processes to form the transistors. There is a potential downside in that lower temperatures may not lead to optimally performing transistors. However, bulk memories do not generally need high-performance transistors.

In the short-term, anyone planning to go into the third dimension for anything other than memories will have to contend with a design gap. "Eventually, we will go to 3D LSI [large-scale integration]. But there are no sufficient tools to simulate for 3D LSI," says Oh-Hyun Kwon, president of Samsung's System LSI division.

Although 3D integration still has its problems, the time is coming when the only way the semiconductor industry can go is up.
