Increasing the regularity of chip designs may reduce problems in manufacturing.
How bad is the problem? Current photolithography systems use 193nm light to pattern the photoresists that define chip features. But today's most advanced chips are made on processes with minimum dimensions of 45nm, or about a quarter of the illuminating wavelength. Some foundries will start offering 32nm processes, for customers to use at their own risk, next year. Beyond that, 22nm processes are in development so that the industry can keep up with the hard taskmaster that is Moore's Law.
Various techniques have been used to push processes this far past the point where diffraction effects kick in. These have included using phase-shift masking to turn diffraction to the designers' advantage, and optical proximity correction (OPC) techniques, such as adding 'ears' to the corners of layout elements so that they are printed as right angles. The problem is that chipmakers are applying ever more Byzantine combinations of these techniques to keep up with shrinking process dimensions, and getting diminishing returns. Device characteristics are varying wildly as more aggressive OPC techniques are pressed into action. The result has been declining yields and declining performance gains from each new process generation, as design margins have widened to cope with optically induced device variations.
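As a rough illustration of the rule-based side of OPC, the 'ears' trick amounts to adding small assist squares at the corners of a drawn rectangle so the printed feature stays closer to a right angle. The Python toy below uses illustrative dimensions; real OPC engines are model-driven and far more elaborate than this sketch.

```python
# Toy sketch of rule-based OPC: add square 'serifs' (the 'ears' mentioned
# above) at each corner of a rectangular layout element.
# Dimensions in nanometres; values are illustrative, not a real recipe.

def add_corner_serifs(rect, serif=10):
    """rect is (x0, y0, x1, y1); returns the rectangle plus four small
    serif squares, one centred on each corner."""
    x0, y0, x1, y1 = rect
    h = serif / 2
    corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    serifs = [(cx - h, cy - h, cx + h, cy + h) for cx, cy in corners]
    return [rect] + serifs

# A 200nm x 100nm element gains four 10nm serifs:
shapes = add_corner_serifs((0, 0, 200, 100), serif=10)
```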
The chip equipment industry has been working to solve this for years, but couldn't persuade its customers that moving to 157nm illumination sources was worth the development costs given the marginal advantages it would bring over 193nm light. Is there any light at the end of this tunnel? It's not looking good.
There are soft X-rays, also known as extreme ultraviolet (EUV) sources, which would solve the diffraction problem at a stroke by offering illumination at 13.5nm. Unfortunately, the development of EUV lithography is fraught with difficulties. The EUV sources wear out too quickly, the masks are difficult and expensive to make, and the throughput of the prototype steppers is nothing to write home about. No-one now expects EUV lithography to be applied before the 22nm process generation is upon us in a few years' time.
So the equipment companies have been thrown back on improving their 193nm equipment. They've developed lenses with higher numerical apertures that can bend the light ever more aggressively, and put layers of liquid between the lens and the wafer to further enhance the lenses' ability to pattern wafers at small fractions of the illuminating wavelengths. But it appears as if there is an end in sight to what can be done, even with these techniques.
Neil Carney, vice president of marketing at EDA company Tela Innovations, says: "It is getting to the point where the lenses are getting so big that they distort under their own weight."
Joe Sawicki, vice president and general manager of the design-to-silicon division of EDA company Mentor Graphics, says: "Unless we find a new fluid, which is unlikely, or a new light, which is also unlikely, the same inherent resolution that we use for the 32nm process node is what we will use for the 22nm node.
"At 22nm, 100 per cent of the improvement in resolution will come from reticle enhancement techniques, as we move to computationally enabled lithography." But it won't be smooth sailing, according to EDA market analyst Gary Smith.
"At the 65nm and 45nm process nodes the amount of design for manufacturing (DFM) fixes is becoming unbearable," he says. "There are two solutions to the problem. The first is the new DFM routers. As they are DFM-aware they can run the rule-based checks and ensure that around 80 per cent of the DFM problems are avoided. The layout team can then use model-based DFM to take care of the remaining 20 per cent.
"The other fix is to use restricted design rules (RDRs). So far the designers have pushed back on RDRs, but at 45nm all of the silicon I've seen has at least three RDRs included in the rule deck. At the 32nm and 22nm nodes it will be unrealistic for the layout team to fix all of the problems that escape the DFM routers, because the time and cost [involved in using] DFM tools will make it prohibitive.
"So there will be a significant increase in the number of RDRs used. Anyone that has any back-end experience knows that if the physical libraries complied with DFM rules, in all configurations, a lot of the DFM problems would go away. DFM tools will still be used but their usage will level off to a manageable point." Smith argues that the way forward will be to rework the basic building blocks of chip design, the standard cell libraries, so that they pass through the highly sensitive lithography process with much greater determinism and repeatability.
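The rule-based checks Smith describes boil down to simple geometric predicates over the layout. The Python sketch below, with made-up rule values, shows the flavour of a recommended-spacing check and of a restricted design rule that pins gates to a fixed pitch; production DFM routers are, of course, far more sophisticated.

```python
# Sketch of the kind of rule-based checks a DFM-aware router might run:
# a minimum recommended spacing between wire rectangles, and a restricted
# design rule (RDR) that gates sit on a fixed pitch in one orientation.
# Rule values are hypothetical, not from any real rule deck.
import math

MIN_SPACING = 70   # nm -- hypothetical recommended wire spacing
GATE_PITCH = 190   # nm -- hypothetical fixed gate pitch (an RDR)

def spacing_violations(rects):
    """Flag pairs of rectangles (x0, y0, x1, y1) whose edge-to-edge
    distance falls below MIN_SPACING (overlaps count too)."""
    bad = []
    for i, a in enumerate(rects):
        for b in rects[i + 1:]:
            dx = max(b[0] - a[2], a[0] - b[2], 0)
            dy = max(b[1] - a[3], a[1] - b[3], 0)
            if math.hypot(dx, dy) < MIN_SPACING:
                bad.append((a, b))
    return bad

def gates_on_pitch(gate_xs):
    """RDR check: every gate x-coordinate must sit on the fixed pitch."""
    return all(x % GATE_PITCH == 0 for x in gate_xs)

# Middle rectangle sits only 50nm from the first -- one violation:
violations = spacing_violations(
    [(0, 0, 100, 100), (150, 0, 250, 100), (400, 0, 500, 100)])
```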
"There are a few DFM compliant libraries around, but way too few," Smith says. "IBM has theirs, PDF Solutions has a library available, and now there is Tela."
Why aren't standard-cell libraries compliant with their target manufacturing processes as a matter of course? Part of the reason is that cell libraries are difficult and expensive to develop and maintain, and so they tend to be ported from process to process rather than being rebuilt from scratch each time a process is introduced.
Another reason is that the cell libraries are getting bigger, partly to accommodate high-performance and low-power requirements, and partly to make it easier for synthesis tools to map register-transfer level designs to the library.
Cell library designs have usually been optimised for circuit performance and density, not manufacturability. After all, a micron here and a micron there and pretty soon you're talking about real area, especially if a cell gets used tens of millions of times in a design. So, designers are loath to make changes to cells.
No obvious pattern
Standard-cell libraries tend to be full of highly optimised cell designs made up of elements that may be L-, S-, Z- or T-shaped and often include diagonal edges and serpentine tracks. It's these irregular features that are becoming particularly difficult to pattern successfully with the highly sensitive lithography systems in use today. There's another problem. Most standard cells are optimised in isolation but are used on a chip next to a wide variety of other cells. Dealing with the multiple possible interactions between these cells, both in the lithographic process and as circuit elements on a common substrate, is a further challenge.
Density is key to both SRAM and NAND flash designs, and both are usually among the first designs produced on a new process. One of the reasons that this is possible is that they use highly regular structures of cells that are essentially made up of orthogonal elements. Companies such as Tela Innovations are now trying to bring that kind of regularity to the much richer cell libraries of logic processes, in order to gain the same sort of manufacturability benefits.
"It is very difficult for customers to work out what to draw to get good, yielding silicon," says Scott Becker, CEO of Tela. "At 45nm it is almost impossible to pattern hard corners, and the interactions between shapes are creating variability. So how do we optimise for the process instead of forcing the design on the process?
"You have to tell designers what they have to do, not what they can do."
Design rule manuals already try to do this, but Carney says that for 32nm processes the manuals are now 2,000 pages long, making it tremendously difficult for anyone to comprehend the full set of constraints they should take into account during design.
What Tela offers is a way around the problem. It provides a library of 50 context-independent cell topologies that are designed to act as the basis of a revised cell library, and a tool and service to help customers convert their libraries to the new format. Tela claims that its tool can preserve the library designers' intent such as transistor sizing, and even work with complex cell designs that include special features such as strain engineering.
Context independence is a critical part of the architecture. Anyone can ensure that cells don't interact with each other, simply by pushing them far enough apart. Tela claims that its basic cell topologies have been designed so that they can be closely abutted without interacting.
"One of the things we have done to get the area is the way we have architected the cells to abut each other," says Carney. "Typical place and route tools work on the M2 grid. We allow the swapping out of cells in order to get the regularity of cell placements. The prescription for the 'canvas' maintains the regularity." Carney claims the Tela approach can also help reduce in-wafer variability.
"By having a fixed grid for cell design we have been able to cut leakage power 2.5x in one customer design," he says. "Active power is related to on-current, and the capacitance of the lines and the gates being driven. Leakage power is related to transistor shape so we can mitigate that shape-based variability, although not the process-based variability."
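Carney's distinction between active and leakage power follows the standard first-order CMOS models. The sketch below uses those textbook approximations with illustrative numbers, not Tela's internal model: switching power scales as a·C·Vdd²·f, while subthreshold leakage is exponential in threshold voltage, which is why shape-induced gate-length variation hits leakage so hard.

```python
# First-order CMOS power models (textbook approximations; the parameter
# values are illustrative, not from any customer design).
import math

def dynamic_power(c_load_f, vdd, freq_hz, activity=0.1):
    """Switching power: P = a * C * Vdd^2 * f."""
    return activity * c_load_f * vdd**2 * freq_hz

def relative_subthreshold_leakage(delta_vth_v, n=1.5, kt_q=0.026):
    """Leakage change for a threshold-voltage shift delta_vth
    (e.g. from shape-induced gate-length variation):
    I_leak is proportional to exp(-Vth / (n * kT/q))."""
    return math.exp(-delta_vth_v / (n * kt_q))

# 10fF load, 1.0V supply, 1GHz clock -> about 1 microwatt per net;
# a 50mV drop in Vth raises leakage roughly 3.6x:
p_dyn = dynamic_power(10e-15, 1.0, 1e9)
ratio = relative_subthreshold_leakage(-0.050)
```

The exponential sensitivity in the second function is the quantitative reason a gridded layout, which tightens gate-shape control, can cut leakage far more than it cuts active power.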
Tela's approach doesn't do away with the need for some of the optical tricks that have been used to push processes this far, but Carney argues they can be applied less aggressively.
"You still need optical proximity correction but it is much less complicated," he says. "They [just need] to look at line ends and contact issues." The gridded approach may even enhance the capability of some of the techniques, such as alternating phase-shift masking.
"All contacts and vias are on the grid, so you can use sub-resolution techniques on other grid points to reinforce the printing of contacts," Carney said. "It should help reduce the margins in design that people use to guard band their designs."
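The idea can be sketched simply: with every contact on a fixed grid, the empty neighbouring grid points become candidate sites for sub-resolution assist features. The Python toy below assumes a hypothetical grid pitch and neighbourhood; Tela's actual placement rules are not public.

```python
# Sketch of the gridded-contact idea Carney describes: with every contact
# on a fixed grid, empty neighbouring grid points are candidates for
# sub-resolution assist features (SRAFs) that reinforce printing.
# The grid pitch and neighbourhood here are illustrative assumptions.

GRID = 130  # nm -- hypothetical contact-grid pitch

def sraf_sites(contacts):
    """contacts: set of (col, row) grid indices holding real contacts.
    Returns the empty grid points orthogonally adjacent to a contact,
    where a sub-resolution assist feature could be placed."""
    sites = set()
    for (c, r) in contacts:
        for dc, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (c + dc, r + dr)
            if p not in contacts:
                sites.add(p)
    return sites

# Two adjacent contacts share no assist sites between them:
assists = sraf_sites({(0, 0), (1, 0)})
```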
Tela, which launched its products at the SPIE advanced lithography conference at the beginning of the year, has a handful of customers, including Qualcomm and a Japanese company. Its methodology has already been applied on 45nm chips and on a full place and route of the ARM 926 core, done using a standard place and route tool. However, the move to regular, gridded design may bring changes in upstream tools.
"Once you have a predictable base environment things may change in the EDA industry and we may see things that are outside the way things are done now," Becker predicts. "There has to be a better paradigm and a [layout] generator-like architecture may be natural."
Tela's approach offers some hope that the useful working life of optical lithography systems can be extended to cover future process generations that many have thought of as demanding entirely new illumination sources, masks and stepper architectures. In an industry that hates to change more than one variable in its manufacturing processes at a time, this is good news.
But for many chip designs, these issues are moot and will remain so for a while. The majority of chip designs are still being done on 130nm processes, three generations behind the leading edge. And even those being done on more advanced processes may not face as many problems as some predict.
Walter Ng, vice president of design enablement alliances for foundry company Chartered Semiconductor, says: "With all due respect to my EDA colleagues, I hope people don't get the wrong impression. Manufacturing at the leading edge is doing very well. We have introduced new technologies at breakneck speed and that may have contributed to some of the issues.
"There has been a lot of hype in the industry saying that at 90nm and 65nm we will need DFM techniques," he adds. "With the majority of designs at 90nm and 65nm there are areas of design which need a little more attention than others, but at this time 65nm manufacturing is quite well controlled and such outliers are rare."