
Chipmakers look to machines for help in the struggle with cost


Intel’s trouble with its 10nm process is only the tip of the iceberg. The chipmaking industry has a bigger problem.

Way back in 2015, Intel promised it would deliver the first of its processors on a 10nm process. The move would boost the density of its chips by 2.7x over the current 14nm technology. Several years on, the 10nm node is still in development, with a planned launch date some time next year.

At the manufacturing level, Moore’s Law has run into severe difficulties, though the numerological games played by different manufacturers have made it look as though the trend of a doubling in chip density every two years is on track. There is every chance the industry will run out of nanometres for node names long before the actual limits of physics hit.

Samsung, for example, promises a 7nm process for next year that will be closely followed by 5nm and 4nm. However, that 7nm process has key measurements, such as the spacing between metal-interconnect lines, that are behind Intel’s projections for its 10nm node. To be fair, Samsung’s own 10nm is a little ahead of Intel’s 14nm on the same metrics. Overall in manufacturing, though, things have been slipping but have been papered over with a certain amount of lexical legerdemain.

At the Design Automation Conference (DAC) in San Francisco last week (27 June 2018), Professor David Brooks of Harvard University showed an analysis of scaling since 2008. “The numbers look OK, but the cumulative loss over the past ten years is two-and-a-half times,” he said. “That’s a big problem.”

It’s not the biggest problem that the industry faces. We will know when economic scaling has slowed down too much to be useful: the makers of smartphone processors will simply stop migrating to each new node as soon as it becomes available. Right now, that does not look likely. The bigger problem lies in the cost of design. Thanks to the rise of technologies such as machine learning, hardware design at the silicon level has become much more important. Conventional processors do not fare well with technologies such as deep learning because they are too power-hungry. Users need hardware substrates that are better tuned for these new workloads.

If the work on tuned processors is left to the likes of deep-pocketed web giants such as Amazon, Baidu, Facebook and Google, then the cost of hardware engineering is not prohibitive. However, for smaller players to get involved, things have to change. It is a problem that exercises Darpa, the US defence-research agency, a great deal.

Andreas Olofsson, programme manager in Darpa’s microsystems technology office, points to Gordon Moore’s seminal article in the April 19, 1965 edition of Electronics – the one that led to Moore’s Law. Tucked away on the third page was a section headed: “Day of reckoning”. This was the point where manufacturing could fit more transistors on a chip, but nobody could put together a design cheaply enough to make use of them.

Professor Andrew Kahng of the University of California at San Diego says: “Wafer cost almost always ends up a nickel or dime per square millimetre. But the design cost of that square millimetre is out of control.”

In his keynote at DAC, IBM vice-president for AI Dario Gil said the design cycle for a given large-scale project may last for years, which is a problem in fast-moving, hardware-enabled areas such as machine learning. These projects are manually intensive and have to go through numerous iterations of simulation on server farms to check whether the final design will work in silico. “Given that there is a renaissance going on in the world of AI, increasing automation in design is incredibly important,” Gil said.

Last year, Darpa put together several programmes under the banner of “page three”, referring to that section of Moore’s 1965 article. “The objective very simply is to create a no-human-in-the-loop 24-hour turnaround layout generator for system on chips, system in packages and printed circuit boards,” Olofsson says.

This is not likely to be easy, although Olofsson reckons an experimental “silicon compiler” could be producing workable chip designs within a couple of years. Kahng says the key problem lies in the unpredictability of design. This is why teams today have to keep iterating things. Small changes in tool settings can lead to big differences in die area or performance. He points to the 14nm finFET implementation of the Pulpino SoC, a research device based on the open-source RISC-V architecture. A frequency change of just 10MHz on a 1GHz target can lead to an area increase of 6 per cent.

Machine learning – the target of many of the designs projected by Darpa – may provide much of the breakthrough needed. Gil points to experiments in automated design as part of the SynTunSys project at IBM as one possible approach. The software would run many synthesis jobs in parallel with different parameters to try to find sweet spots automatically. Applied to one design, the technique sped up problematic circuits by 36 per cent and cut power by 7 per cent. “This was after the experts had done the best they could,” Gil claims.
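As a rough illustration of the idea rather than of IBM’s actual tooling, a parallel parameter sweep of that kind might look something like the Python sketch below. The knob names, value ranges and the run_synthesis stand-in are invented for the example; in practice the worker would launch a vendor synthesis tool and parse its reports.

```python
import itertools
import random
from concurrent.futures import ProcessPoolExecutor

# Hypothetical knobs a synthesis tool might expose; real flows have many more.
PARAMETER_GRID = {
    "effort":       ["medium", "high"],
    "max_fanout":   [16, 32, 64],
    "clock_margin": [0.05, 0.10, 0.15],
}

def run_synthesis(params):
    """Stand-in for launching one synthesis job and parsing its report.

    Returns (params, delay_ps, power_mw). The numbers here are random;
    a real version would invoke the tool and read its log files.
    """
    delay = random.uniform(800, 1200) * (0.9 if params["effort"] == "high" else 1.0)
    power = random.uniform(50, 80) * (1.0 + params["clock_margin"])
    return params, delay, power

def sweep():
    combos = [dict(zip(PARAMETER_GRID, values))
              for values in itertools.product(*PARAMETER_GRID.values())]
    # Launch every combination in parallel, then keep the run with the
    # best crude delay*power score.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_synthesis, combos))
    return min(results, key=lambda r: r[1] * r[2])

if __name__ == "__main__":
    best_params, delay, power = sweep()
    print(f"best settings: {best_params}  delay={delay:.0f}ps  power={power:.1f}mW")
```

The point of the exercise is less the scoring function than the brute parallelism: each combination is an independent job, so a server farm can explore the whole grid in the time one run takes.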

Kahng proposes the similar idea of the “multi-armed bandit”, where arrays of computers try different approaches in a random-walk manner, each time trying to get closer to the target. The key problem is killing simulations or implementation steps that get stuck. For this, a strategy modelled on blackjack seems to offer a viable approach, with the refinement of waiting for three negative signals before killing a job that looks unpromising.
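In outline, a bandit-style search with that “three strikes” kill rule could be sketched as below. The strategies and their rewards are made up purely to show the mechanics; a real system would be rewarding tool-flow recipes on how close each run gets to its timing and area targets.

```python
import math
import random

# Each "arm" stands for one implementation strategy (a tool-flow recipe).
ARM_MEANS = [0.55, 0.60, 0.72, 0.48]   # hidden quality of each invented recipe
MAX_STRIKES = 3                         # kill an arm after three poor results in a row

def pull(arm):
    """Simulate one noisy trial of a strategy; real code would run the flow."""
    return random.gauss(ARM_MEANS[arm], 0.1)

def ucb_search(rounds=200):
    counts = [0] * len(ARM_MEANS)
    totals = [0.0] * len(ARM_MEANS)
    strikes = [0] * len(ARM_MEANS)
    alive = set(range(len(ARM_MEANS)))

    for t in range(1, rounds + 1):
        # Upper-confidence-bound choice among arms that are still alive.
        def score(a):
            if counts[a] == 0:
                return float("inf")
            mean = totals[a] / counts[a]
            return mean + math.sqrt(2 * math.log(t) / counts[a])
        arm = max(alive, key=score)

        reward = pull(arm)
        counts[arm] += 1
        totals[arm] += reward

        # Only kill after repeated bad signals, never on the first bad hand.
        strikes[arm] = strikes[arm] + 1 if reward < 0.5 else 0
        if strikes[arm] >= MAX_STRIKES and len(alive) > 1:
            alive.discard(arm)

    best = max(alive, key=lambda a: totals[a] / counts[a])
    return best, totals[best] / counts[best]

if __name__ == "__main__":
    arm, mean = ucb_search()
    print(f"settled on strategy {arm} with average reward {mean:.2f}")
```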

Every time they roll the dice, the simulators can learn which parameters have the greatest effect. There are other ways to apply machine learning. Using techniques such as design of experiments to gather enough data to understand patterns of behaviour, software can build fast models for analysing circuits. A key problem is how timing changes as temperature, voltage and the transistors themselves vary. This kind of analysis is incredibly long-winded when performed laboriously for each circuit. Models that employ machine learning can speed up execution dramatically and make it far easier for automated tools to pick logic that works best in a given situation.
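A toy version of such a surrogate model gives a flavour of the approach: fit a regression from voltage, temperature and a process-skew factor to gate delay, then query it instead of rerunning the circuit analysis. The delay formula and sample ranges below are synthetic; a production flow would train on SPICE-characterised corners and many more variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "design of experiments": sample supply voltage, temperature and a
# process-skew factor, then compute a made-up delay with some noise.
n = 2000
voltage = rng.uniform(0.7, 1.1, n)        # volts
temperature = rng.uniform(-40, 125, n)    # degrees Celsius
process = rng.normal(1.0, 0.05, n)        # relative transistor-strength skew

delay_ps = (100.0 * process / voltage
            + 0.05 * temperature
            + rng.normal(0, 1.0, n))      # toy delay model, picoseconds

X = np.column_stack([voltage, temperature, process])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, delay_ps)

# Once trained, the surrogate answers "what is the delay at this corner?"
# in microseconds of CPU time rather than a full circuit simulation.
corner = np.array([[0.72, 110.0, 0.95]])  # slow corner: low voltage, hot, weak devices
print(f"predicted delay at slow corner: {model.predict(corner)[0]:.1f} ps")
```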

Beyond the technological challenges of building the silicon compiler, there are social factors. Kahng said the sharing of data would be critical in making automated design successful, which may prove to be a stumbling block. The electronics design industry is not known for its willingness to share. Companies have been slow to adopt cloud computing because of fear over inadvertently disclosing intellectual property. They may have to start giving things away in order to stay in the business.
