
Chipmakers face dual-edged software problem

SoC design calls for more software but that code is expensive to write.

'There is a disruptive transformation happening in the electronics industry,' argued John Bruggeman, chief marketing officer for Cadence Design Systems at the company's recent CDNLive technical conference in Munich. 'And I believe it's being driven by two trends. The first is the economic rollercoaster that we have been on for the last ten years.'

Bruggeman continued: 'It's causing electronics manufacturers to think very hard about where they focus their resources. [There are] the things they used to do, but which they are no longer doing - discontinuing development they used to be responsible for. It does not mean that work is not being done; they are expecting someone else to do that work. There is a push down into the supply chain.

'The second trend can be encapsulated by a single device: the iPhone. The iPhone is transformative in and of itself. Apple was able to show that it is not the form factor, how cute it is, how well you connect to the carrier, how reliable the applications are, or how long the battery lives - because Apple is not good at any of those things.'

The iPhone is shipping millions, Bruggeman claimed, 'because Apple recognises that sustainable advantage is a rich ecosystem that can deliver 185,000 apps'.

Cadence produced a white paper, entitled 'EDA360', designed to remodel the electronic design business around these new realities. At CDNLive, Bruggeman reiterated some of its conclusions: 'We believe this application-driven model will drive a new model of delivering integrated hardware and software platforms. Semiconductor companies will be responsible for delivering those platforms.'

He paused and added: 'I said it, semiconductor companies will deliver software.'

Bruggeman is not alone in believing this. However, his prediction was pre-empted by close to ten years when Bob Krysiak, general manager of STMicroelectronics' Americas operation and then in charge of the company's microprocessor division, told delegates at the IP2000 Europe conference in Edinburgh: 'The semiconductor company is probably the only organisation able to provide the software intellectual property for very complex chips.'

Having steamrollered its way into the market for set-top box chips on the back of system-on-chip devices based around the Inmos transputer, the company was delivering up to one million lines of C code along with the chips, Krysiak claimed.

Companies such as Freescale Semiconductor, Intel, NXP Semiconductors and ST have been developing software for a number of years. Intel has tried to steer the development of multithreaded software with libraries it has developed. The others have produced everything from libraries to baseline applications.

Getting paid for software

'Software is really the key right now,' says Joël Hartmann, who runs technology development for STMicroelectronics' central R&D labs.

Problems for the semiconductor companies have emerged on two fronts: software development is increasingly expensive; and getting paid for the work is hard, if not impossible.

At the International Electronics Forum (IEF) organised by Future Horizons at the beginning of May, Freescale CEO Rich Beyer argued for the importance of software in the chipmaker's portfolio. 'For the first 40 years of semicon, all we did was hardware. Then we started delivering very, very simple drivers: tens or hundreds of lines of code.

'Today, the expectation is not only that we will provide silicon but the operating system, the layer that connects the microcontroller to other parts of the software, the drivers and, in some cases, low-level applications.'

But Beyer conceded getting money for the extra work is tough, if not impossible. 'We have not found out a way to get paid for any software. We try and keep failing. Customers say "you deliver hardware. It cost you 3 cents in sand; you should charge us 4 cents". Our customers in the main say: "If you want us to design in your chips, the software you will have to throw in".

'Some competitors have bought software companies. Customers will force them to give their software away. It is partly why the industry is going to see larger companies having more success in the market,' Beyer argued, on the basis that larger players can amortise the development cost more readily across the larger number of units that they ship.

Beyer told E&T: 'We will continue to drive software and try to monetise it. What will determine if we can monetise is if our industry in fact really starts to differentiate. This is the business. One company can't make that happen. If companies give it away, it is very difficult for one single company to make that change. But the phenomenon that I describe is faced by all of them.'

Not getting paid directly for software is one thing. But the industry faces the prospect of watching costs spiral upward unless it can start to bring order to the field of software development. The 2009 edition of the International Technology Roadmap for Semiconductors (ITRS) predicts the cost of software and system-level design rising to almost 80 per cent of a portable consumer SoC-development project by 2012. Although this share should fall back with improvements to design techniques, the ITRS does not expect it to fall below 50 per cent through 2023.

Hossein Yassaie, CEO of graphics processing design specialist Imagination Technologies, said during a panel session on challenges for the chip business at IEF: 'From what we are witnessing, there is no question that the systems being designed today are very, very complex. A lot of it is to do with software. When you put it all together, it takes ages to find a condition.'

The problem with finding software-related bugs in SoC projects is that, until the SoC returns from the fab, everything has to be run on software simulations or hardware emulators. The emulator is typically faster than a workstation-based simulation but is still orders of magnitude slower than the final, manufactured hardware.

In one recent project, Yassaie says: 'It took 21 days on an emulator to find the last remaining bug. Validation tends to take a very long time.'
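A back-of-envelope calculation shows why bug hunts of this kind stretch into weeks. The figures below are illustrative assumptions for the sake of the arithmetic, not measurements from any of the projects mentioned, but they reproduce the rough order of magnitude of Yassaie's 21-day anecdote:

```python
# Back-of-envelope: why a software bug hunt on an emulator takes weeks.
# All speeds and cycle counts are illustrative assumptions.

def debug_time_days(workload_cycles, platform_hz):
    """Wall-clock days needed to execute a given number of target cycles."""
    return workload_cycles / platform_hz / 86_400  # seconds per day

# Suppose the bug only manifests after ~2e12 target cycles - under 20
# minutes of real time on a 2GHz SoC.
workload = 2e12

silicon = debug_time_days(workload, 2e9)    # final chip at 2GHz
emulator = debug_time_days(workload, 1e6)   # emulator at ~1MHz
simulator = debug_time_days(workload, 1e3)  # RTL simulation at ~1kHz

print(f"silicon:   {silicon:.3f} days")   # fractions of a day
print(f"emulator:  {emulator:.1f} days")  # roughly three weeks
print(f"simulator: {simulator:.0f} days") # decades - not viable
```

The same workload that silicon dispatches in minutes ties up an emulator for weeks, and a cycle-accurate simulator for decades, which is why long-running, operating-system-level bugs are so expensive to chase before tape-out.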

One option, Yassaie adds, is to invest in higher-performance emulators as well as smaller machines that can be used by software developers. However, greater discipline is needed as well. 'The actual chip being designed is put together a little randomly at times. Tools that help steer engineers towards designing SoCs that make sure that the buses are sufficiently well designed at a high level would be very useful,' he argues.

Beyer cautions that the eye-watering numbers used to illustrate the rising cost of design are for the densest, most complex SoCs implemented on advanced semiconductor processes. Because of the exponential rise in design cost with each new process generation, Cadence argues in its recent 'EDA360' white paper that an increasing proportion of chipmakers will hold back from moving to the most advanced processes, choosing other ways of implementing systems.

Aart de Geus, chairman and CEO of Synopsys, draws a distinction between what he calls scaled complexity and systemic complexity. Scaled complexity refers to what the chip industry has dealt with for the past 35 years since the introduction of large-scale integration devices.

'Scaled complexity we have jointly mastered for years. But now we have systemic complexity,' says de Geus, referring to the devices that now combine many different processors and memory devices but which are now shipping in millions as consumer products. 'The single biggest new dimension is that one can't think of a chip without software. But despite all the problems of systemic complexity, the spending on it is about $500m, for a problem that costs us way more.'

Yassaie says the nature of chip design has to change to accommodate more efficient software creation. 'People design chips without paying sufficient attention to infrastructure. Each component interacts in its own way.'

The result is that individual pieces of a design, such as a USB controller or a processor, can be checked relatively easily. But things change 'the moment you put it all together', says Yassaie. 'What is making everyone's life difficult is that most chips are moving from working as embedded systems to operating-system based. If you put Android on one of these devices and then debug it, good luck to you. It is a much bigger problem than actual chip design.'

System-level design

The reason why spending on system-level design tools is a tiny fraction of the total semiconductor market - which, even after last year's crunch, is still more than $200bn - is that the design-automation companies themselves have found it tough to get paid for their system and software-oriented tools.

In the mid-1990s, hardware-design company Mentor Graphics bought Microtec Research, a provider of software tools, with the aim of cornering the nascent market for hardware-software codesign products. Cadence invested in projects such as Felix with the aim of moving into system design by the end of that decade. But a sizeable market did not materialise.

'We have proposed system-level design for 20 years now,' says Professor Alberto Sangiovanni-Vincentelli of the University of California at Berkeley and a board director of Cadence. 'Many people talked about it but it was all lip service. You have to have a push from the customers - you can tell when design is a real pain and they need it and when it's lip service. Now, people look at system-level design as a differentiating factor. Some people adopt it and the rest will follow.'

Robert Hum, general manager of Mentor's deep-submicron division, said at IEF there is now a clear drive to move towards system-level design. 'To get more productivity there is always the need for an abstraction change.'

Sangiovanni-Vincentelli says the problem is that 'we don't have clear principles to design things such that they are really plug and play.' What has brought the problem into clear focus is that the scale of complexity, which could be measured in terms of hundreds of millions or billions of underlying transistors, has grown immensely in recent years.

Hum says it takes eight years on average for new design techniques to be adopted. As a result, 'In eight years from now, we will be designing chips that have 40 billion transistors on them. And the methodology and the tools being introduced today are the ones that will be used for those projects. The seeds of the future are available today.'

Sangiovanni-Vincentelli argues: 'We have to invent an integration platform to deal with heterogeneity and scale, where we can work with different levels of abstraction.'

Hum says there are a number of fronts on which design teams can tackle the problems of complexity. For starters, he says, 'communication on our chips will be formalised. You have to separate communication mechanism from the function'.

The function of each will be separated from how it talks to other blocks, mirroring the split that was meant to happen with the shift to object-orientation in the software world. The difference in hardware is that the communications protocols will be standardised if not across an industry, at least within a project. 'To do 40-billion-transistor designs you are not going to be able to use ad hoc buses,' Hum argues.
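The split Hum describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the block's function knows nothing about wiring, and the bus is the single standardised communication mechanism every block shares:

```python
# Sketch of separating function from communication, per Hum's argument.
# The Bus/Adder names and the address map are illustrative assumptions.

class Bus:
    """A standardised, project-wide communication mechanism."""
    def __init__(self):
        self.handlers = {}

    def connect(self, address, handler):
        # Wiring happens here, and only here.
        self.handlers[address] = handler

    def write(self, address, data):
        return self.handlers[address](data)

class Adder:
    """Pure function: knows nothing about how it is connected."""
    def compute(self, operands):
        return sum(operands)

# Integration is the one place where function meets communication.
bus = Bus()
adder = Adder()
bus.connect(0x1000, adder.compute)

print(bus.write(0x1000, [3, 4]))  # 7
```

Because every block talks through the same interface, blocks can be swapped, reused or verified in isolation; an ad hoc bus per block would make that impossible at the 40-billion-transistor scale Hum cites.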

To deal with the problem of slow emulation, the test vectors that drive verification will need to be synthesised and run on the same hardware as the design itself rather than being injected from a workstation. Synopsys has started to use this approach in its prototyping environment, having bought in technology for hardware-based tests with the purchase of Synplicity in 2008.

Hum says reuse will become critical in software. Although multitasking environments complicate system design because of the potential for race conditions and deadlocks, Hum says of Android and similar operating systems that they come 'with a whole bunch of stuff that you don't have to develop for yourself. It is possible to establish a platform and then invent from there'.

At lower levels, improved software design may come from embracing restrictions - or winning 'freedom from choice', as Sangiovanni-Vincentelli puts it.

'To increase the level of abstraction, we have to give something up,' says Hum.

Hum points to the situation with logic design when it moved from designers wiring up gates visually on a schematic to the use of hardware description languages that made it possible to synthesise those gates and interconnections from a less detailed description. Schematic capture allowed the use of asynchronous logic, where a gate could switch at any time. Practical logic synthesis demanded that gates only switch once they received a trigger signal from a regular clock. Only that way could tools check the correctness of a circuit. Analogous approaches have been shown to work in software, argues Hum, using the language Esterel as one example. Esterel forces everything to be described as communicating finite state machines.

'Esterel has, for years, been delivering software automation to the military and aerospace industries,' Hum says. 'Its approach is to create more constraints as, by increasing constraints, you can move to things being automated more.'

Sangiovanni-Vincentelli agrees: 'I would place more attention on correct-by-construction techniques such as those used in automotive and aerospace projects. There you see a move towards time-driven architectures and time-triggered architectures, akin to synchronous design in the hardware world.'
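A time-triggered design of the kind Sangiovanni-Vincentelli mentions can be caricatured as a fixed schedule table: each task runs only in its pre-assigned slot of a repeating cycle, so interleavings are decided at design time rather than emerging at run time. The task names and table below are made up for illustration:

```python
# Illustrative sketch of a time-triggered schedule. Each task owns a fixed
# slot in the cycle; there is no run-time arbitration to go wrong.

SLOT_TABLE = ["read_sensor", "filter", "control", "transmit"]  # one cycle

def run_cycle(tasks, log):
    """Execute one full cycle, recording (slot, task) in order."""
    for slot, name in enumerate(SLOT_TABLE):
        log.append((slot, name))
        tasks[name]()

log = []
tasks = {name: (lambda: None) for name in SLOT_TABLE}  # stub task bodies
run_cycle(tasks, log)
print(log)
```

Since the execution order is identical on every cycle, the race conditions and deadlocks that plague asynchronous multitasking simply cannot arise - the software analogue of clocked logic in hardware.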

As the upper layers of the platform will need to be compatible with distinctly asynchronous operating systems such as Android, there are limits as to how far this migration can go in the short to medium term. Sangiovanni-Vincentelli says the lesson of methodology changes in electronics system design has been never to change them wholesale but to move them gradually. However, for low-level software used to coordinate the many functions implemented by a cellphone, smart sensors or any other of the products envisaged by chipmakers, constrained and automated design techniques seem to be the only way out of the spiral of escalating cost. Without those savings, the chipmakers are really going to have to work out how to get paid - and the market will not enjoy that.
