Powered down

Low-power design involves so many design tweaks, it needs a verification approach all to itself.

Things have got so bad that companies supplying services to the chip designers now report a willingness, on their clients' part, to 'accept' the risk of a respin - the multi-million pound nightmare in which a project gets all the way to manufacture and fails. Tom Quan, a senior manager with the world's largest silicon foundry, TSMC, observed as much on the eve of this year's Design Automation Conference (DAC). Senior staff with Synopsys, one of the big EDA vendors, said the same during it. And Wally Rhines, CEO of another EDA titan, Mentor Graphics, even raised the spectre of 'endless verification' in a keynote earlier this year.

"If you asked people two years ago what kind of techniques they were going to use for low power, they'd come up with a kind of 'wish-list'. If you
ask them now, they've got a 'need-list', because the chip's coming out next year and they just need to get it done," says Phil Dworsky, director of strategic alliances at Synopsys.

So what happened so quickly? One illustration goes back beyond 2006, to the first chip designs for 3G handsets. In most cases, the various blocks within the system operated at the same voltage. Or, if there were different voltages, they applied to different pieces of silicon at different spots on a printed circuit board - still a delicate power management task, but one that pales beside the current state of play.

A 3G chip designed in 2008 is likely to have multiple blocks running at multiple voltages on the same chip - indeed there are projects out there with more than 30 so-called 'power islands' on them, all on one chip and quite possibly buried under nine or so layers of metal.

The verification task must address all the different voltage combinations, the potential for interference between functional blocks, the communication between them, and the complex sequences that apply as, say, one block is powered up and another down. It must also cover level shifters, isolation cells and power switches - three newer elements within the design that contribute to its overall power and logic management.
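To make one of those elements concrete, consider the sort of check a verification engineer might write for an isolation cell. The SystemVerilog sketch below is illustrative only - the signal names (pwr_on, iso_en, blk_out) are invented for the example, not drawn from any real design or from VMM-LP itself:

// Hypothetical signals: pwr_on  - power switch control for the block
//                       iso_en  - isolation enable for the block's outputs
//                       blk_out - an output crossing into an always-on domain
module iso_check (input logic clk, pwr_on, iso_en, blk_out);

  // Isolation must already be enabled when the block is powered down...
  property iso_before_pwr_dn;
    @(posedge clk) $fell(pwr_on) |-> iso_en;
  endproperty

  // ...and while the block is off, the isolated output must sit at its
  // clamp value (0 in this sketch) rather than drifting to X.
  property clamp_while_off;
    @(posedge clk) (!pwr_on && iso_en) |-> (blk_out == 1'b0);
  endproperty

  assert property (iso_before_pwr_dn);
  assert property (clamp_while_off);
endmodule

Neither check exists in a conventional functional testbench; both only matter once the block can be switched off.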

The chances are that the design started out as a 'proof of concept' project in which the low-power requirements sat lower on the agenda than bundling in various types of functionality. In short, much of the stuff that is really hard to verify may have arrived through retrofitting.

"It is a complex, nebulous mess of hardware and software that just needs to work together," says Srikanth Jadcherla, group director of R&D in the verification group at Synopsys. "Multiple power states, transitions and sequences that were not relevant in the past have suddenly become very relevant indeed. A simple way of looking at it is to say that today there are so many more ways for you to shoot yourself in the foot."

Taking a view

The need to step back and take a methodological approach - certainly, some form of high-level view - to controlling this type of project is self-evident. The problem is where to start.

Jadcherla and his colleague Krishna Balachandran, director of product marketing at Synopsys, break down the low-power design community into three main groups.

"There are those who are facing these issues for the first time and they're very nervous about using power management because they think that it's going to stop the chip working - and they've no idea how to go about verifying the design," says Balachandran.

"The next part of the spectrum is those companies that may have done a little low power work but know they need something. But the problem is they still believe that static checks will solve the problem. They're not really thinking about all the dynamic effects. What, for example, about the voltage changes during the design. Of course, the other thing that's amazing is that some of these people actually plan on a respin or two.

"Then there are the savvy customers. They want to know all the detail about every switch in the tool so that they can use it and they can maximise it, and really protect themselves."

The gamble Synopsys is taking is that there are enough 'savvy' players already out there for its next play in the low-power space: specifically, a dedicated version of the Verification Methodology Manual (VMM) strategy that has served the company well in promoting its flavour of SystemVerilog.

One good sign is that the collaborators on the forthcoming VMM-LP book are blue-chip. Jadcherla, who joined Synopsys following its acquisition of ArchPro Design Automation, is joined by ARM fellow David Flynn; Yoshio Inoue, chief engineer in the design technology division of Renesas Technology; and Janick Bergeron, Synopsys fellow and moderator of the Verification Guild website. In addition, by the time the book and methodology are made public this autumn, its supporters will be able to point to one more company, Cypress Semiconductor, that has taped out a project using the kinds of strategy outlined in VMM-LP.

VMM-LP makes extensive use of the existing base class libraries for SystemVerilog, along with new ones dedicated to low power. By encapsulating tasks and functions, these aid reuse and hold their value as designs scale in complexity. The classes will continue to be distributed under the Apache 2.0 open-source licence. That is the pedigree.
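To give a flavour of what that encapsulation means in practice - and this is a generic SystemVerilog sketch with invented names, not the VMM-LP classes themselves, which remain unpublished until the autumn - a power island's control sequence can be wrapped in a class so that every testbench reuses one tested routine:

// Assumed interface bundling one island's power controls.
interface power_if (input logic clk);
  logic iso_en, save, pwr_on;
endinterface

// Generic flavour of the idea: encapsulate the shutdown sequence in a
// class rather than wiggling signals ad hoc in every testbench.
class power_island_ctrl;
  virtual power_if vif;

  function new(virtual power_if vif);
    this.vif = vif;
  endfunction

  // Power the island down using the agreed protocol.
  task power_down();
    vif.iso_en <= 1'b1;              // clamp outputs first
    repeat (2) @(posedge vif.clk);
    vif.save   <= 1'b1;              // store retention state
    repeat (2) @(posedge vif.clk);
    vif.pwr_on <= 1'b0;              // then cut power
  endtask
endclass

Change the shutdown protocol and only power_down() needs editing; the testbenches built on it carry over.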

The proof, though, will lie in how well the authors and the other companies backing this strategy have defined the verification plan. Given the jumble of tasks such a plan must apparently encompass, it seems fair to ask Jadcherla whether one has moved beyond simple verification to a position of overall design management.

"It's more a question of architectural management," he says. "Consider that what you are trying to pull together are the best-practice rules and guidelines that run across writing your test plan, getting the right coverage, using the right assertions, and then actually having metrics that measure the verification. You are also looking at migrating and scaling testbenches. And a lot of this comes down to the modelling of the architecture.

"But there is another way of looking at this. Yes, there are more ways for you to hurt yourself during the design, but at the same time, the type of bug we now encounter is pretty constant. That's not an absolute and every design will always have its own particular mountain to climb. But in a general sense, we haven't seen anything new for a while. And so, pretty much all the failures are control failures.

"But the big thing is that the advent of retention architectures has thrown most native and internal solutions into a complete mess - they just don't work any more. Everything changes every time there's a tweak to the library, you're rewriting huge simulators again and again."

In response, Synopsys and its partners are keeping VMM-LP's pitch as simple as the problem it seeks to solve is complex: "What should you look for, how do you cover it, what are the rules and guidelines, and here is how you track your progress," says Balachandran.
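Balachandran's 'how do you cover it' - and Jadcherla's earlier insistence on metrics - can likewise be sketched in SystemVerilog. The hypothetical coverage model below asks two blunt questions of a regression run: has every power state been visited, and has every transition between states been attempted? The enumeration and names are invented for illustration:

// Hypothetical power states for one island.
typedef enum {ON, RETENTION, OFF} pwr_state_e;

class pwr_coverage;
  pwr_state_e state, prev_state;

  covergroup pwr_cg;
    coverpoint state;           // every state visited?
    cross prev_state, state;    // every transition attempted?
  endgroup

  function new();
    pwr_cg = new();
  endfunction

  // Call once per observed state change; tools report the holes.
  function void sample(pwr_state_e s);
    prev_state = state;
    state      = s;
    pwr_cg.sample();
  endfunction
endclass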

As the book undergoes its finishing touches, the approach to users is similarly direct. "We've had to write this to go across all those levels of expertise we talked about earlier. No, the engineer straight out of college is not the target, but anyone actively engaged in low-power design should be able to take the book and get something of real value, at least an excellent starting point," says Jadcherla.

"One thing that does feel good, that makes us feel this is needed, is that whenever we do meet with the 'savvy' guys, they immediately want to be on the review list."
