Block-based chip design

Lessons for IP reuse

Design with ready-made IP cores should be easier, but life is never quite that simple.

In the ten years or so that it has taken for design reuse, or design with IP, to go from a niche interest to mainstream chip implementation, teams have developed an understanding of what it can do for them and how to budget accordingly. However, there are still surprises to be negotiated, as some aspects of design reuse are counter-intuitive, especially when it comes to reusing in another chip a piece of IP developed in-house. Designers found this out at the end of last year as they converged on the IP-ESC event in Grenoble, France - the main conference for IP-based design in Europe.

For starters, there is a design-time cost just in integrating the IP, which Andrea Fortunato, director of professional services at design-analytics specialist Numetrics, estimates at a minimum of 10 per cent.

'There is a highly underestimated effort,' Fortunato warns. 'The percentage of effort saved from a block that is 50 per cent reused is almost zero. We see a sharp decline in the effort as we reuse a higher percentage of the block, and it hits the range of maximum benefit at 80 per cent block reuse.'

Fortunato adds: 'IP reuse does help. We see a strong statistical correlation between the amount of reuse and decreasing cycle time and fewer respins. But IP-leverage miscalculations are common.'

The figures behind Numetrics' cost estimates come from a database of real-world project data the company has built up. 'A model has to be based on facts and data, and there is no better data than from your historical projects,' says Fortunato.

Numetrics' statistics show that the benefits of design reuse are highly sensitive to the amount of editing a downstream team does on a particular core - whether that rework is necessary or not.

One of the warnings from the early days of IP-based implementation was that redesign should be avoided if possible: treat the cores as black boxes and do any customisation external to them. It sounded like good advice then, so you have to wonder why so many design teams spend their time cracking open the black box to see how it works.

Incompatible usage

Very often, the reason is to get the IP working in a larger chip design. Although a core may pass a variety of tests, it can fail when used in a system because the blocks that use it make requests that were unanticipated, or that are incompatible because of a different reading of the standard to which the core was designed. It is partly an issue of quality, and partly of core designers' inability to take every possible use-case into account. The level of documentation and support expected from third-party suppliers is not always present when blocks from other teams within a company are packaged for reuse.

Francois Remond of STMicroelectronics says: 'Debugging IP at the SoC level is very costly. The SoC team does not have a deep understanding of the functionality and the potential problems. If you integrate a piece of IP that doesn't work, you are trapped. Imagine a video circuit with a DDR interface that is not working. You can't continue.'

A further issue is performance tuning. To improve power consumption or speed, integrators like to alter the bought-in core to make it more responsive, taking out logic paths that slow the part down or draw too much power. This calls for a deeper understanding of the core, and raises the risk of making changes that are incompatible with otherwise untouched parts of the core's logic.

In 1999, other than ARM and a few other reasonably experienced suppliers, it was hard to tell who had good-quality cores and who had a bundle of bugs held together by a hardware-description language (HDL) wrapper.

Phil Dworsky, director of strategic alliances at Synopsys, says: 'I can't tell you the number of people I met who thought because they had designed a chip they could be an IP vendor. They quickly found out it was not that simple.'

Today, it's much easier to tell who has the quality cores: there are the people who are still in business and those who aren't.

The difference between the two is frequently down to whether their cores worked or not when inserted into a chip design. The story of the set-top box chip that failed because of one of the IP cores inside it is now a legend in the industry - the supplier pretty soon got out of the IP business.

Integration quality

'The IP vendors who are still in business are all about the rigorousness of the process,' says Kathryn Kranen, president of verification tools supplier Jasper Design Automation.

Even with the attention to detail that the surviving IP vendors need, there is still a quality issue.

'We need to move to the next level, which is integration,' says Kunkel.

Quality is not so much about whether the core works but whether it works in a target system. A core might work to the letter of the spec but fail within the context of a full chip where it's not possible to implement the spec as written.

'One of the things we are hearing from customers is that the typical thinking used to be "I am going to verify to the legal input spec",' says Kranen. But that's not enough. 'Even for internal block development there is great value in verifying with much looser constraints - so that you can harden your block against changes in the logic around it or in other vendors' IP.'
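Kranen's point can be sketched in software terms. The toy Python model below (every name in it is invented for illustration; a real flow would do this on the HDL with constrained-random or formal tools) drives a deliberately buggy FIFO block first with stimulus constrained to the legal input spec, then with the constraints loosened - and only the loosened run exposes the bug:

```python
import random

class FifoModel:
    """Toy behavioural model of a 4-deep FIFO block (hypothetical example)."""
    DEPTH = 4

    def __init__(self):
        self.data = []

    def push(self, value):
        # Bug: silently overwrites the newest entry when full, instead of
        # rejecting the write or raising an error flag.
        if len(self.data) >= self.DEPTH:
            self.data[-1] = value
        else:
            self.data.append(value)

    def pop(self):
        return self.data.pop(0) if self.data else None

def drive(fifo, legal_only, n=1000, seed=0):
    """Return True if every value pushed comes back out in order."""
    rng = random.Random(seed)
    expected, ok = [], True
    for _ in range(n):
        if rng.random() < 0.5:
            if legal_only and len(fifo.data) >= FifoModel.DEPTH:
                continue  # constrained stimulus: never write when full
            v = rng.randrange(256)
            fifo.push(v)
            if len(expected) < FifoModel.DEPTH:
                expected.append(v)  # reference model: reject writes when full
        elif expected:
            ok &= (fifo.pop() == expected.pop(0))
    return ok

print("legal-only stimulus:", drive(FifoModel(), legal_only=True))   # True
print("loosened stimulus  :", drive(FifoModel(), legal_only=False))  # False
```

The block passes its 'legal' verification because the constrained stimulus never writes while the FIFO is full; the loosened run does, and catches the silent overwrite that a neighbouring block with a different reading of the spec could trigger in a real chip.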

Olivier Haller, manager of the design verification team at ST, says the company is introducing concepts such as functional qualification, where tools run tests based on protocols against each piece of IP to find out how well it copes.
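As a rough illustration of what such qualification amounts to (the scenarios and the LinkModel below are hypothetical; commercial tools work on real HDL and real protocols), a harness can replay well-formed and malformed protocol traffic against an IP model and record whether each case is handled cleanly, flagged as an error, or mishandled outright:

```python
class LinkModel:
    """Toy packet-interface model standing in for the IP under test."""
    def receive(self, packet):
        kind, length, payload = packet
        if kind not in ("DATA", "CTRL"):
            raise RuntimeError("unhandled packet kind")  # ungraceful failure
        if length != len(payload):
            return "error"  # graceful: flags the bad length and carries on
        return "ok"

SCENARIOS = {
    "well-formed data":    ("DATA", 3, [1, 2, 3]),
    "truncated payload":   ("DATA", 3, [1, 2]),
    "unknown packet kind": ("MGMT", 1, [0]),
}

def qualify(ip, scenarios):
    """Run each protocol scenario and report how the IP copes."""
    for name, packet in scenarios.items():
        try:
            verdict = ip.receive(packet)
        except Exception as exc:
            verdict = f"CRASH ({exc})"
        print(f"{name:22s} -> {verdict}")

qualify(LinkModel(), SCENARIOS)
# well-formed data       -> ok
# truncated payload      -> error
# unknown packet kind    -> CRASH (unhandled packet kind)
```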

Formal verification

Kranen's suggestion - not unusual from a supplier of formal-verification tools - is to use assertions and formal techniques to check whether a bad transaction can confuse an IP block, or whether it will sail through with nothing more than an error signal.
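A toy version of that check, written in Python for illustration (real formal tools prove this over the RTL itself; the state machine here is invented), exhaustively explores a block's reachable states to see whether any input sequence - legal or not - can drive it into a state from which it can never recover, and reports the offending sequence:

```python
from collections import deque

# Toy handshake FSM (hypothetical): a malformed 'abort' while BUSY
# drops the block into a state it can never leave.
INPUTS = ["req", "done", "abort"]

def step(state, inp):
    table = {
        ("IDLE", "req"): "BUSY",
        ("BUSY", "done"): "IDLE",
        ("BUSY", "abort"): "DEAD",   # the lock-up bug
        ("IDLE", "abort"): "ERROR",  # graceful: flag the error...
        ("ERROR", "req"): "IDLE",    # ...and allow recovery
    }
    return table.get((state, inp), state)  # undefined pairs: hold state

def is_trap(state):
    """A state is a trap if IDLE is unreachable from it under any inputs."""
    seen, queue = {state}, deque([state])
    while queue:
        s = queue.popleft()
        if s == "IDLE":
            return False
        for inp in INPUTS:
            nxt = step(s, inp)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

def find_trap(start="IDLE"):
    """Breadth-first search over all input sequences from reset;
    return a counterexample trace to any trap state, or None."""
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, trace = queue.popleft()
        if is_trap(state):
            return state, trace
        for inp in INPUTS:
            nxt = step(state, inp)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [inp]))
    return None

print(find_trap())  # -> ('DEAD', ['req', 'abort'])
```

Here the search finds that the sequence req followed by a mid-transaction abort locks the block up, while a stray abort in IDLE is flagged and recovered from - exactly the distinction Kranen describes between a confused block and a clean error signal.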

According to Kranen, formal verification has even been used as a support tool. She cites a situation where a customer asked ARM about an apparently aberrant logic trace it found when simulating the core inside its proposed system. ARM support asked for the sequence of events that led up to the odd output trace, but the customer wanted to keep that confidential - ARM may call its customers 'partners', but the trust the word implies isn't always there. So the ARM engineers used the Jasper tool to work out whether the core could get into that state and what it would take to get there.

'It's a tool that provides answers to specific reuse questions,' says Kranen.

Haller would like to go further, embedding knowledge about how the core should be used into the hardware description, 'to ensure the automatic detection of bad integration', he says.

In principle, this would involve the use of assertions - checking code embedded in the hardware description to catch errors and describe what happened to trigger them. 'We expect knowledge that will help you to integrate. The more assertions there are in the design, the closer we are to the bug,' says Haller.
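In an HDL flow this means SystemVerilog or PSL assertions woven into the RTL. As a rough Python analogue (every rule and name below is invented for illustration), the same idea is a bus-port model with its usage rules built in, so that a bad integration fails at the offending transaction with the violated rule named:

```python
class IntegrationError(AssertionError):
    pass

class BusPortModel:
    """Toy bus-slave model (hypothetical) with its usage rules built in.
    Each check names the violated rule, so a bad integration is caught
    at the offending transaction rather than as a mystery downstream."""

    MAX_BURST = 16

    def __init__(self):
        self.outstanding = 0

    def check(self, condition, rule, detail):
        if not condition:
            raise IntegrationError(f"{rule}: {detail}")

    def request(self, addr, burst_len):
        self.check(addr % 4 == 0, "ADDR_ALIGN",
                   f"address {addr:#x} is not word-aligned")
        self.check(1 <= burst_len <= self.MAX_BURST, "BURST_LEN",
                   f"burst length {burst_len} outside 1..{self.MAX_BURST}")
        self.check(self.outstanding == 0, "ONE_OUTSTANDING",
                   "new request issued before previous one completed")
        self.outstanding += 1

    def complete(self):
        self.check(self.outstanding > 0, "SPURIOUS_DONE",
                   "completion with no request outstanding")
        self.outstanding -= 1

port = BusPortModel()
port.request(addr=0x1000, burst_len=4)   # legal transaction
port.complete()
try:
    port.request(addr=0x1002, burst_len=4)   # misaligned address
except IntegrationError as err:
    print("bad integration detected ->", err)
# bad integration detected -> ADDR_ALIGN: address 0x1002 is not word-aligned
```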

In the early days, IP vendors used to claim their IP was good because it was 'silicon proven' - that it had made it out of the other end of a fab and worked in a test system. But Haller says this is not enough.

'Silicon proven does not mean anything. How you will tune the core in one design is very different from the other usage. We should work well together by using qualification and verification techniques. We can't afford to have the IPs in silicon to prove them. We want to qualify IP without running to silicon.'

Although IP has become a critical component of an increasingly automated design flow, it is still a long way from being a plug-in-and-go black box.
