Lessons for IP reuse
Design with ready-made IP cores should be easier, but life is never quite that simple.
In the ten years or so that it has taken for design reuse, or design with IP, to go from niche interest to mainstream chip implementation, teams have developed an understanding of what it can do for them and how to budget accordingly. However, there are still surprises to be negotiated, as some aspects of design reuse are counter-intuitive, especially when it comes to reusing a piece of IP developed in-house for another chip. Designers found this out at the end of last year as they converged on the IP-ESC event in Grenoble, France, the main conference for IP-based design in Europe.
For starters, there is a design-time cost just in integrating the IP, which Andrea Fortunato, director of professional services at design-analytics specialist Numetrics, estimates at a minimum of 10 per cent.
'The effort involved is often badly underestimated,' Fortunato warns. 'The percentage of effort saved on a block that is 50 per cent reused is almost zero. We see a sharp decline in the effort as we reuse a higher percentage of the block, and it hits the range of maximum benefit at 80 per cent block reuse.'
Fortunato adds: 'IP reuse does help. We see a strong statistical correlation between the amount of reuse and decreasing cycle time and fewer respins. But IP-leverage miscalculations are common.'
The figures behind Numetrics' cost estimates come from a database of information on real-world projects that the company has built up. 'A model has to be based on facts and data and there is no better data than from your historical projects,' says Fortunato.
Numetrics' statistics have shown the benefits of design reuse are highly sensitive to the amount of editing that a downstream team does on a particular core - whether that rework is necessary or not.
One of the warnings from the early days of IP-based implementation was that redesign should be avoided if possible: treat the cores as black boxes and do any customisation external to them. It sounded like good advice then, so you have to wonder why so many design teams spend their time cracking open the black box to see how it works.
Very often, the reason is to get the IP working in a larger chip design. Although a core may pass a variety of tests, it can fail when used in a system because the blocks around it make requests that were unanticipated, or that are incompatible due to a different reading of the standard to which the core was designed. It is partly an issue of quality and partly one of core designers being unable to take every possible use-case into account. The level of documentation and support expected from third-party suppliers is not always present when blocks from other teams within a company are packaged for reuse.
Francois Remond of STMicroelectronics says: 'Debugging IP at the SoC level is very costly. The SoC team does not have a deep understanding of the functionality and the potential problems. If you integrate a piece of IP that doesn't work, you are trapped. Imagine a video circuit with a DDR interface that is not working. You can't continue.'
A further issue is performance tuning. To improve power consumption or speed, integrators like to modify the bought-in core, taking out logic paths that slow the part down or draw too much power. This demands a more intimate understanding of the core and raises the risk of changes that are incompatible with otherwise untouched parts of its logic.
In 1999, other than ARM and a few other reasonably experienced suppliers, it was hard to tell who had good-quality cores and who had a bundle of bugs held together by a hardware-description language (HDL) wrapper.
Phil Dworsky, director of strategic alliances at Synopsys, says: 'I can't tell you the number of people I met who thought because they had designed a chip they could be an IP vendor. They quickly found out it was not that simple.'
Today, it's much easier to tell who has the quality cores: there are the people who are still in business and those who aren't.
The difference between the two is frequently down to whether their cores worked or not when inserted into a chip design. The story of the set-top box chip that failed because of one of the IP cores inside it is now a legend in the industry - the supplier pretty soon got out of the IP business.
'The IP vendors who are still in business are all about the rigorousness of the process,' says Kathryn Kranen, president of verification tools supplier Jasper Design Automation.
Even with the attention to detail that the surviving IP vendors need, there is still a quality issue.
'We need to move to the next level, which is integration,' says Kunkel.
Quality is not so much about whether the core works but whether it works in a target system. A core might work to the letter of the spec but fail within the context of a full chip where it's not possible to implement the spec as written.
'One of the things we are hearing from customers is that the typical thinking used to be: "I am going to verify to the legal input spec,"' says Kranen. But that's not enough. 'Even for internal block development there is great value in verifying with very much looser constraints - so that you can harden your block against changes in logic around it or other vendors' IP.'
Olivier Haller, manager of the design verification team at ST, says the company is introducing concepts such as functional qualification, where tools run tests based on protocols against each piece of IP to find out how well it copes.
Kranen's suggestion, which is not unusual for a supplier of formal verification, is to use assertions and formal techniques to check whether a bad transaction can confuse an IP block, or whether it will sail through with nothing more than an error signal.
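The reachability question behind Kranen's suggestion can be sketched in miniature. The toy C++ below (all names hypothetical; a real flow would use a formal tool rather than hand-written search) models a bus-interface block as a tiny state machine with a deliberately planted bug, then exhaustively explores every input sequence up to a given depth to ask: can a malformed request wedge the block, or does it merely raise an error?

```cpp
#include <cassert>
#include <queue>
#include <set>
#include <utility>

// Hypothetical 3-state model of a bus-interface block. A robust block should
// absorb a malformed request by signalling an error, never by deadlocking.
enum State { IDLE, BUSY, STUCK };
enum Input { LEGAL_REQ, BAD_REQ, DONE };

State next(State s, Input in) {
    switch (s) {
        case IDLE: return in == LEGAL_REQ ? BUSY : IDLE;   // bad request in IDLE: error flag, stay put
        case BUSY: return in == DONE ? IDLE
                        : (in == BAD_REQ ? STUCK : BUSY);  // planted bug: bad request mid-transfer wedges the block
        default:   return STUCK;                           // no way out once wedged
    }
}

// Exhaustive breadth-first search over every input sequence up to 'depth' -
// a toy stand-in for the reachability question a formal tool answers.
bool stuck_reachable(int depth) {
    std::queue<std::pair<State, int>> q;
    q.push({IDLE, 0});
    std::set<State> seen{IDLE};
    while (!q.empty()) {
        auto [s, d] = q.front();
        q.pop();
        if (s == STUCK) return true;
        if (d == depth) continue;
        for (Input in : {LEGAL_REQ, BAD_REQ, DONE}) {
            State n = next(s, in);
            if (!seen.count(n)) { seen.insert(n); q.push({n, d + 1}); }
        }
    }
    return false;
}
```

Here `stuck_reachable(1)` is false but `stuck_reachable(2)` is true, along with the two-step trace that proves it (a legal request followed by a bad one) - exactly the kind of answer an integrator needs before cracking open the black box.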
According to Kranen, formal verification has even been used as a support tool. She cites a situation where a customer asked ARM about an apparently aberrant logic trace they found when simulating the core inside their proposed system. ARM support asked for the sequence of events that led up to the odd output trace but the customer wanted to keep that confidential - ARM may call its customers 'partners' but the trust that the word implies isn't often there. So, the ARM engineers used the Jasper tool to work out if the core could get into that state and what it would take to get there.
'It's a tool that provides answers to specific reuse questions,' said Kranen.
Haller would like to go further, embedding knowledge about how the core should be used into the hardware description, 'to ensure the automatic detection of bad integration', he says.
In principle, this would involve the use of assertions - checking code embedded in the hardware description to catch errors and describe what happened to trigger them. 'We expect knowledge that will help you to integrate. The more assertions there are in the design, the closer we are to the bug,' says Haller.
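Haller's idea of knowledge embedded in the deliverable can be illustrated with a plain C++ sketch (a real flow would put SystemVerilog or PSL assertions inside the RTL itself; the class and rule names here are invented for illustration). Each embedded check names the integration rule it guards, so a misuse is reported at the point it happens rather than as corrupted state much later.

```cpp
#include <cassert>
#include <deque>
#include <stdexcept>
#include <string>

// Hypothetical FIFO model with integration checks embedded in the model
// itself, in the spirit of assertions shipped inside an IP's HDL.
class CheckedFifo {
    std::deque<int> q;
    const size_t depth;
    // An embedded assertion: names the violated rule so the failure points
    // at the misuse, bringing the report closer to the bug.
    static void expects(bool ok, const std::string& rule) {
        if (!ok) throw std::runtime_error("integration check failed: " + rule);
    }
public:
    explicit CheckedFifo(size_t d) : depth(d) {}
    void push(int v) {
        expects(q.size() < depth, "push while full (check upstream flow control)");
        q.push_back(v);
    }
    int pop() {
        expects(!q.empty(), "pop while empty (check downstream handshake)");
        int v = q.front();
        q.pop_front();
        return v;
    }
};
```

An integrator who pushes into a full FIFO gets an immediate, named complaint about flow control instead of silently dropped data.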
In the early days, IP vendors used to claim their IP was good because it was 'silicon proven' - that it had made it out of the other end of a fab and worked in a test system. But Haller says this is not enough.
'Silicon proven does not mean anything. How you tune the core in one design is very different from another usage. We should work together using qualification and verification techniques. We can't afford to put IPs into silicon to prove them. We want to qualify IP without running to silicon.'
Although IP has become a critical component of an increasingly automated design flow, it is still a long way from being a plug-in-and-go black box.
To try to deal with the problems raised by intra-company reuse, STMicroelectronics has created its own dedicated IP suppliers. 'We separated the SoC team from the IP team,' says Remond, primarily to make the IP more robust. A danger with reusing blocks that were never designed for the purpose is that shortcuts taken on the original project do not show up until too late on subsequent designs.
'We recently had the experience of transforming a mature RTL block [developed internally] into IP. It has a high cost,' says Remond. 'It is better to start with reuse in mind.'
IP-based design has come a long way for ST. A recent design was a set-top box chip designed for a 55nm process. The 209-million transistor chip contains more than 50 IP cores that were supplied as RTL code with a further 160 delivered as hard IP - that is, ready-made layouts for the target process - and close to 500 memory blocks.
In the generations that followed the 2002 design of a less complex chip for a 130nm process, productivity at ST on these projects increased four-fold, Remond claimed. 'At the same time, we have reduced the number of bugs inside the IP by a factor of two.'
One advantage of IP-centric design is that you can prototype early, often without any of the target hardware being available. This is where system modelling, often using SystemC to describe the system, has come into its own.
Big players such as Intel and Synopsys have been busily snapping up model providers with the aim of building prototyping environments that are available off-the-shelf on the basis that many SoCs have similar elements, such as ARM processors and protocol-handling cores for USB, Ethernet, Wi-Fi and so on.
Volkan Esan of Infineon Technologies described at the IP-ESC event how the German chipmaker is using SystemC in its modelling efforts and where its design teams plan to go.
The introduction almost two years ago of version 2 of the Transaction Level Modelling (TLM) standard, which defines a common way for models of IP cores to talk to each other, has greatly eased integration, Esan says.
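The core idea of TLM-2.0 can be shown in a loose plain-C++ sketch (this is not the real SystemC API - `tlm::tlm_generic_payload` and `b_transport` are the genuine names the sketch imitates): every model exchanges one generic transaction type through one transport call, so an initiator from one vendor can drive a target from another.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified stand-in for TLM-2.0's generic payload: one transaction
// structure shared by every model, whoever wrote it.
struct GenericPayload {
    enum Command { READ, WRITE } command;
    uint64_t address;
    std::vector<uint8_t> data;
    bool response_ok = false;
};

// Simplified stand-in for the blocking-transport interface.
struct Target {
    virtual void b_transport(GenericPayload& trans) = 0;
    virtual ~Target() = default;
};

// A trivial memory target: any initiator that speaks GenericPayload works,
// with no per-vendor adapter logic.
class Memory : public Target {
    std::vector<uint8_t> mem;
public:
    explicit Memory(size_t size) : mem(size, 0) {}
    void b_transport(GenericPayload& t) override {
        if (t.address + t.data.size() > mem.size()) { t.response_ok = false; return; }
        if (t.command == GenericPayload::WRITE)
            std::memcpy(&mem[t.address], t.data.data(), t.data.size());
        else
            std::memcpy(t.data.data(), &mem[t.address], t.data.size());
        t.response_ok = true;
    }
};
```

A write followed by a read through the same call returns the data written; before TLM-2, each vendor's models carried their own incompatible transaction types and required hand-written adapters.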
However, debug remains a problem as it demands that engineers know the internal structure of each model. When these models come from third parties, that is unlikely to be the case.
The upcoming Configuration Control and Inspection (CCI) standard may make life easier. Markus Willems, a member of the Open SystemC Initiative board, says: 'The motivation for CCI is to enable an ecosystem between users, tool suppliers and model developers. TLM specified how models talk to each other. The CCI working group is coming from a different angle: how models from different sources work in different tool environments. For example, how you display and control model parameters and trace signals from the models as they execute.
'It will let the users interact more easily with the models. It is an ambitious goal but necessary for the widespread use of SystemC.'
The first step, says Willems, is to standardise a programming interface between the tools and the models and then to move on to methods to access registers and model state without delving deep inside the code with a source-level debugger. Future work may make it possible to collect power and performance statistics.
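What such a standardised interface buys can be sketched in plain C++ (this is not the real CCI API, and the parameter name below is invented): models register named parameters with a broker, and a tool can then list, read, and override them without knowing anything about the models' internals.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative broker in the spirit of CCI (not the actual API): the tool
// side sees only names and values, never model internals.
class ParamBroker {
    std::map<std::string, std::string> params;
public:
    // Called by a model at construction to expose a tunable parameter.
    void publish(const std::string& name, const std::string& value) {
        params[name] = value;
    }
    // Tool-side read access by name.
    std::string get(const std::string& name) const {
        auto it = params.find(name);
        return it == params.end() ? "" : it->second;
    }
    // Tool-side override; throws on an unknown name rather than silently
    // creating a parameter no model will ever read.
    void set(const std::string& name, const std::string& value) {
        params.at(name) = value;
    }
    // Lets the tool display every parameter in the prototype.
    std::vector<std::string> list() const {
        std::vector<std::string> names;
        for (const auto& entry : params) names.push_back(entry.first);
        return names;
    }
};
```

A debugger or configuration GUI built against this one interface works with models from any supplier, which is precisely the ecosystem argument Willems makes.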
Users such as Infineon are progressing on other fronts. Esan says the key benefit of high-level modelling is the ability to run hardware and software development in parallel. But the teams would like to take modelling further. 'We need to start at a higher abstraction level and perform concept validation, where we can explore use-cases.'
'With early software development, where we already have a lot of activity in the company, we already have a definition of the hardware-software interface. We have a full memory address map. Peripherals are modelled to the extent that you can write software for them.
'So we have a virtual prototype that is very close to the product to be sold. But how do we know it is correct?' Esan asks.
Instead of delaying system verification until the real hardware description - probably written in SystemVerilog - is ready, Infineon wants to pull some of the automated-verification features found in those languages into SystemC. The company is a lead member of a German government-funded research project called Sanitas that will look at new verification techniques for SystemC.
The idea of using SystemC as a verification language is anathema to a number of engineers who specialise in hardware verification, as there are languages such as 'e' and SystemVerilog purpose-built for the job. But Esan says the main aim of Infineon's SystemC verification extensions is to check the design at the concept stage.
'SystemC verification will always be complementary to other techniques. They are not competing,' Esan claims.