Hardware's soft answer

Why wait for hardware to get embedded software development underway?

The problem with embedded software development is that, traditionally, you need the hardware the code will run on before you can get the job done. Unfortunately, by the time the hardware arrives, equipment builders are already running out of time to finish the software before the units have to ship: some electronic products now stay on the shelves for less than 12 months, which does not sit well with a development lifecycle that can take years.

There are ways to bring development time down. Toshiba took months off its chip-delivery schedule by writing and testing software in parallel with hardware development. Previously, the company would spend more than three months ironing out bugs in the software after the silicon was delivered. By doing much more in parallel, that time was cut to around 40 days.

But how do you run software on hardware that does not actually exist? The answer is to simulate it. Simulation does not, in itself, shave any time off development, but it does make it possible to do more work in parallel with the hardware design - or even before it. Customers still have to place a high degree of confidence in their silicon supplier shipping on time, but they can be given a description detailed enough, well before the chip hits the fab, that the final software can be ready days after the first samples have been packaged.

Mark Burton, head of GreenSocs, a developer of interface protocols for system-level design tools, says: "Every tool vendor will show you how, if you can start software engineering earlier, in parallel with the hardware activity, you will complete your project sooner."

Simulation has become more attractive recently because software itself, rather than a specific hardware circuit lying in a chip, has become the key to differentiation. Jeroen Leijtens, chief technology officer of Silicon Hive, a supplier of programmable signal processors, says: "In high-end TVs, the manufacturers have a problem in that they have to turn out new sets every six months. But they ship in such low volumes that a hardware chip solution doesn't make money anymore.

"They are happy to have the same platform as their competitors because they can differentiate based on algorithms. The tier-one suppliers want to implement in such a way that they can focus on intellectual propery (IP) - the algorithms, not just the design of a chip."

But they do need time to turn those algorithms into software. On subsequent generations of a product, there is the luxury of waiting for the silicon; on the first go-around, software engineers will increasingly have to deal with models.

Virtual platforms

STMicroelectronics has been delivering simulations of its software-programmable chips for several years. Speaking last year at the Design Automation Conference (DAC) in San Diego, Laurent Maillet-Contoz said modelling has proved very useful: "We can deliver an implementation platform to every software developer. And it is also useful for technical marketing: we can ship virtual platforms to customers."

Although some companies have been moving towards virtual platforms for software development for several years, Marc Serughetti, vice president of marketing at Coware, claims that many more are likely to go down that road. "We are seeing this in the wider business of our customers. We think we are at an inflexion point between proof of concept and moving more into production. What people have been mostly doing is low-level firmware development. We are seeing an extension where it covers application integration."

Burton notes that there are several advantages for software developers outside the chipmakers - those who will have little direct contact with the silicon designers - in starting to use simulations. "Until now, the focus has been on modelling the hardware as accurately as possible: the intention being to provide the software engineer with as close an environment as possible to the real hardware.

"However," Burton continues, "this misses three crucial areas where the productivity of both software and hardware engineers can be increased. First, a model can deliver far greater visibility of hardware components compared with the real hardware. A software engineer can take advantage of this both in terms of debug and in terms of tuning their code.

"Second, a model stresses the software. This is a hugely underused feature. Typically, because the investment in software for systems companies is so high, a software product may live through several generations of hardware platform."

ARM's latest release of its RealView Professional suite is aimed at software developers working on applications where the hardware is not ready. "This allows you to tune the software before you make the commitment to RTL," says Rod Crawford, ARM's product manager for RealView in North America.

The models in the RealView environment are based on four generic emulation baseboards. Models of specific platforms are created using a different tool: System Generator. Vincent Korstanje, product marketing manager for RealView outside North America, adds: "This is a complement to custom models. It is a ready-made solution that is less dependent on the hardware produced by the phone and silicon manufacturers and something that is available today, well before the platform is ready."

Burton explains that embedded software needs to take advantage of hardware features that will remain fixed through those generations and not on implementation-specific aspects. "A model can 'stress' the software by performing in ways that are correct but unlike the actually implemented hardware," he says.

"Imagine a pair of processors communicating using a semaphore. It may be that, in the first implementation of the device, one of the processors runs so much faster than the other that the semaphore is always available. In later generations, maybe clock speeds are altered in accordance with load. In this case, the semaphore is very important. If the software engineer had 'ignored' the semaphore in the first generation, their code would not work later. However, even in the first generation, we can write a model that tests this," says Burton.

There is also, he continues, a third advantage to modelling. "The notion of a functional block can be extended to include some software components. In this case, rather than modelling a block of hardware, an entire functional block, including some of the software elements, can be modelled together.

"This has two significant advantages. First, it means that if the functional block is implemented as part-software, part-hardware, that division can be made later in the design flow or, indeed, changed between subsequent revisions of the hardware. Second, such a model is extremely fast."

The chipmakers themselves are looking to pull more of these software-plus-hardware subsystems into their projects, realising that it is too expensive to buy all of the hardware pieces as ready-made IP cores and then write, in-house, the software that ties them together.

Alex Haggenmiller, senior manager in Infineon Technologies' computer-aided design department, says: "The next step for reuse has to be the IP system itself. In the last decade, we concentrated just on the hardware element." He notes that models are essential to allow the importing of complete subsystems: "We need methodologies for integrating IP into the system and we need very fast C models to answer the question: does the system work as planned?"

If IP hardware blocks did not come with software, he says, it would be a stumbling block - although it might not be a showstopper. If the block came with a definition of registers and other software-accessible ports, perhaps using the formats defined by the Spirit consortium's IP-XACT standard, that might be processed to at least build C header files. "And, with C models of the hardware, you can start software development early so the delay to get software is not so big."
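
As a sketch of what such processing might produce, the header below shows the kind of C-compatible register map that could be generated from an IP-XACT description of an imaginary UART block; the block name, base address, offsets and bit positions are all hypothetical.

/* uart0_regs.h - hypothetical header generated from an IP-XACT register
 * description of an imaginary UART block. All names and values are
 * illustrative, not taken from any real device. */
#ifndef UART0_REGS_H
#define UART0_REGS_H

#include <stdint.h>

#define UART0_BASE        0x40001000u   /* assumed base address */

/* Register offsets, one per IP-XACT <register> entry. */
#define UART0_DATA_OFFS   0x00u         /* transmit/receive data */
#define UART0_STATUS_OFFS 0x04u         /* status flags          */
#define UART0_CTRL_OFFS   0x08u         /* control bits          */

/* Bit masks, one per IP-XACT <field> entry. */
#define UART0_STATUS_TX_EMPTY (1u << 0)
#define UART0_STATUS_RX_READY (1u << 1)

/* Volatile access helper usable against real silicon or against a C model
 * that maps the same address window. */
#define UART0_REG(offs) (*(volatile uint32_t *)(UART0_BASE + (offs)))

#endif /* UART0_REGS_H */

Driver code written against such a header - polling UART0_STATUS_RX_READY before reading the data register, say - could then run unchanged on a C model and, later, on the silicon.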

Bob Maaraoui, a senior engineer with Texas Instruments, says: "It is more complex to accept IP with software. So, we are asking for C models to allow us to start system and software work way before the hardware is ready."

Typically, the models that the chipmakers develop are built using the SystemC language. This is basically C++ fitted with a class library of extensions that support hardware-oriented concepts such as timing and parallelism. Some regard the language as clumsy and inefficient, but it has become the de facto environment for building models that can be delivered to programmers, although ARM has taken a different approach with its RealView simulator. This uses a just-in-time compiler - similar to that used by Java runtimes - to convert ARM instructions into native workstation code. Korstanje says this lets the simulation run at up to 200MHz or so on a typical development PC. SystemC simulations tend to run much more slowly because they have to model the behaviour of the hardware accurately.
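
For readers unfamiliar with the language, the fragment below is a minimal SystemC sketch: a clocked counter whose module name, signal names and clock period are purely illustrative. It shows the hardware-oriented concepts the class library adds on top of plain C++ - explicit timing through sc_clock and sc_start, and parallelism through an SC_THREAD process that is sensitive to the clock.

#include <systemc.h>

// A trivial counter: one SC_THREAD process that increments on each rising
// clock edge. Module and signal names are illustrative only.
SC_MODULE(Counter)
{
    sc_in<bool> clk;
    sc_out<unsigned> value;

    void count()
    {
        unsigned v = 0;
        while (true) {
            wait();              // block until the next rising clock edge
            value.write(v++);
        }
    }

    SC_CTOR(Counter)
    {
        SC_THREAD(count);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[])
{
    sc_clock clk("clk", 10, SC_NS);   // 10ns clock period: timing is explicit
    sc_signal<unsigned> value;

    Counter counter("counter");
    counter.clk(clk);
    counter.value(value);

    sc_start(100, SC_NS);             // run the simulation for 100ns
    return 0;
}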

Some vendors have tuned their simulation environments to only call on the SystemC element where necessary. Memory accesses are so well understood that modelling them in SystemC is pointless. It makes more sense to emulate these within the instruction-set simulator. This is what Coware, among others, does with its simulation environment.
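
In principle, such a split might look like the sketch below, in which an instruction-set simulator services ordinary RAM accesses directly in native code and only drops into the slower hardware model for addresses that belong to a modelled peripheral. The class and function names are hypothetical, not drawn from Coware's or anyone else's tools.

#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for a transaction-level bus model (in practice a SystemC model).
struct BusModel
{
    virtual std::uint32_t read(std::uint32_t addr) = 0;
    virtual ~BusModel() = default;
};

class InstructionSetSimulator
{
public:
    InstructionSetSimulator(std::size_t ram_bytes, BusModel& bus)
        : ram_(ram_bytes, 0), bus_(bus) {}

    std::uint32_t load_word(std::uint32_t addr)
    {
        if (addr + sizeof(std::uint32_t) <= ram_.size()) {
            // Fast path: plain RAM is emulated directly in the simulator,
            // with no event scheduling or hardware-model call overhead.
            std::uint32_t word;
            std::memcpy(&word, &ram_[addr], sizeof word);
            return word;
        }
        // Slow path: peripheral registers go through the hardware model,
        // which keeps their behaviour accurate at the cost of speed.
        return bus_.read(addr);
    }

private:
    std::vector<std::uint8_t> ram_;   // RAM assumed to start at address 0
    BusModel& bus_;
};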

Performance will remain the area of focus for the next few years as developers wrestle with the trade-off between hardware accuracy and cycles per second. But, increasingly, embedded systems programmers will be spending a lot more time in a software-only world before going to the in-circuit emulator.
