The rise of heterogeneous devices and the people problem

Smaller, faster, better has been the mantra across all fields of electronic engineering for decades. These days that has gone even further – smaller, faster, better, but all in one device, with all engineering disciplines working as one unit.

Heterogeneous systems - systems comprising diverse parts - have been a part of electronic engineering since the dawn of the Integrated Circuit.  After the CPU came the microcontroller, which really ruffled feathers: a software system, but one likely doing low-level hardware interaction in real time.  Should a hardware engineering department be responsible for it, or should the software department?  Heck, for complex software we could just add a dedicated microprocessor too.  Whilst we’re at it, high-speed logic can be done in a Field Programmable Gate Array (FPGA).  Radio Frequency communication as well?  We’ll need an RF section.  Marketing would like an AI angle too.

These different engineering worlds are now being integrated together into a single device.  We are edging towards the day we just apply power to a wonder-chip.  (We’ll never quite get there, but it won’t stop people trying.)  Therein lies one of the biggest difficulties - how to develop and debug an entire system in a single chip.  The technology may not be the problem…

It all seems like a natural progression of technology – cramming more into an ever-more capable device.  Transistor sizes shrink, silicon synthesis becomes more turnkey and you get more device ‘bang for your buck’.  What can’t shrink is the expertise required to develop on such a device.

The team of specialists

Over the years I’ve been involved in rack-based systems designed by a group of engineers, all of whom have designed and proven their own contributions.  As far as they are concerned “their bit works”, but initially the system as a whole did not.  The integration and test engineers wait patiently for one of the developers to do something.  Everyone knows that the person who decides to investigate will be lumbered with this laborious but essential task.  The poor soul who takes this on also knows they will have to delve into other engineering disciplines, outside their comfort zone, to trace the problem through.  Toes may get trodden on, code which was never meant to be seen gets picked apart, untouchable historic designs get questioned - all in the name of product development.  This is a time for transparency, not ego.

At least with a physically large design you can probe for signals and track events on various pieces of test equipment.  Now that it is going to be under the bonnet of one chip, it’s a different game.

Your favourite tool may be supplanted by one common to the multiple technologies inside a device.  Xilinx Zynq devices (Arm cores plus FPGA fabric) have two debug ports to allow individual debugging of either the Processor Section or the Programmable Logic, each deliberately avoiding the other.  This can be a great relief to the traditional design departments, who get to keep their respective development and debug tools.  On Zynq it is also possible to chain these ports into one, so that tools aware of both worlds can give greater insight.  Other devices may only offer a single, specific porthole into their internals.  Vendors will naturally offer a toolset to work with this, but it may be different to what people are used to.  Suddenly, this new wonder-device meant to solve everyone’s design problems is starting to upset the apple cart for engineers - not just in one discipline, but in all the technologies it encompasses.

Demarcation in larger companies can start to throw up barriers between the key developers.  A technological step up in device integration can actually end up reducing a design group’s productivity.  People and departments like their own ways of working and don’t necessarily want to play nicely with outsiders wanting to poke around.  Sometimes a highly integrated device is just a little too different for people’s liking.

One approach, far from ideal but helpful, is to increase the visibility of some of the internals of the various sections.  Application processors could log more, real-time processors can show events occurring on IO or serial lines, FPGA logic can flag up certain states and events.  With much more now handled internally on a heterogeneous device, this should free up pins that would otherwise have carried various interconnecting buses.  By making it easier for someone without specialist tools and knowledge to gain insight when something overall goes awry, a system problem can be directed to the appropriate developer.
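As a sketch of what such visibility could look like, a real-time core might keep a small ring buffer of timestamped event codes that anyone can read back over a spare UART or debug connection, without specialist tools.  This is only an illustrative outline - the event names, buffer size and fields here are invented, not taken from any particular vendor’s design.

```c
#include <stdint.h>

/* Illustrative event codes - a real design would define its own. */
enum {
    EVT_BOOT      = 1,   /* system came out of reset        */
    EVT_IO_RISE   = 2,   /* an input pin changed state      */
    EVT_UART_RX   = 3,   /* a byte arrived on a serial line */
    EVT_FPGA_FLAG = 4,   /* the FPGA fabric raised a flag   */
};

#define EVT_LOG_SIZE 64u          /* power of two keeps the index maths cheap */

typedef struct {
    uint32_t tick;   /* timestamp, e.g. a free-running timer count */
    uint8_t  code;   /* which event occurred                       */
    uint8_t  data;   /* small payload, e.g. a pin number           */
} event_t;

static event_t  evt_buf[EVT_LOG_SIZE];
static uint32_t evt_head;         /* total number of events ever logged */

/* Called from wherever something notable happens; oldest entries
 * are silently overwritten once the buffer is full. */
void evt_log(uint32_t tick, uint8_t code, uint8_t data)
{
    event_t *slot = &evt_buf[evt_head % EVT_LOG_SIZE];
    slot->tick = tick;
    slot->code = code;
    slot->data = data;
    evt_head++;
}

/* Read back the i-th most recent event (0 = newest).
 * Returns 1 on success, 0 if no such event is held. */
int evt_peek(uint32_t i, event_t *out)
{
    if (i >= evt_head || i >= EVT_LOG_SIZE)
        return 0;
    *out = evt_buf[(evt_head - 1u - i) % EVT_LOG_SIZE];
    return 1;
}
```

A simple command on the debug console could then dump the last few entries, giving a non-specialist enough of a timeline to point the problem at the right department.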


Can a single engineer handle a heterogeneous design?

To some degree, yes.  If a person understands the system intention and the expected operation, then it is ‘just’ the massive undertaking of designing with all the different technologies.  Silicon vendors know this and provide a range of tools to help where they can.  After all, it is in their interest to help you use their chip, leading to more sales.  These tools have their own learning curves, though, and may not integrate fully into your usual development flow.

There is a risk that the more a tool automates a task for you, the less is understood of what is truly going on.  Trying to manually configure and control your system would be a massive and horrendous undertaking; after all, there is more going on inside these complex devices than your platform will ever need.  Wizards and auto-generation tools are a great productivity boost, up until the point that something goes wrong and you must dig around in what something else has created for you.  Leaving it all to the computer may not be a great idea, but it can certainly help you on your way.

Creating a heterogeneous system with minimal people can be done.  The trick is to perform good engineering in all areas.  Easier said than done, and harder to explain to a project manager.  Progress will be incredibly slow as a problem in one area holds the project up, putting pressure on timescales.  There will certainly be an aspect of Jack of All Trades, Master of None.  Just don’t expect fanciful and clever designs to emerge.  In this type of system, simpler is almost certainly better.  There will be an awful lot to do, just to get the basics in all the required areas up and running.

The engineers who will be implementing all of this must be consulted in the early phases of architecting a design.  It can’t be left to project managers who see a heterogeneous device as something of a short-cut.  I’ve seen FPGA design treated as software, and even witnessed people shuffled between the two departments in a bid to make a project plan hit its targets.  On the surface, I can see how the comparison is made.  Both disciplines spend a long time writing code, developing code tests, maintaining code repositories, performing simulation/emulation and debugging.  By the time new hardware is available, both can have a lot of the system ready.  They are fundamentally different types of engineering, though - ones that happen to share similar top-level practices.  With the culture of software being continually updated and deployed, the same shouldn’t be expected of FPGAs.  The FPGA sales headline of “reconfigurable hardware” shouldn’t be interpreted as “we can sort this out afterwards”.  The worst thing of all would be to deliberately plan a hardware design around an empty FPGA, assuming that whatever is eventually wanted will fit and meet timing constraints.


Security: the alleged barrier to progress

The pressure of even getting signs of life out of a new system can be great.  It is at this point there is a huge temptation to turn off or bypass security features (which, of course, have been designed in from the start).  Security just seems to get in the way of the simplest things.  Although it feels like more effort to work within the security restrictions, we all know it is the correct thing to do.  We also know that if we circumvent things now, they’ll never be put right afterwards.  Being human, we’ve all opted for the easy route at least once.

A downside of heterogeneous devices is that their ‘attack surface’ can be larger, with more potential entry points for an attacker, even though the device is physically smaller.  This is especially notable when stepping up to a complex system, rather than squeezing an already complex system into a smaller space.  Rather than one type of engineering design at play, there are now many.  Real collaboration is required, rather than merely meeting a specification.  As with any large system, one weak point can be an entry point into the other areas.  With many of the interfaces facing inwards, it is all too easy to assume they have a degree of security simply because they are not accessible from outside.  In a design where a few people are juggling multiple technologies, there is always the risk that not all the holes have been plugged.  Or worse, everyone assuming ‘the guy at the other end handles security’.


“I’ve done something on a Raspberry Pi, that wasn’t so hard.”

Adding a rich OS such as GNU/Linux into a system just isn’t the same as bolting together a Raspberry Pi with something.  It is fantastic that there is a growing maker community around core systems such as the Raspberry Pi and Arduino platforms, along with complementary kits such as the ShieldBuddy.  For a long time hobby electronics had been diminishing, as the interesting components became too small to build with practically at home.  Now a new market of add-on ‘shields’ and ‘hats’ has emerged to address this.  In one sense, this is great news - enabling a new generation of engineer.  In another sense, it may be glossing over some of the complexities and subtleties of a real design.  Raspbian, the go-to Raspberry Pi operating system, is more akin to a desktop computer experience.  Even without operating the desktop, there is a fully-fledged software repository available to help people on their way.  With a single command line, a complete webserver can be downloaded, installed and started.

Embedded Linux is a different world and there are plenty of books dedicated to it.  For each topic these books cover, another book somewhere delves into its detail.  As a rule of thumb, if you can’t already do everything you want from a command line, it’s not for you.  Very little is instantly available and everything has to be built from the ground up.  Great strides have been made to automate the building of custom embedded Linux distributions with the likes of the Yocto Project.  This, in true Linux style, first builds itself and the tools required before building for the target you want.  Silicon vendors can normally give you a step up in building Linux for their device, and may offer a pre-built image to boot from.  Since they can’t offer everything in one go, you will almost certainly need to modify the build for your own needs.  It’s amazing how many command-line tools you take for granted don’t show up by default.  Don’t be fooled into thinking a move from a Raspberry Pi to another platform will be straightforward.

Even the worlds of classic embedded microcontrollers and embedded Linux are poles apart, despite both being software disciplines.  Tools and techniques are available for each, but they differ.  Even worse, the GNU/Linux world evolves, with software packages almost going in and out of fashion.  What was seen as the way of approaching a problem at one time may have fallen by the wayside in later years.

It can be a full-time job just keeping up with what is happening in the Linux world.  The same can be said for web technologies.  The World Wide Web has changed beyond recognition since its inception.  If your platform has network connectivity, then it will almost certainly want to use a back-end service based on an Internet technology.  At least electronics is constrained by the laws of Physics whereas software, by its very nature, will continue to change.

There are other areas engineers will still not be able to do well.  We are living in a world where a humble room thermostat now requires a full-colour display amongst other things.  Engineers are renowned for their highly functional and capable systems.  They must, however, be kept away from anything graphical otherwise it will certainly look like it was designed by an engineer.

Enter the artists - graphic designers, musicians, animators and more.  With such high expectations on the user experience these days, these people are at the forefront of how your product is perceived.  No matter how technically brilliant your system may be, there will always be someone to complain about the colour and brightness of your LEDs.

Normally engineering and artistry are kept well apart, with the occasional curious crossover.  I wouldn’t expect an engineer to create audio and imagery in the same way I wouldn’t expect an artist to compile API calls.  If the product presentation is going to be key, then a good multimedia framework is going to be essential, along with the tools to bridge art and code.  Qt is one such framework with tools that can import from Adobe Photoshop.

In one particular project, I architected a product in multiple ways.  The project needed a graphical display, a webserver over Ethernet, a small local filesystem and specific real-time capabilities.  One architecture used a single powerful Cortex-M microcontroller.  With an RTOS and extensive Keil middleware blocks, all the requirements could be ticked.  It would be lean, compact and have a low BoM cost.  Developing the top-level graphical application would be done by the customer.  In this situation the customer would either have to rebuild the entire system each time, or perhaps keep their application in a mini filesystem, or maybe divide up the flash memory.  Either way, this architecture was starting to become restrictive for them as a platform.  An alternative heterogeneous architecture, with a Cortex-M for real-time operations and a Cortex-A running Linux, was more palatable.  Their top-level application could now be developed separately on a desktop PC, and graphics could easily be changed and uploaded.  Brought-in web developers would have something familiar to work with, rather than a bespoke offering.  With the Cortex-A and -M together in a single device, the PCB size was kept to a minimum.
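A split like this usually hinges on a simple message channel between the Linux side and the real-time side.  Real parts would use a hardware mailbox peripheral or a framework such as OpenAMP/RPMsg; the single-writer shared-memory queue below is only a toy sketch of the idea, with invented names, to show how little the two sides actually need to agree on.

```c
#include <stdint.h>

#define CH_SLOTS 8u   /* power of two keeps the index maths cheap */

/* Toy command channel in shared memory.  One side only ever writes
 * head, the other only ever writes tail, so no lock is needed for a
 * single producer and single consumer.  (On real dual-core hardware,
 * memory barriers and cache management would also be required.) */
typedef struct {
    volatile uint32_t head;     /* written only by the sender   */
    volatile uint32_t tail;     /* written only by the receiver */
    uint32_t          msg[CH_SLOTS];
} channel_t;

/* Sender side, e.g. the Linux application posting a command. */
int ch_send(channel_t *ch, uint32_t m)
{
    if (ch->head - ch->tail >= CH_SLOTS)
        return 0;                          /* channel full */
    ch->msg[ch->head % CH_SLOTS] = m;
    ch->head++;                            /* publish the message */
    return 1;
}

/* Receiver side, e.g. the real-time firmware's main loop. */
int ch_recv(channel_t *ch, uint32_t *m)
{
    if (ch->tail == ch->head)
        return 0;                          /* nothing pending */
    *m = ch->msg[ch->tail % CH_SLOTS];
    ch->tail++;                            /* release the slot */
    return 1;
}
```

The appeal of the heterogeneous architecture was precisely this narrow interface: the customer’s desktop-developed application only had to speak a small command protocol, while the real-time details stayed on the Cortex-M.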

A heterogeneous device may even be wasteful in terms of features, but you gain in terms of greater flexibility in development, architecture, and at times, cost.  In fact, it is quite rare to utilise every section of a complicated chip.  Projects which do try and squeeze every last drop suffer from having no wriggle-room for when problems arise.

Close, but not quite there

My main TV suffers from one of my biggest gripes: (poor) software-controlled volume.  It has multiple modes - regular TV, external sources, media player, internet connectivity and so on - all with animated full-colour icons.  It looks very smart, but I know that in certain modes I need to turn the volume down.  Except that during certain phases, such as running an animation or switching modes, I can’t, because the volume handler is not operating at that time.  The TV starts blaring out, and only then can I start to turn the volume down - in small steps at that.  All the developers have done their bit, and even included artistic elements, to specification I’m sure.  For all the smart, integrated features (which are now already obsolete and don’t contribute), it taints my user experience.

So, what can be done for a successful heterogeneous project?

  • Get a good group of people together where possible. Not just good engineers - good people.  Their skills ought to overlap, to give a better sense of the whole.  No engineering rock stars though; this is going to be a team sport.
  • Initially create a functional platform along with any key multimedia elements. New features and shinier bells and whistles can be added later, but concentrate on the core product first.
  • As with all designs, avoid specification creep. As soon as Linux or FPGA is mentioned, people start dreaming up more fanciful ideas.  It is important not to lose sight of the application for the sake of unnecessary features.
  • Keep it constrained to ‘your’ platform. Trying to add in too many future options means you will never get a good platform covering the essentials.  You can’t hardware test interfaces that don’t yet operate.
  • Although it is nice to have a range of ideas from people, limit them. A dedicated technical project leader is needed to help steer the project in a clear direction.
  • Make sure you have the tools to handle the different sections, preferably as a whole. For a mix of Cortex-A and Cortex-M work, Arm Development Studio is available for trial with a 90-day licence.

Luckily for me, our team at Hitex has been growing and evolving into this multi-faceted machine for some time, and as an engineer I can really see the benefit. Hardware, software, project managers, senior developers, junior engineers with a huge thirst for learning (and often new and exciting ideas!) and dare I say it – marketing, all working towards a final goal.

With all the current changes to our working environment, now is also a great time to rethink how heterogeneous device development could bring real change and innovation across the board. Heterogeneous devices are here to stay and their rise will only continue as demand grows. What’s next? Who knows, but I know that when our team are asked to think outside the box - to get more into a box - we will. Including all the shiny bells and whistles. Now where has the TV remote gone this time…

What's your next challenge? Do you have an idea for a new design? Or do you simply want to pick our brains and see where the conversation goes? Reach out to us at Hitex on 024 7669 2066 or drop us a line at sales@hitex.co.uk 
