The embedded computers used to control industrial machinery are starting to talk among themselves – and it's a process that could change the way we program control systems.
When the term 'embedded system' was first coined it was meant to distinguish the computers that go into machines from the desktop and office computers that dominated the market at the time. They were to be hidden away from view, designed not to be noticed. But now embedded systems are coming out of their shells.
Some pundits call it Industrial Revolution 2.0, but don't worry, machine tools are not going to be hitting the 'Like' button on Facebook anytime soon, although tweets that advertise their need for an urgent service might appear. The real revolution will be hidden from view, but it is one that mirrors many of the existing trends in IT in which machine intelligence moves out from PC-class hardware to smaller devices at the edge and, at the same time, into the cloud.
Talking about the emerging generation of IT-based industrial control systems, Tony Partow, senior business manager at embedded chipmaker Maxim Integrated, says: "In the past there was a central controller responsible for making decisions - but communicating with the central controller introduces a bottleneck." Machines are getting smarter and the number of sensors they use is rising rapidly. By moving processing down to the machine, control loops can be shortened and productivity increased.
The Industrial Revolution 2.0 trend (sometimes known as the 'Technological Revolution') is but one part of a larger shift that encompasses several more recently coined phrases, such as machine-to-machine (M2M) communication, the Internet of Things (IoT) and Cyber-Physical Systems (CPSs). The ideas behind them are not new: they emerged in the late 1990s under a different heading, ambient intelligence. However, a recent report by National Instruments (NI) named CPSs as one of the key emerging technology trends that will accelerate productivity in industries such as automotive and aerospace.
ARM CTO Mike Muller sees the IoT as a major market that will change the way that companies with physical infrastructure will use IT. He cited the example of SFPark. As part of a trial, SFPark, part of San Francisco's municipal transportation authority, is deploying sensors across several thousand on-street and garage-based parking bays. They will detect whether or not a car is parked and relay that data in real-time to the agency's servers, which in turn feed information on the closest parking bays to drivers using a smartphone app.
Vehicle parking management is proving something of a role-model for sensor-based M2M applications. A similar smart project is underway in the Spanish city of Santander, using technology provisioned by Telefónica. This involves embedding magnetic sensors able to detect a car parked over them in parking bays, with the aim of reducing traffic congestion - on the basis that a lot of stop-start urban congestion is down to drivers circling the city looking for somewhere to stop for a while.
Street-parking sensors and apps
A pilot street-parking scheme in Birmingham's Jewellery Quarter was launched last year, with the aim of helping motorists find available spaces and relieving traffic congestion. Amey, the city's road maintenance contractor, partnered with Birmingham City Council to embed around 200 small sensors to identify the presence of a parked vehicle; this data is then presented via the 'Parker' smartphone app or website, which motorists can check in advance or in real-time.
SFPark, meanwhile, has a commercial reason underpinning it: it intends to use the real-time data to perform demand-based pricing to a finer degree than it is doing already. When popular events are on in central San Francisco expect the parking fees there to increase, while garages that are further away might see prices fall.
"You can change the way you make money out of a car park - it's about traditional businesses and the technology that could help make money out of their assets," says ARM's Muller, arguing that much of the growth in the IoT will be not so much in consumer-focused applications or large projects, such as the smart-grid rollout, but less publicly visible areas in the middle ground. "The 'I' might not always stand for 'Internet'."
Kaivan Karimi, executive director for global microcontroller strategy and business development at Freescale Semiconductor, says applications for scattered sensors will appear in sectors such as agriculture: "Do you need to water every day? Sensors will tell you whether you need to water or not."
Civil engineering represents another area ripe for sensor proliferation, Karimi says: "How many bridges collapsed in the summer because of the ageing infrastructure? You can put tiny seismic sensors on the bridges and have them report back on their condition to see whether the structural integrity is intact. Other environmental sensors can say whether it is freezing and snowing so that you can send a snowplough.
"The cost of these little sensors along with a microcontroller and short-range connectivity is less than $2. Why would you not spend that?" Karimi asks. "The economics could be tremendous. We've done the math on areas such as logistics. If you put sensors in milk vats you can monitor fluctuation in the milk temperature. Even if temperatures stay within the safe zone, there is a correlation in the shelf-life as the milk gets warmer. You could use sensors in the container to smell the methane gas produced by the milk as it warms. If they smell it the system can say: 'This has to go to Cambridge because it won't make it to East Kilbride'."
At the Design Automation Conference (DAC) 2013 in Austin, US, Professor Alberto Sangiovanni-Vincentelli of the University of California at Berkeley, claimed: "The world is going to be instrumented. The world will become a gigantic sensing machine" - and it's going to be intelligent.
"You no longer use a single device to perform a function. It is a collection of devices coming together that give you a particular function," adds Prof Sangiovanni-Vincentelli. "The function is determined by the availability of what? By sensing, actuating, computing, storage and energy. What we have is a humongous network, distributed, adaptable, hierarchical, hybrid control system. That's a very complicated problem to solve."
Traditional software methods are unlikely to cope with this highly distributed architecture. Prof Sangiovanni-Vincentelli argues: "Take the famous 'V' diagram used in software development. It's an outrageous method. Why? Because it's sequential. You can get to the end and find out that the system does not work so you have to start again."
Hardware driving software?
Prof Sangiovanni-Vincentelli highlights the techniques used in chip design as potentially pointing the way towards better software-development practices that will be suitable for highly distributed systems. Although IC development remains largely sequential, hardware designers have adopted a number of techniques that deal with the massive complexity of designing chips that have multiple cooperating engines all operating simultaneously. These techniques significantly constrain the designer's choices.
"You don't let the designer do what he wants. Exploring design to its limits is an extremely bad idea," argues Prof Sangiovanni-Vincentelli. "Methodology is to be free of choice. Complexity is fought by decomposition and abstraction." He points to the use of platform-based design by hardware engineers, in which pre-designed components are assembled rather than designed from scratch. "You design by composing parts. So that when you put them together you get something that works."
Another technique that Prof Sangiovanni-Vincentelli favours is 'design by contract', although this has yet to become widely adopted even in hardware engineering. The Eiffel language, developed by Bertrand Meyer in the 1980s, was an attempt at this approach. It uses the idea of contracts to check input and outputs to functions to avoid clumsy errors from causing crashes or odd behaviour. "When you put complex systems together, you need to make the communications very explicit," says Prof Sangiovanni-Vincentelli.
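The contract idea can be sketched in ordinary Python. Eiffel builds `require` and `ensure` clauses into the language itself; the decorator and function names below are our own illustration, not part of any particular tool:

```python
def contract(require=None, ensure=None):
    """Decorator that checks a precondition on the inputs and a
    postcondition on the result, raising AssertionError on violation."""
    def wrap(fn):
        def checked(*args, **kwargs):
            if require is not None:
                assert require(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            if ensure is not None:
                assert ensure(result), "postcondition violated"
            return result
        return checked
    return wrap

@contract(require=lambda x: x >= 0, ensure=lambda r: r >= 0)
def isqrt(x):
    """Integer square root; the contract makes the interface explicit."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```

A caller passing a negative number fails loudly at the boundary rather than producing the "odd behaviour" the professor warns about, which is exactly the point: the communication between components is made explicit and checkable.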
Professor Edward Lee of the University of California at Berkeley reckons that software development needs to go further and fill in a vital element that has been missing for some 50 years: the concept of time.
"By choice, the instruction set architectures used by almost all processors have no temporal semantics. It was a choice made by IBM in the 1960s when designing the System/360 mainframe computer. You didn't need timing semantics to run payroll. But [in mainstream computing] that approach hasn't been rethought since. As a result, there is a layer of abstraction that all software uses that is hiding the underlying timing," Prof Lee explained in a lecture at DAC. "Timing for computers these days is a performance metric not a correctness metric."
Prof Lee says the single-threaded program is a useful deterministic model of what should happen in a system. Everything happens in strict sequence. Even though processors may re-order events to improve performance, they guarantee that the results appear in the order specified by the programmer. That changes radically when programs are multi-threaded so that they may run on different cores simultaneously. Suddenly, events can happen in any order.
"Anyone who has written multi-threaded code knows the penalty of giving up that determinism," says Prof Lee. Even single-processor code suffers. Pointing to a fragment of code written to control the timing of events, Prof Lee notes that the commands don't appear in the main body of the program. They are called upon when the processor receives an interrupt - entirely asynchronously to the core program.
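The contrast can be seen in a few lines of Python (our own sketch, not Prof Lee's example): the sequential loop is fully deterministic, while the threaded version is only correct because a lock re-imposes an order on the shared update.

```python
import threading

def single_threaded(n):
    total = 0
    for _ in range(n):
        total += 1            # strict program order: always the same result
    return total

def multi_threaded(n, workers=4):
    total = 0
    lock = threading.Lock()
    def work():
        nonlocal total
        for _ in range(n // workers):
            with lock:        # without this lock, concurrent read-modify-write
                total += 1    # updates can be lost, and the final count
                              # becomes timing-dependent
    threads = [threading.Thread(target=work) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

Remove the lock and the program may still return the right answer most of the time, which is precisely the brittleness Prof Lee describes: correctness comes to depend on scheduling accidents rather than on the program's semantics.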
"Where is the timing in this model?" Prof Lee asks. "It's not there. It only turns up in the physical realisation. The same program will behave differently on a different board. Even on the same board the program may behave differently in another environment. In effect, the designer is stuck working without a model. One consequence is that when timing affects system behaviour, the design becomes brittle."
Prof Lee claims this brittleness leads companies such as Boeing to stockpile all the processors they may need to build and service aircraft over their entire operational lifetimes. The chips are stored under nitrogen to prevent them degrading, because of the problems that may be caused by fitting a replacement processor that is only similar, not identical.
If unpredictable timing interactions between hardware and software lead to subtle bugs in single-processor systems, the situation is likely to be a lot worse in highly distributed environments conceived for the IoT, where code may be activated on different processors depending on their availability. It does not bode well for reliable systems development.
Prof Lee claims our models of how software and the physical world interact need to change. We do not necessarily need to throw away deterministic models such as single-threaded code, but should incorporate them better into more realistic cyber-physical models. One direction is to use processors whose instruction sets and architectures take timing into account.
That has begun to happen with processors such as the Xmos architecture developed by Professor David May of the University of Bristol. The key, says Prof Lee, lies in model-based design in which the interfaces between different types of model are much more explicit and better-defined.
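One way to make timing part of a program's semantics, rather than an accident of the hardware it runs on, is the discrete-event style used in model-based design tools: events carry logical timestamps and are processed in timestamp order, so the behaviour is the same on any board. The class below is our own minimal illustration of that idea, not the Xmos instruction set or Prof Lee's tooling:

```python
import heapq

class LogicalTimeScheduler:
    """Discrete-event scheduler: actions run in logical-time order,
    independent of how fast the host processor happens to be."""
    def __init__(self):
        self._queue = []
        self._seq = 0          # tie-breaker keeps equal-time events deterministic
        self.now = 0.0

    def at(self, t, action):
        """Schedule a zero-argument action at logical time t."""
        heapq.heappush(self._queue, (t, self._seq, action))
        self._seq += 1

    def run(self):
        """Process all events in time order; return a (time, result) log."""
        log = []
        while self._queue:
            t, _, action = heapq.heappop(self._queue)
            self.now = t
            log.append((t, action()))
        return log
```

Events scheduled out of order still execute in timestamp order, so the model, not the physical realisation, defines when things happen.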
Software engineers can continue to work with imperative software, but the tools are designed to make this model fit better into physical models that may rely on continuous differential equations. To deal with the potential complexity of massively distributed systems, Prof Lee proposes the use of 'aspect-oriented modelling', borrowing from the concept of aspect-oriented programming that has begun to be adopted in some software circles.
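In software terms, an aspect weaves a cross-cutting concern (such as the latency a control engineer cares about) around code without touching its body. A toy Python sketch, our own example rather than anything from Prof Lee's work, uses a decorator as the weaving mechanism:

```python
import functools
import time

def timing_aspect(fn):
    """Weave a latency-measurement concern around fn without
    modifying the function body itself."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapped.last_latency = time.perf_counter() - start
        return result
    wrapped.last_latency = None
    return wrapped

@timing_aspect
def control_step(setpoint, measured, gain=0.5):
    """Core model: a trivial proportional-controller step."""
    return gain * (setpoint - measured)
```

The controller code stays a clean statement of the algorithm; the timing concern lives elsewhere and can be inspected, or removed, without touching it.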
The problem with many model-driven developments today, argues Prof Lee, is that although they increase the level of abstraction, they easily become overly complex. Some Matlab models of large industrial systems can contain hundreds of levels, each with a thousand elements within it.
"Some people quickly become proud [of] the complexity of their models - but you should be embarrassed by that complexity," says Prof Lee. "Models should be simple. A control-systems engineer doesn't care about the details of the network of the system, but [they do] care about the processing latencies."
The idea behind aspect-oriented modelling is to preserve abstraction for engineers who do not need to see the underlying details. Network or software engineers can drill down into those parts of the model but, at the same time, see only a high-level representation of the control algorithm.
As a result of changes like these, the focus is moving up from the level of the individual controllers. "There is definitely a move towards systems-level thinking," according to Rob Oshana, director of global software R&D at Freescale Semiconductor. "We are looking for more analytical thinking. That is going to be a skill set that becomes very important going forward."
Re-engineering the world through a massive roll-out of sensors and electronic controls will depend on our ability to re-engineer the world of software.