[Image: Tesla dashboard]

IoT security threat as embedded systems struggle

The world of embedded systems has met the Internet and its horde of hackers. The result is not pretty.

It’s possibly the ultimate in faint praise for the state of today’s embedded systems. When he decided to try to hack into the Tesla Model S, Marc Rogers was pleasantly surprised by the protection Tesla’s engineers had given the car’s electronic control systems. The CloudFlare principal security researcher’s previous experience with the embedded systems that are beginning to form the Internet of Things (IoT) had been far less encouraging. “The way they designed it, I was not expecting to be impressed. Everything IoT I’ve hacked in the past has been a shambles. The Tesla was quite a pleasant surprise. We were able to compromise it, but it was much harder than with many other systems. The things they had built were the right things,” Rogers says.

Before Rogers gave a talk at the Defcon hacking conference in the US in August, Tesla provided customers with a software patch that fixed three of the most serious vulnerabilities that he and colleagues had found and were about to make public. “That patch made the car safe,” Rogers claims.

Chrysler was not so lucky. The company ordered a recall of 1.4 million vehicles affected by the same flaw that security researchers had demonstrated in a Jeep – a problem that let them take control of the vehicle by radio through the infotainment system.

Yet, despite the car’s relatively strong showing, Rogers noticed that the Model S developers had made numerous small mistakes that made it easy for him to hack most parts of the dashboard electronics. In one example, a plug-in flash-memory card contained a script whose incidental job was to look for updated maps online. However, the operating system allowed the script to run at the highest level of privilege. “A script that runs as root on a removable card? That’s not very clever,” says Rogers.
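The safer pattern is straightforward, as the minimal sketch below shows for a Linux-style system: the update script found on the card is executed under a dedicated unprivileged account rather than as root. The user and group IDs, account and file path are invented purely for illustration.

```python
import os
import subprocess

# Hypothetical illustration: run the map-update script found on removable
# media under an unprivileged account instead of as root. The IDs and path
# below are made up for the example.
UNPRIVILEGED_UID = 1000   # e.g. a dedicated 'mapupdate' user
UNPRIVILEGED_GID = 1000

def drop_privileges():
    # Runs in the child process before the script starts. Order matters:
    # clear supplementary groups and drop the group ID first, then the
    # user ID, otherwise the process could regain root afterwards.
    os.setgroups([])
    os.setgid(UNPRIVILEGED_GID)
    os.setuid(UNPRIVILEGED_UID)

def run_untrusted_script(path):
    # The parent may start as root, but the child demotes itself before
    # anything on the card executes.
    subprocess.run(["/bin/sh", path], preexec_fn=drop_privileges, check=False)

run_untrusted_script("/media/card/update_maps.sh")
```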

For Ken Munro, director of security analysis firm Pen Test Partners, the embedded-systems industry is way behind mainstream IT, even though that sector is still no poster-child for safeguarding data. Speaking at the first seminar organised by microelectronics organisation NMI’s recently formed task force on IoT security in late 2015, Munro pointed to an Internet-connected kettle that used the old Hayes AT modem command set with no encryption: “It’s why I love the IoT. It’s like going back in time.”

The protection for the program that allows a remote user to send commands in plaintext to the kettle? A default password of six zeroes. This is far from unusual: many TVs and personal video recorders use similar default codes to provide access to their service-engineer modes. But the implications of such an obvious choice of password change as embedded devices become connected to home networks and, by extension, the Internet.

Wi-Fi devices are, by nature, promiscuous. A common attack on public access points is to set up a router with a stronger signal and spoof the network name (SSID) of a well-known service. Computers that recognise the SSID will, in the absence of any other authentication mechanism, attempt to connect to it. The fake router may pass traffic through to the correct router to give users the impression they have connected to a legitimate device, but the hacker has set the stage for a successful man-in-the-middle attack.

In the case of the Wi-Fi kettle, Munro says: “I can drive past the house and connect to the kettle based on the SSID because I am using a higher-power signal and recover your Wi-Fi key in plain text.”

How do you find kettles to attack? As with most of the Internet, use a search engine, says Munro. IoT-centric sites such as Shodan and Thingful help users discover and locate devices that broadcast their presence. There are plenty of other examples of poor security planning in IoT and embedded devices.

My Friend Cayla, a doll that was another target of Munro’s investigations, can be made to swear like a character in an Irvine Welsh novel, despite the profanity-filtering software its manufacturer fitted to keep the toy child-safe. Although hacking the device opens the way to espionage in a family home through the built-in microphone, in practice the serious motives for hacking a toy doll are likely to be limited. The greater fear is of attacks on devices that exert some form of control over physical objects, such as a moving vehicle or a piece of industrial equipment: attacks that go beyond the proof-of-concept hacks seen at conferences.

The Stuxnet worm had a devastating effect on what were thought to be its intended targets – centrifuges used in Iran’s uranium enrichment programme – and had consequences for other users of the Windows-based software used to program logic controllers (PLCs). The idea of a Stuxnet aimed at cars provides the most graphic example of just how badly network connections can go wrong, and of the harm that developers are only just beginning to think about seriously.

“We’ve shipped something like a billion devices a year over the last decade,” says Geoff Lees, general manager of NXP’s microcontrollers business unit. “The vast majority of them were not connected [to the Internet]. But now there are requirements for vehicles and other systems to be connected, often under the conditions that the connection environment they will eventually be in isn’t today fully known. Many customers have relatively little experience in this field. The key challenge is to tame the Wild West that’s the network today.”

Not every embedded device falls short on security. Some sectors have had to deal with cybercrime for decades. Because cracked cards and codes that unlock encrypted channels and pay-per-view content bring crackers healthy sales and profits, the suppliers of TV decoders soon became aware of successful attacks on their software and hardware. The only reliable defence was to design for security from the outset.

Root of trust

Mobile phones have core security modules that protect not only some of the data and programs inside the device but the cellular network itself. The applications on smartphones can easily be hacked, but anything that involves the SIM card that stores the user’s credentials involves at least the same level of hacking skill as a pay-TV decoder.

The SIM card in a phone provides a ‘root of trust’: an element that software outside the card can reasonably trust. The root-of-trust concept is fundamental to design security as it provides a way for individual pieces of software and hardware to authenticate each other and so apply a level of trust to each other. Even in a system that has been compromised at one level, active authentication of software modules through the root of trust limits the damage an attack can do.
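A minimal sketch of how such authentication can work, assuming a shared key held by the root of trust: in a real device the key would never leave the secure element (SIM, TPM or secure enclave), and the names below are purely illustrative.

```python
import hashlib
import hmac
import os

# Sketch of challenge-response authentication against a root of trust.
# In practice DEVICE_SECRET would live inside the secure element and never
# be readable by ordinary software; it is a plain variable here only so
# the example is self-contained.
DEVICE_SECRET = b"key-held-in-the-secure-element"

def respond(challenge: bytes) -> bytes:
    # The trusted element proves it knows the key without revealing it.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)   # a fresh nonce for every check
assert verify(challenge, respond(challenge))
```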

Rogers points out that many embedded systems, as with enterprise networks, assume a perimeter model: “It’s hard on the outside, soft and chewy on the inside.”

It is not a good model for network-connected embedded systems, Rogers advises. “In IoT systems you have to assume that every exposed component will be compromised. It’s very likely that a web browser will get hacked: it’s one of the most popular hacking vectors. They will always be able to compromise something. You have to stop them compromising any more of the system.”

The element of protection that surprised Rogers about the Tesla Model S was its use of a gateway between the infotainment system – the part of the car that connects to the Internet – and the engine and steering controls. The gateway prevents hackers such as Rogers from breaking into the car’s control network through the relatively soft underbelly of the infotainment software.

It may seem odd to put engine controls and infotainment on the same network. But many car makers now use Ethernet to download software updates to the microcontrollers around the vehicle because the network protocol is so much faster and more flexible than CAN, the communications bus that relays data between the real-time controllers.

The gateway reinterprets any commands it receives and will only allow certain types through. For the most part, these do not affect steering or braking when the car is travelling at speed. Although Rogers was not able to punch a virtual hole in the gateway software during his initial work on the Model S, a defect might still exist that allows arbitrary commands to be fired through.
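The filtering logic can be pictured as a whitelist check of the kind sketched below. The message identifiers, speed threshold and function name are invented for illustration and bear no relation to Tesla’s actual implementation.

```python
# Illustrative whitelist filter for a gateway between the infotainment
# network and the control network. The identifiers and the 8 km/h threshold
# are made up for the example.
ALLOWED_IDS = {0x1A0, 0x2B4}        # e.g. climate control, seat adjustment
SPEED_SENSITIVE_IDS = {0x2B4}       # commands refused once the car is moving

def gateway_allows(can_id: int, vehicle_speed_kmh: float) -> bool:
    """Decide whether a frame from the infotainment side may cross over."""
    if can_id not in ALLOWED_IDS:
        return False                # anything off the whitelist is dropped
    if can_id in SPEED_SENSITIVE_IDS and vehicle_speed_kmh > 8.0:
        return False                # refuse speed-sensitive commands at speed
    return True

assert gateway_allows(0x1A0, 110.0)     # benign command passes
assert not gateway_allows(0x7DF, 0.0)   # unknown command is blocked
```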

Like many others, Rogers says embedded-systems developers should expect to ship software patches long after the product itself has shipped, locking systems down as breach points are found. And there could be a lot of patches if developers are not careful.

Needle in a haystack

A major problem with embedded software today is one of scale, says Thomas Cantrell, now software development manager at Amazon Web Services, who studied the issue while technical marketing manager at Green Hills Software last year. Although low-end sensor modules will often use less than 100KB of code, the gateways and routers they talk to can demand megabytes for the Linux operating systems many of them now run. According to update data collected over five years on vulnerabilities found in software developed for Internet applications, the average number of discovered security defects hovered around 0.05 per thousand lines of code.

A recent build of Yocto – a Linux distribution developed specifically for embedded systems – contained 33 million lines of C source code, Cantrell says, plus 3 million lines of assembler and scripts. An IoT gateway device might use a stripped-down subset of that code base. But even with a device built using a subset of 10 million lines of source code, Cantrell says analysis of known error rates indicates some 500 vulnerabilities would be found within five years.
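The arithmetic behind that estimate is a straightforward back-of-envelope calculation using the figures quoted above, as the short check below shows.

```python
# Back-of-envelope check of the estimate quoted above: roughly 0.05
# discovered vulnerabilities per thousand lines of code over five years,
# applied to a 10-million-line subset of the code base.
lines_of_code = 10_000_000
defects_per_kloc = 0.05
expected_vulnerabilities = (lines_of_code / 1000) * defects_per_kloc
print(expected_vulnerabilities)   # 500.0
```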

To try to limit the possible damage, the idea of a gateway wrapped around a root of trust provides a powerful design technique if used wisely. It lets developers focus maximum effort on critical functions without having to worry so much about errors in less important software compromising everything. Virtualisation makes it possible to put almost all the software in a system into a sandbox. Only a hypervisor has access to everything in the processor and its associated memory and I/O, and every transaction that tries to access low-level hardware has to go through the hypervisor.

In principle, being much smaller than a Linux install, the hypervisor code can be scrutinised for security holes more readily. Green Hills, for example, put its Integrity hypervisor kernel through certification to EAL6+ at the National Information Assurance Partnership (NIAP). The highest level possible is EAL7, which requires the entire software build be verified mathematically. Fox-IT’s DataDiode, which guarantees one-way communication into secure networks, is one of the very few pieces of software to be verified to EAL7.

The problem for the hypervisor is to determine that it is itself running on trusted hardware and has not been installed inside someone else’s sandbox to enable a man-in-the-middle attack. The Trusted Platform Module in PCs and the SIM card in phones can provide those functions, but ARM and Imagination Technologies are introducing secure modes for their processor cores that allow the creation of a root of trust without having to rely on dedicated crypto-controllers. ARM, for example, intends to put its Trustzone-M protections into future versions of the Cortex-M series of microcontrollers.

Trustzone-M acts as a gateway within the processor, only allowing accesses to the secure side through known entry points that should be well protected by code that checks for unexpected or illegal commands and verifies their provenance. Ultrasoc, which develops low-level debug technology for system-on-chip (SoC) devices, is adding circuitry that will detect and warn of breach attempts, working independently of the hypervisor.

“You still need secure hypervisors, MMUs and other protection mechanisms but we are saying the logic for debug and measurement can be used to give you an additional level of security,” says Rupert Baines, CEO of Ultrasoc. “You have the hardware locks; we are the burglar alarm.”

Even when the protection hardware is in place, companies have yet to learn to use it effectively. Rogers points to Tesla’s use of ARM processors that use the form of Trustzone developed to work in smartphones. But the developers made no use of the hardware features. “They have all these tools there but didn’t use them,” he complains.

“Things like this are the oxymorons of Tesla’s development process. They had designed in the elements of a secure system but somebody somewhere cut corners.”
If anyone is going to find where those corners have been cut, it will be the hacker.
