
A Meltdown in computer architecture, as Spectre haunts Intel


Since the Meltdown and Spectre attacks were disclosed, processor manufacturers and operating-system providers have been struggling to cope with the implications of a wide-ranging group of security flaws that are very tricky to fix, and which became public before a first line of defence could be put in place.

The attacks described by the authors of the Meltdown and Spectre papers are less troublesome than some of the others that have made headlines in the past few years. They leak information rather than provide ways of letting a hacker take control of a system. Both rely on the attacker already being able to load software onto a computer, typically using a Trojan horse program.

Although Meltdown and Spectre use novel techniques, they fit into a much broader class of side-channel attacks that let hackers eavesdrop on the processing performed by supposedly protected code. A decade ago, researchers from the Weizmann Institute of Science in Israel found a security loophole in the caches used to hide the huge discrepancy between the speed of processors and the memory they use to store large quantities of data. Their exploit, called Prime+Probe, made it possible for software to spy on other applications running alongside it, even in situations where the processor and operating system were meant to isolate them.

The hack works by having the spyware fill cache lines with garbage and repeatedly poke at the memory subsystem to work out when other software makes changes. Subtle timing differences reveal how the target application is making decisions about the data it processes. Since then a game of cat and mouse has ensued, with attacks and countermeasures appearing that take advantage of the many novel instructions processor makers such as Intel have added to their designs.

The TSX extensions for transactional memory, for example, have been proposed as mechanisms for channelling secret data to spyware as well as protecting against attacks. Transactional memory was invented to make programs that run across multiple cores more efficient by letting them run speculative operations safely. That speculative execution means the programs don’t have to keep stopping to check everything is in sync. Speculative execution plays a key role in the Meltdown and Spectre attacks.

When they generate code, compilers take a best guess at instruction order but have to err on the safe side. Processors can take more risks with instruction ordering at execution time because, if things don’t work out as planned, they can roll back or cancel instructions that should not have run. Typically, this provides a solid speed-up around the branches in code that would otherwise stall execution.

Meltdown, specifically, uses the side effects of speculative execution – such as data being left inadvertently in caches – and harnesses TSX and similar mechanisms to access the data. Spectre is a more wide-ranging set of attacks that attempt to exploit other side effects of speculative execution. Now, operating system writers and processor makers are working out ways to plug the holes that the push for performance has punched in security. At the same time, the processor designers are trying to come up with ever more exotic ways to overcome the performance limitations of memory and multicore operation.

The Spectre paper authors point to AMD’s claims for its Ryzen processors, which apparently sport “an artificial intelligence neural network that learns to predict what future pathway an application will take based on past runs”. Good luck assessing the intricate security implications of something that changes behaviour as it runs.

The authors argue: “Long-term solutions will require that instruction set architectures be updated to include clear guidance about the security properties of the processor and CPU implementations will need to be updated to conform.”

Although this will probably happen, it might not make that much difference. The practical long-term solution seems more likely to be a better separation of what is secure from what can be considered unsafe, taking advantage of the economics of silicon. Transistors are cheap and, for some time to come, will get cheaper. Moving secure operations into parallel, firewalled processors provides the best option for guaranteeing security properties that do not need years of analysis to create. They can use their own private caches or local memories to do their work away from prying spyware. The tools that can formally verify the protections are already available.

Understanding which operations need to be moved behind the security barrier and which can be allowed to be visible becomes a job for the software architects. It has been done in industries such as mobile telephony, which are now used to the idea of the SIM. As such changes involve rethinking the architecture of operating systems and other support software, they are not quick fixes. So there will be pressure on the processor designers to compile reams of text on the security properties of their creations in the shorter term. However, trying to continually fix the leakage problems of features intended to keep software running fast is ultimately doomed to failure. The only practical solution is true hardware-enforced and software-managed separation.

Read more about the design flaws in Intel chip architecture in the original E&T news story, published earlier this month.
