Image: Can your server keep a secret? (Credit: Getty Images)

If you’re worried about cloud computers leaking data, help is on the way - but you might not like the performance.

The designers of silicon chips have a lot to worry about. It can take months for a design to make it through the fab and, if things go wrong, repairing any problems in the circuitry is a hugely expensive endeavour, not least because it costs millions just to make the intricate masks that print the design onto the silicon wafer. It is at the tape-out point that computing infrastructure becomes the crucial bottleneck.

At the Design Automation Conference last year, AMD silicon design engineer James Robinson described how it can take 24 hours or more to run the vital checks on a microprocessor design that ensure the silicon returning from the fab weeks later doesn’t turn up dead on arrival. That can add up to a lot of unwanted delay if the software finds problems that need to be fixed and it takes several attempts to get them right. Moving these huge compute jobs to cloud arrays of up to 4,000 cores meant the design team could “run that overnight instead. This has made a real difference to turnaround times for us.”

AMD is, compared with many of its peers, an early adopter for this kind of application. Many other companies in the sector, fearing what would happen if their data were intercepted by a competitor, have resisted the shift to cloud computing until the case for using it becomes too compelling to ignore. In previous booms of remote computing, such as the grid computing of the mid-2000s, the chip designers stayed out of the picture even though their tools suppliers were happy to set up the servers. The change at companies like AMD is driven by necessity more than anything else. “When you are in a tapeout crunch, where do you get 4,000 cores?” Robinson asks. The cloud is the only realistic answer.

Even though cloud suppliers are reporting booming profits, numerous surveys demonstrate how uneasy customers remain with the idea. Cyber Security Insiders claimed in 2019 that 93 per cent of the organisations it asked were moderately to extremely concerned about cloud security. A survey across the US and five European countries on behalf of property company Savoy Stewart found tech companies on average to be less distrustful of cloud computing than those in finance, healthcare and insurance, where privacy legislation tends to be stiffer. Some 85 per cent of those surveyed in finance said they distrusted cloud computing, with the primary reason being the risk of data leaks.

The one protection that the chip designers have is that it is hard to exfiltrate the terabyte-size design databases they upload for their pre-tapeout checks, and they are quick to remove the remote data once the job is finished. For smaller databases, or those that sit in place for long periods, cloud users will now often keep data “at rest” in encrypted form to limit the potential for hackers to break into a machine’s storage and obtain sensitive information.
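A minimal sketch of that at-rest pattern, using the widely used Python ‘cryptography’ package; the data and key handling here are illustrative assumptions rather than a description of any particular cloud service:

```python
# Minimal sketch of encrypting data "at rest" before it is written to cloud storage.
# Uses the third-party 'cryptography' package; names and values are illustrative only.
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

design_data = b"sensitive design database contents"
encrypted = cipher.encrypt(design_data)       # what actually gets stored remotely

# An attacker reading the storage sees only ciphertext; the key holder recovers the data.
assert cipher.decrypt(encrypted) == design_data
```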

In principle, all a hacker gets is an encrypted bundle of data that they may never crack open unless they are lucky enough to uncover the decryption key. This assumes that cyber criminals are not simply spying on the software running on the server and picking up information while it is being processed in its unencrypted state. The Meltdown and Spectre proof-of-concept attacks demonstrate how programs can leak data to other software. Some hacks can be even more direct: planting Trojan-horse devices in the hardware itself, although few real-world cases have been demonstrated publicly.

In the closely controlled world of the cloud server, contained in racks behind locked doors in guarded buildings, low-level software and hardware hacks are difficult to pull off. Even if a hack works, exfiltrating enough data to make the theft worthwhile can easily ring alarm bells. But this may change with a growing trend to push server capacity to the edge of the network.

Arm last year kicked off a major R&D programme into edge computing that it calls Project Cassini, aimed at developing technology for smaller servers that sit at the edge of the internet. Rob Dimond, system architect and fellow at Arm, says: “With the volume of data that’s going to be generated by IoT devices and the volume of analytics that will need, access points and other systems at the infrastructure edge will need to evolve. It needs to become a first-order compute platform. We estimate these areas as providing a significant silicon opportunity for our ecosystem, so we want to enable that.”

Those edge systems may well end up in green boxes by the side of the road or in utility cupboards in office basements, where it is much easier for a hacker to introduce their own surveillance or exfiltration devices to pick up sensitive data during processing.

What if you do not need to decrypt the data at all in order to work on it? Anything the hacker exfiltrates would still have to be cracked by brute force. Secure in-place processing is the promise of homomorphic encryption, an idea first proposed in 1978, though at that time it was no more than an intriguing theoretical curiosity in cryptology.

If you try to perform arithmetic on data encrypted using conventional algorithms, the result will simply be gibberish. Do the same to data encrypted homomorphically and you will get the answer you expected once you apply the decryption key. But no observer can know what you sent to the server, what you received or any of the intermediate answers unless they have the same key or can exploit a flaw in the algorithm.
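A minimal sketch of the principle uses the Paillier scheme of 1999, which is only additively homomorphic rather than fully homomorphic but shows the key property: arithmetic performed on ciphertexts survives decryption. The tiny primes here are purely for illustration and would be hopelessly insecure in practice:

```python
# Toy Paillier cryptosystem: additively homomorphic, so the product of two
# ciphertexts decrypts to the sum of the plaintexts. Illustration only - the
# primes are far too small to be secure.
from math import gcd
import secrets

p, q = 1789, 1861                  # toy primes; real keys use primes of 1024+ bits
n = p * q
n2 = n * n
g = n + 1                          # standard simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # modular inverse (Python 3.8+)

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1            # random blinding value in [1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 1234, 4321
ca, cb = encrypt(a), encrypt(b)
# Multiplying the ciphertexts adds the underlying plaintexts.
assert decrypt((ca * cb) % n2) == a + b
print(decrypt((ca * cb) % n2))                  # -> 5555
```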

It was more than 30 years before the concept became a practical reality, thanks to a PhD thesis written by IBM researcher Craig Gentry while at Stanford University. Professor Daniele Micciancio of the University of California at San Diego says Gentry’s work propelled the concept to become one of the most popular areas in cryptographic research over the past decade. “Although [Gentry’s] solution was not ideal, it achieved a major change in attitudes. He convinced the community there was a solution. The solution turned out to be very influential in itself,” he remarked at the 2019 Eurocrypt conference, pointing to the fact that more or less all of the proposals so far are based on the same core construction.

Micciancio likens the core construction of homomorphic encryption conceived by Gentry to the pantographs that let an artist enlarge a sketch into a much larger drawing. The pantograph’s coupled levers translate each small movement into a much larger one. Although they are not vital to homomorphic encryption, lattices offer a way of achieving the same kind of projection effect for arithmetic operations such as addition and multiplication. In mathematics, a lattice arranges points into a regular multidimensional grid. By forcing operations to land only on points within the lattice, homomorphic encryption ensures that calculations do not turn the result into gibberish, although that remains an ever-present risk.

One advantage of using lattice-based systems in encryption is that, at least for the moment, researchers believe them to be much safer from attack by quantum computers than many of today’s protocols. But homomorphic encryption does not come without a significant cost. To make it possible to do more or less anything to data while it’s encrypted, you pretty much have to recreate an entire computer in software.

A common technique for full homomorphic encryption is to break the data down into binary values and work on each bit separately. One example is NuFHE, published on GitHub as a proof of concept by start-up NuCypher. It implements a set of logic gates, similar to those used in hardware processors, that work on the encrypted data and run inside the library’s virtual machine.

Because the compute function needs to be built from basic logic gates, and the number systems that represent each encrypted bit are far more complex than plain binary, the workload grows by orders of magnitude. A megabyte of data can easily swell to many gigabytes once encrypted in a typical fully homomorphic library. However, the encoded data can be highly amenable to compression, so that only relatively small parts need to be expanded for processing before being recompressed for storage.
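The sketch below gives a feel for why the workload balloons. It builds an 8-bit adder purely from NAND gates, the way a bit-level library composes its gates; here the gate operates on plain bits simply so the operations can be counted, whereas in a library such as NuFHE every one of those gate evaluations is a costly operation on ciphertexts:

```python
# Counting how many gate evaluations an ordinary 8-bit addition needs when it is
# built only from NAND gates, as a bit-level homomorphic scheme would build it.
# The gate works on plain bits here just to make the counting runnable; in a real
# library every call would be a slow operation on large ciphertexts.
gate_count = 0

def nand(a, b):
    global gate_count
    gate_count += 1
    return 1 - (a & b)

def xor(a, b):
    # Standard four-NAND construction of exclusive-or.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, cin):
    # Sum and carry-out of three input bits, built from the gates above.
    s1 = xor(a, b)
    s = xor(s1, cin)
    cout = nand(nand(a, b), nand(s1, cin))
    return s, cout

def add8(x, y):
    carry, result = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add8(100, 55) == 155
print(gate_count)   # dozens of gate evaluations for one small addition
```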

A further issue with so-called bootstrap processing is that some protocols risk introducing too much noise over a long string of operations to provide a useful result at the end. However, safe bootstrapping, a more recent development, promises to avoid that problem.

Hardware acceleration is one avenue for improving the overall performance of the technology. As well as start-ups such as NuCypher, IBM and Microsoft have developed R&D-grade libraries for homomorphic encryption. Early systems such as Gentry’s were some 12 orders of magnitude slower than regular processing, but there are options for clawing back that performance.

A lot depends on how users structure their algorithms and how those algorithms are run on the underlying hardware. “Homomorphic encryption is massively parallelisable,” claimed Microsoft researcher Kim Laine in a seminar on the technology organised by the company last year. “If your computation is highly parallelisable, it’s probably a good match. If not, we’ll probably want to restructure the application. But it makes it hard to answer the question ‘how long does it take to do something?’.”

Thanks to techniques that exploit the parallelism of the arithmetic accelerators now found in general-purpose processors and graphics processing units (GPUs), Laine says the homomorphic slowdown is now closer to three orders of magnitude.

According to Laine, Microsoft is now looking at the potential for hardware acceleration. There are acceleration techniques based around the Fourier transform, already used in libraries such as NuCypher’s, that map well onto GPUs. Baking those algorithms into hardware can provide a further speed boost because dedicated circuits can support the modulo arithmetic these algorithms need directly; a general-purpose processor typically needs several operations to perform a calculation and then find the modulo result. However, with homomorphic encryption being at such an early stage and still a moving target, it is not yet clear what the best hardware architecture will be. As with acceleration for machine learning, the cost of developing accelerators may be too much to bear unless the technology takes off. In the short term, accelerators are more likely to be based on field-programmable gate arrays, such as those made by Intel and Xilinx.
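To see where all that modulo arithmetic comes from, the sketch below multiplies two polynomials in the kind of ring used by lattice-based schemes, where coefficients are reduced modulo a prime q and the polynomial is reduced modulo x^n + 1. The parameters are toy values chosen for illustration, and this is the schoolbook method that Fourier-style transforms accelerate, not any particular library’s implementation:

```python
# Schoolbook multiplication in the polynomial ring Z_q[x] / (x^n + 1) that
# underpins most lattice-based homomorphic schemes. Every coefficient update
# needs a multiply, an add or subtract, and a modulo reduction - the workload
# that GPU- and FPGA-friendly Fourier-style transforms are used to accelerate.
# Toy parameters: real schemes use n in the thousands and a much larger q.
n, q = 8, 7681

def ring_mul(a, b):
    """Multiply two degree-<n polynomials modulo (x^n + 1) and modulo q."""
    out = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            term = a[i] * b[j]
            if k >= n:                  # x^n wraps around to -1 (negacyclic)
                out[k - n] = (out[k - n] - term) % q
            else:
                out[k] = (out[k] + term) % q
    return out

a = [1, 2, 3, 4, 5, 6, 7, 0]
b = [3, 1, 4, 1, 5, 9, 2, 6]
print(ring_mul(a, b))    # n*n multiply-and-reduce steps for one ring product
```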

Another way to deal with the processing overhead is to sacrifice flexibility. ‘Levelled’ processing tends to be faster than bootstrapping but is far more restricted. For example, you might be able to add and multiply numbers but have no way to compare them, a vital operation for deciding what a program should do next or for implementing loops. The inability to perform what are generally considered fundamental elements of compute may not be a showstopper if you can build hybrid applications, which keep some operations hidden but do much of the housekeeping more openly.

Researchers such as Professor Berk Sunar of Worcester Polytechnic Institute see blockchain as a potential application area for homomorphic encryption. Bitcoin and other popular blockchains not only call for processing to be performed on completely untrusted machines but for all of it to be done in the open, which is a problem for some organisations. Although some people see blockchains as useful for cataloguing health records, the tight privacy legislation around the sector makes a regular blockchain implementation almost inconceivable. If the data can be encrypted, and even worked on in that form by authorised users, such a blockchain can guarantee that records have not been compromised without disclosing their content.

Driven by regulations such as the recently tightened EU anti-money-laundering directive, financial institutions have begun working with another limited form of homomorphic encryption in which, rather than performing calculations on encrypted data, the encoding is used to hide information. They see this as a way of sharing account data while protecting sensitive details such as customer names, which are converted into anonymous tags. All other information is handled normally.
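A minimal sketch of that tagging idea uses a keyed hash to turn names into consistent but opaque tokens, so records from different institutions can still be matched. This is a generic illustration of pseudonymisation under assumed key handling, not a description of any particular vendor’s system:

```python
# Turning customer names into anonymous but consistent tags with a keyed hash,
# so institutions can join records on the tag without seeing the name itself.
# Generic illustration only; the key would be managed by a trusted party.
import hmac, hashlib

SECRET_KEY = b"shared-or-escrowed-secret"    # illustrative placeholder

def tag(name: str) -> str:
    digest = hmac.new(SECRET_KEY, name.strip().lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:16]           # short, opaque, deterministic tag

# Two institutions tagging the same customer independently get the same token,
# so suspicious activity can be correlated without exposing the name.
record_bank_a = {"owner": tag("Alice Example"), "amount": 12500}
record_bank_b = {"owner": tag("alice example"), "amount": 9800}
assert record_bank_a["owner"] == record_bank_b["owner"]
print(record_bank_a["owner"])
```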

In a workshop organised by the UK’s Financial Conduct Authority (FCA) in 2019, a number of groups used forms of homomorphic encryption to cloak sensitive data: usually the information that would reveal the owner of each account. Citadel, the winning team, used a system developed by specialist Privitar to encode the ownership details in account data pulled from a number of institutions. To enable organisations to share data without any one of them having privileged access, Privitar’s system has an intermediary computer apply the encryption to data requested from a server by a client.

The overhead of homomorphic encryption, which even with hardware acceleration is unlikely to go away short of a further mathematical breakthrough, does not ease the concerns of the chipmakers: their databases are far too big to take advantage of the technology. But for the most sensitive operations, it does offer a way to overcome suspicions about just how untrustworthy remote computers might be.
