Driverless car hacking: how AI ‘nannies’ will stop cyber criminals from taking control

Driverless cars are touted as a reinvention of travel that will change the way people think about journeys and interact with vehicles.

With artificial intelligence (AI) replacing the traditional human driver, concerns are growing that the vehicles could be exploited through malicious attacks and turned into weapons.

If a cyber criminal managed to seize control of a driverless vehicle or fleet of vehicles by hacking the internal software, they could cause untold damage and even deaths.

Nvidia recently announced its Drive PX Pegasus chip, a miniaturised computer brain designed to be commoditised, mass-produced and used by carmakers to bring driverless capabilities to their ranges. Although Drive PX was originally released in early 2015, the Pegasus iteration is the first that can take full control of a vehicle without any human input.

Danny Shapiro, Nvidia’s senior director of automotive, explained how the company is taking concerted steps to keep hackers out of its systems.

“People have heard of cars being hacked over the last couple of years,” he said. “Many carmakers tried to make connected cars without actually securing the rest of the system inside and so it was very easy to access the vehicle.

“We started from scratch by building a computer that is designed for AI and has strong cyber security capabilities. We have encrypted data built in, and mandatory authentication where all software installed on the system has to be signed.
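Mandatory authentication of this kind amounts to a simple gate: no software runs or installs unless its cryptographic signature verifies against a key the platform already trusts. The sketch below illustrates the idea in Python with an Ed25519 key; the key, function names and image format are hypothetical, not a description of Nvidia’s actual implementation.

```python
# Minimal sketch of mandatory code signing: an update image is installed
# only if its signature verifies against a public key the system trusts.
# The key below is an RFC 8032 test vector, used purely as a placeholder.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# In a real system this key would be burned into secure hardware at the factory.
VENDOR_PUBLIC_KEY = Ed25519PublicKey.from_public_bytes(bytes.fromhex(
    "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
))

def install_if_signed(image: bytes, signature: bytes) -> bool:
    """Refuse to install any software image whose signature does not verify."""
    try:
        VENDOR_PUBLIC_KEY.verify(signature, image)
    except InvalidSignature:
        return False  # unsigned or tampered image: reject outright
    # ...write the verified image to flash...
    return True
```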

“We use other techniques like virtualisation, so we can create firewalls and separate any kind of mission-critical components from other components. So if you had the ability to download apps from the Android app store, for example, those are not even able to access the driving-critical portion of the software.”
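One way to picture that separation is a partition whose only handle on the vehicle exposes no driving-critical calls at all, so even a fully compromised app has nothing dangerous to invoke. The toy sketch below uses invented names; in a real car the boundary is enforced by a hypervisor, not by application code.

```python
# Illustrative sketch of privilege separation between partitions.
# Names are invented; real systems enforce this boundary in hardware.

class DriveDomain:
    """Safety-critical partition: full vehicle control."""
    def steer(self, angle_deg: float) -> None:
        print(f"steering to {angle_deg} degrees")

    def read_speed(self) -> float:
        return 88.0  # km/h, stubbed sensor value

class InfotainmentFacade:
    """The only handle the app partition ever receives: read-only
    telemetry, no control methods, so a rogue app has nothing to call."""
    def __init__(self, drive: DriveDomain) -> None:
        self._speed = drive.read_speed  # expose telemetry only

    def current_speed(self) -> float:
        return self._speed()

drive = DriveDomain()
apps = InfotainmentFacade(drive)  # downloaded apps see only this object
print(apps.current_speed())       # allowed: read-only telemetry
# apps has no steer() method at all: the critical surface is simply absent.
```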

Overseeing all these systems is an AI ‘nanny’ that constantly monitors how the software interacts with the vehicle. If it sees that something is awry, it will attempt either to prevent the unwarranted action from taking place or to bring the car to a standstill without causing an accident.

“It can analyse any kind of traffic over the bus,” Shapiro said. “For example, if the tyre pressure monitoring system is trying to do a software update, we know that is not correct behaviour, and so we can sort that out before it has any kind of effect on the vehicle.”
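The rule Shapiro describes can be thought of as an allow-list over the in-vehicle bus: each electronic control unit (ECU) may only send the message classes it legitimately needs, and anything else, such as a tyre-pressure sensor attempting a firmware update, is flagged. The sketch below is illustrative only; the ECU names and message types are invented, and production systems work on raw CAN frames rather than friendly strings.

```python
# Sketch of allow-list monitoring of in-vehicle bus traffic. ECU names and
# message classes are hypothetical; real monitors key on CAN arbitration IDs.

ALLOWED = {
    ("tpms",      "pressure_report"),
    ("brake_ecu", "wheel_speed"),
    ("gateway",   "ota_update_request"),  # only the gateway may start updates
}

def inspect(sender: str, msg_type: str) -> bool:
    """Return True if the frame matches expected behaviour; flag anything else."""
    if (sender, msg_type) in ALLOWED:
        return True
    print(f"ALERT: unexpected {msg_type!r} from {sender!r}: dropping frame")
    return False

inspect("tpms", "pressure_report")     # normal traffic: passes
inspect("tpms", "ota_update_request")  # a tyre sensor pushing firmware: blocked
```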

Pegasus is capable of processing 320 trillion operations per second and will run multiple software algorithms simultaneously, some of them analysing the traffic on the vehicle’s internal network.

Throughout computing history, no system has ever been deemed to be totally hack-proof. The war between hackers and the developers of operating systems is ongoing, and regular security patches are a necessity regardless of whether a device is running Windows, Linux, Android or iOS.

When this was put to Shapiro, he admitted that no system is perfect, but said that Pegasus’s over-the-air software updates should mean any holes found in its security can be closed very quickly.

“We have a lot of backup systems and backups for backups, so if something does happen, regardless of whether it’s a failure of the system of some kind or some form of hacking, there are multiple systems that will take over if an issue is detected.

“Maybe the full driverless functionality won’t be available but it will retain enough function to be able to safely pull the car over. That’s something that is in the legislated safety standards to ensure that these systems are extremely robust.”
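That graceful degradation can be pictured as a small state machine: on a detected fault the car abandons full autonomy and executes a minimal-risk manoeuvre, pulling over and stopping. Below is a minimal sketch, with invented states and health checks, rather than a description of any production system.

```python
# Tiny state machine illustrating graceful degradation: on a detected fault,
# full autonomy is abandoned in favour of a minimal-risk pull-over manoeuvre.
from enum import Enum, auto

class Mode(Enum):
    FULL_AUTONOMY = auto()
    MINIMAL_RISK = auto()   # limp mode: just pull over and stop safely
    STOPPED = auto()

def step(mode: Mode, primary_healthy: bool, at_kerb: bool) -> Mode:
    if mode is Mode.FULL_AUTONOMY and not primary_healthy:
        return Mode.MINIMAL_RISK  # backup system takes over
    if mode is Mode.MINIMAL_RISK and at_kerb:
        return Mode.STOPPED       # safely parked, mission over
    return mode

mode = Mode.FULL_AUTONOMY
mode = step(mode, primary_healthy=False, at_kerb=False)  # fault detected
mode = step(mode, primary_healthy=False, at_kerb=True)   # pulled over
print(mode)  # Mode.STOPPED
```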

In the unlikely event that all these systems and safety checks fail or a critical error is encountered, then a crash could still occur.

Driverless cars are predicted to eventually cut accidents by 90 per cent, but with millions of vehicles expected to hit the roads as the technology matures, who will be held responsible for the inevitable crashes that do occur and are deemed the fault of the AI?

Volvo has already publicly stated that it will bear responsibility for any mishaps involving its first foray into autonomous vehicles, “Drive Me”, which incidentally also uses Nvidia’s hardware.

The pilot project is already underway with cars autonomously driving along certain roads in Sweden.

Shapiro agreed with this approach: “It makes sense, right? The passenger shouldn’t be held responsible. If you get into a taxi you’re not held responsible, it’s the taxi company; so in this case it’s the car company that will be.

“We obviously have to be responsible for our system, but we have arrangements with the car companies that if anything ultimately goes wrong with the car, they’re the ones who are accepting responsibility.”

For the first time, AI will be responsible for answering ethical questions of a kind that even humans struggle to answer.

For example, should the driverless system swerve into a wall, potentially killing its lone passenger, in order to prevent the deaths of some children playing in the middle of the road?

Shapiro was reluctant to talk about specific scenarios, but argued that although the system has no ethical consciousness, many lives will ultimately be saved once the human error factor is removed.

“We’re not programming ethics, there’s no way you can expect a computer to make a decision about ethics that a human couldn’t even make or ever have to make,” he said.

“The reality is that most accidents come about because of human error, and most cases involve some form of driver distraction or bad judgement: people not timing the light correctly or misjudging how long it takes to stop.

“What’s happened lately is that all these accident and fatality rates are going up because of people on their mobile phones, people looking down into their laps. So when you remove distracted driving, aggressive driving, speeding, drowsiness and drinking, the risk factor will be greatly reduced.

“These cars are monitoring a full 360 degrees and classifying everything that is moving around them. So when it sees a kid with a ball or a woman pushing a stroller, it’s going to be tracking that and aware of it well in advance, and will have already slowed down and changed lanes. So that ethical dilemma won’t even arise, because the car will have plenty of time to stop.

“Planes still crash. I don’t think there’s a 100 per cent guarantee of anything in the world, and there are the laws of physics: something could fall right out of the sky and there is no way that the car could stop, and so it will hit something.

“I think what we’re going to see though is that the vast majority of accidents will be eliminated with this technology and as a society hopefully we recognise the benefits of the millions of lives that we can save every year even if we can’t save every single one.”
