
Auto correct: what will it take to put autonomous vehicles on our roads?


We look at the various complementary technologies required for self-driving vehicles to achieve true autonomy.

Autonomous vehicles are advancing at a pace that is leaving even some industry veterans breathless. At CES 2017, Audi stated its ambition to have its first artificial intelligence (AI) car on the road by 2020, with its auto rivals and first-tier suppliers converging on 2021 as the likely date for the autonomous vehicle tipping point.

Speaking to E&T at CES, John Eggert, from leading lidar specialist Velodyne, said: “Most of the market has announced the aspirational goal of getting fully autonomous vehicles out there by 2020, ‘21, ‘22. The first movers will be people with the best business case – the robot taxis, the transportation on-demand people – where they can afford a more extensive sensor suite on their vehicles with the elimination of the driver. We suspect that you’ll see fully autonomous vehicles in a fairly ubiquitous way by 2020.”

Torsten Lehmann from NXP, one of the leading suppliers of semiconductors to the automotive industry, agreed, saying: “I’m convinced the technology is ready, but it will be a lot about solving some of the other pieces of the puzzle as well as bringing the right ecosystems and partner networks together, where really different players can bring different key competencies in.” 

At CES, there was a palpable sense of industry convergence and consensus. No one can afford to be out of this game. As TomTom’s Willem Strijbosch said: “Everybody is talking about autonomous driving. It’s going to happen even sooner than some people expect. If you dream X years away then nobody will need to do their own driving anymore, we won’t have parking places in the cities, we won’t have congestion anymore, we don’t lose time because we can work in the car, you name it, right? That’s why it’s so attractive, and that’s why so many people are focusing on it. The societal benefits are just enormous.” 

Sensors: complementarity is key

“Your car really needs to have enough eyes and ears so that you can replace the driver in a safe manner,” says NXP’s Lehmann. “That means not only forward-looking sensors, but really 360 degrees, a complete cocoon around the car, be it with radar sensors, with cameras, with lidar sensors and so on, and then the respective processing horsepower, the software and the algorithms to do proper fusion and proper assessment. My impression is that most people are converging that you need this 360-degree cocoon. Of course one carmaker might do it with six radar sensors and another with eight and then a little different software and so on, so there will be different flavours, but I think it’s heading that way.”
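The idea of a 360-degree sensor “cocoon” can be made concrete with a toy coverage check. The sketch below is purely illustrative: the sensor names, fields of view and the six-sensor layout are assumptions, not any carmaker’s actual suite, and a real system would reason about range, occlusion and overlap rather than bearing alone.

from dataclasses import dataclass

@dataclass
class Sensor:
    """One sensor in the 360-degree 'cocoon' (names and values are illustrative)."""
    name: str
    kind: str            # "radar", "camera" or "lidar"
    azimuth_deg: float   # centre of the field of view, 0 = straight ahead
    fov_deg: float       # horizontal field of view

def covered_degrees(sensors, step=1.0):
    """Return the set of 1-degree bearings seen by at least one sensor."""
    covered = set()
    for s in sensors:
        half = s.fov_deg / 2
        a = 0.0
        while a < 360.0:
            # signed angular distance between bearing a and the sensor's boresight
            diff = (a - s.azimuth_deg + 180) % 360 - 180
            if abs(diff) <= half:
                covered.add(round(a))
            a += step
    return covered

# A hypothetical six-sensor layout; real OEM suites will differ.
suite = [
    Sensor("front_radar", "radar", 0, 90),
    Sensor("rear_radar", "radar", 180, 90),
    Sensor("left_radar", "radar", 90, 120),
    Sensor("right_radar", "radar", 270, 120),
    Sensor("roof_lidar_a", "lidar", 0, 180),
    Sensor("roof_lidar_b", "lidar", 180, 180),
]

gaps = set(range(360)) - covered_degrees(suite)
print("blind bearings:", sorted(gaps) or "none, full 360-degree cocoon")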

Cameras: eyes on the road, hands off the wheel

Cameras were one of the first technologies adopted for autonomous vehicles, for obvious reasons, as Gergely Debreczeni, from Hungarian AI software solution developer AImotive, explains: “We are focusing on camera-based self-driving solutions because the complete traffic system today has been built using visual cues. Just like humans, the cameras can perceive visual information and the camera data delivers the most intelligence. Where lighting conditions or visibility are not good enough we will rely on our secondary sensors, like radars, ultrasonic sensors or lidars. We can combine the data of all sensors in order to have a redundant and safe perception of the world and environment around us.”
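As a very rough illustration of falling back on secondary sensors when the cameras are struggling, the toy function below blends a camera range estimate with a radar one according to a visibility score. The weights, numbers and the idea of fusing single scalar ranges are simplifying assumptions; a production fusion pipeline such as AImotive’s works on full object tracks, not scalars.

def fuse_range_estimate(camera_m, radar_m, visibility):
    """Blend a camera range estimate with a radar one, trusting the camera
    less as visibility (0.0 = fog/darkness, 1.0 = clear daylight) drops.
    Purely illustrative."""
    camera_weight = visibility
    radar_weight = 1.0 - visibility * 0.5   # radar is largely unaffected by light
    total = camera_weight + radar_weight
    return (camera_m * camera_weight + radar_m * radar_weight) / total

print(fuse_range_estimate(camera_m=42.0, radar_m=45.0, visibility=0.2))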

David Sequino, from security certification specialist Green Hills Software, reiterates the need for multiple, complementary sensing technologies, as relying solely on one type is clearly unsafe: “The sad, unfortunate event of [last year’s] Tesla crash was disappointing, but it had a camera and that’s it. If the tractor-trailer and the car both had a V2V (vehicle-to-vehicle communications) box, that would have never happened. It’s going to require multiple technologies converging to a central processing unit to decide what the algorithm is to make the decision.”
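To give a flavour of what a V2V “here I am” broadcast might contain, here is a minimal sketch loosely modelled on the kind of fields a basic safety message carries. The field names, units and JSON encoding are illustrative assumptions; real V2V stacks use dedicated radios and standardised binary encodings rather than anything like this.

from dataclasses import dataclass, asdict
import json, time

@dataclass
class BasicSafetyMessage:
    """Illustrative V2V broadcast; not the real over-the-air wire format."""
    vehicle_id: str
    timestamp: float      # seconds since epoch
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    braking: bool

def encode(msg: BasicSafetyMessage) -> bytes:
    # Real deployments use DSRC/C-V2X radios and ASN.1 encoding, not JSON.
    return json.dumps(asdict(msg)).encode()

bsm = BasicSafetyMessage("truck-17", time.time(), 37.42, -122.08, 13.4, 92.0, False)
print(encode(bsm))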

Radar: multiple layers

At CES 2017, Wayne Williams explained the complement of technologies on Ford’s autonomous vehicle: “This is the car that we’re using to develop the algorithms and sensors. We’ve taken the next-generation lidar from Velodyne and reduced the number to two. We’ve also added racks with six cameras: two that give us stereo out the front, two giving stereo out the back and then one to each side. We’ve also got six radar systems mounted around it. That gives us 360-degree coverage from all three sensors. In order to make a car drive by itself, we think of there being four different layers. We’ve got the sensing layer – the radar, the camera, and the lidar. All that raw data comes in to what we call the perception layer. That feeds in to the decision-making layer. Finally, there’s the actuation layer.”
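Ford’s four-layer split maps naturally onto a simple software pipeline. The sketch below stubs out every stage with placeholder data and a deliberately trivial braking rule, just to show how sensing, perception, decision-making and actuation hand off to one another; none of it reflects Ford’s actual code.

# A minimal sketch of the four-layer split: sensing -> perception
# -> decision-making -> actuation. Every function body is a placeholder.

def sensing():
    """Collect raw frames from radar, cameras and lidar (stubbed)."""
    return {"radar": [], "camera": [], "lidar": []}

def perception(raw):
    """Fuse raw data into a list of tracked objects around the vehicle (stubbed)."""
    return [{"type": "vehicle", "range_m": 30.0, "bearing_deg": 5.0}]

def decide(objects):
    """Pick a manoeuvre given the perceived scene (toy policy)."""
    if any(o["range_m"] < 10.0 for o in objects):
        return {"brake": 1.0, "steer_deg": 0.0}
    return {"brake": 0.0, "steer_deg": 0.0}

def actuate(command):
    """Send the command to the brake and steering actuators (stubbed)."""
    print("actuating:", command)

actuate(decide(perception(sensing())))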

Lidar: around the next corner

Lidar is possibly the sensor technology that holds the key to releasing the potential of autonomous vehicles, as it offers beyond-line-of-sight data (e.g. around the next corner), enabling the autonomous vehicle to be aware of events that a human driver cannot yet see. Velodyne’s lidar sensors are used by the majority of major automotive manufacturers, as the company’s John Eggert noted at CES: “I think that no one is seriously talking about level four [autonomous] driving without lidar. Often you hear about the three sensing modalities. Traditionally, it’s been camera and radar and now lidar is added to that. I think all three sensing modalities, plus a high-definition map (that’s almost the fourth sensing modality) are absolutely vital to make a vehicle drive by itself.”

Software: code on the road

As our cars become just more objects connected to the internet, they will face the same smart-device and IoT security issues, as Green Hills Software’s Sequino explains: “As we move towards autonomous vehicles, we’re adding more and more lines of code, so we need to do a much better job in testing our software, delivering a secure layer, issuing certificates to every vehicle and signing every piece of software that runs on a car.”
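Signing “every piece of software that runs on a car” boils down to the vehicle refusing to install anything whose signature does not verify against a trusted public key. Below is a minimal sketch of that idea using the third-party Python ‘cryptography’ package and an Ed25519 key pair; the key handling, firmware bytes and function names are illustrative assumptions, not Green Hills Software’s product.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# OEM side (offline): generate a signing key and sign the firmware image.
signing_key = ed25519.Ed25519PrivateKey.generate()
firmware_image = b"...ECU firmware bytes..."
signature = signing_key.sign(firmware_image)

# Vehicle side: only the public key ships in the car.
public_key = signing_key.public_key()

def install_if_valid(image: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, image)   # raises InvalidSignature on tampering
        return True                     # safe to flash
    except InvalidSignature:
        return False                    # reject the update

print(install_if_valid(firmware_image, signature))         # True
print(install_if_valid(firmware_image + b"x", signature))  # False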

NXP’s Lehmann agrees: “A very important element is that we make the car hacking-proof, so you don’t have any attackable surfaces from the outside, which is really important once you’re connected with the cloud, once cars are connected with other cars and connected with the infrastructure.”

CPU: the brain of the truly intelligent car

With potentially terabytes of data pouring into the central processing unit (CPU) every minute, the brain of the autonomous car is going to have to make decisions quickly and decisively. As NXP’s Lehmann says: “It’s always about sensing, thinking and acting. Sensing your environment, with all the different sensor inputs, and then processing all data you have collected. Also, do sanity checks, make sure that you always have a certain redundancy in your sensor inputs, and fuse all that data in the processor domain. Then act upon what the car is sensing and seeing around you.”
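The “sanity checks” and redundancy Lehmann mentions can be as simple as cross-checking two independent sensors before trusting a fused value. The toy example below compares a lidar and a radar range estimate and falls back to the more cautious reading when they disagree; the tolerance and fallback policy are made-up assumptions for illustration.

def cross_check(lidar_range_m, radar_range_m, tolerance_m=2.0):
    """Return (fused_range, healthy). All numbers are illustrative."""
    healthy = abs(lidar_range_m - radar_range_m) <= tolerance_m
    if healthy:
        fused = (lidar_range_m + radar_range_m) / 2   # sensors agree: average them
    else:
        fused = min(lidar_range_m, radar_range_m)     # disagree: assume the closer obstacle
    return fused, healthy

print(cross_check(29.8, 30.3))   # agreement -> fuse
print(cross_check(29.8, 55.0))   # disagreement -> cautious fallback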

How the autonomous car reacts will largely be down to its education. There are two principal schools of thought: programming or AI. At CES 2017, Nvidia announced its partnership with Audi to pursue the latter. “We’re going to put an AI car on the road by 2020,” says Nvidia’s Bea Longworth. “This car hasn’t been programmed to do anything. It’s learned to drive from a human, from training data provided by human drivers, [through] ‘deep learning’. In the car [an Audi Q7] we have our Drive PX2, which is the brain, and layered on top of that are the deep neural networks. We believe the solution will be a combination of many neural networks, each of which will be trained and responsible for a part of driving, such as pedestrian detection, lane detection or collision avoidance.”
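The “combination of many neural networks” Nvidia describes amounts to several specialised models feeding a supervisory policy. In the sketch below each network is stubbed out as a plain Python function returning fixed values, purely to show how per-task outputs (pedestrian detection, lane keeping, collision avoidance) might be combined; the thresholds and outputs are invented and bear no relation to the software running on Drive PX2.

# Stand-ins for trained, task-specific deep neural networks.
def pedestrian_net(frame):
    return {"pedestrian_ahead": False}

def lane_net(frame):
    return {"lane_offset_m": 0.1}

def collision_net(frame):
    return {"time_to_collision_s": 8.5}

def drive(frame):
    """Combine the per-task outputs into a single driving command (toy policy)."""
    p, l, c = pedestrian_net(frame), lane_net(frame), collision_net(frame)
    if p["pedestrian_ahead"] or c["time_to_collision_s"] < 2.0:
        return {"brake": 1.0, "steer_deg": 0.0}
    # otherwise steer gently back towards the lane centre
    return {"brake": 0.0, "steer_deg": -5.0 * l["lane_offset_m"]}

print(drive(frame=None))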

Controls: in-cabin experience

As the driver becomes just another idle, hands-off passenger, more attention will focus on “infotainment and user interfaces and user experience”, as NXP’s Torsten Lehmann describes it. How our cars will interact with and entertain us is “the other big growth area: anything to do with radio, audio, acoustic experience and active noise cancellation, things like that, as well as HMI graphics, user interfaces, all the typical instrument clusters disappearing and being replaced by high-resolution displays, fancy graphics, reconfigurable clusters and customisation.”

Maps: show me the way to go home

Mapping company TomTom is already developing next-generation digital maps. Willem Strijbosch explains: “Autonomous driving is about replacing the human driver with a robot. A human has senses, a brain and limbs. A robot driver will have sensors, a computer and actuators. We supply a large part of the brain, the HD map. Think of it as a drone flying over your car that has vision everywhere, in front of you and around the corner where your sensors can’t see.”
