Rinspeed’s ∑tos concept

Driverless cars: computer control takes the wheel

Image credit: Rinspeed

It’s likely to be a long time before drivers are redundant, but cars are taking on more and more tasks under computer control.

Imagine a car where the windscreen features both virtual and augmented reality technology, your seat feels more like an armchair and the steering wheel folds up to become a cup holder and a keyboard. Solar panels on the roof provide the power and you’ll never worry about fitting into that tight parking space, with the car’s ability to rotate on the spot embarrassing even the world-famous London black taxi.

Designed to challenge the way we think about travel and transport, the Rinspeed Oasis is a concept car that, in reality, conforms to many of our preconceived ideas of what a driverless car will be like: an advanced driver assistance system powered by artificial intelligence (AI) that continues to learn, accurate real-time HD maps and a computer powerful enough to process all of this while showing your favourite film. It’s feet up, not foot down, to reach your destination.

Unveiled at January’s CES show, the Oasis had to fight for space as some of the world’s biggest automobile manufacturers including Toyota, Nissan, BMW, Volvo, Honda and Hyundai used the show to unveil their strategies, plans and concepts for adopting at least some degree of automation within their future designs. While futuristic concept cars like the Rinspeed Oasis and Toyota’s Concept-i may have grabbed headlines as visions for the future, the reality is that automated cars are coming – and sooner than you think.

At the show BMW claimed that its partnership with Intel and Mobileye would see a ‘Level 3’ (see below) automated vehicle on the road by 2021. The vehicle, dubbed the iNext, faces a potential legal challenge from Apple over the name, though that is perhaps the least of BMW’s worries given the complexity of integrating automated driving technology into a brand new design. Still, with an estimated annual R&D budget of $5.5bn, BMW has made a bold move into a market that’s primed to explode.

As 2021 approaches, just when and where will we see driverless cars on the road?

“Most supposedly driverless cars aren’t actually driverless,” states Professor Oliver Carsten of the Institute for Transport Studies at the University of Leeds, who is an expert adviser to the European Transport Safety Council and the UK Government. There are various levels of automated driver assistance, he explains.

In 2014, trade body SAE International published a classification system for automated vehicles which has been widely accepted by the industry and regulators. The scale ranges from Level 0, where the vehicle has no control but may issue warnings, to Level 5, where the only human actions are setting the destination and starting the system.

To make it easier to understand, the UK’s Centre for Connected and Autonomous Vehicles describes levels 0–2 as Hands On Assisted Driving; Level 3 as Hands Off, Eyes On (the road); and Levels 4 and 5 as Hands Off, Eyes Off.
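As an illustration only, the two classifications can be lined up in a few lines of code (the level names below paraphrase the SAE and CCAV descriptions rather than quote either scheme verbatim):

```python
# Illustrative mapping of SAE automation levels (0-5) to the UK CCAV's
# plain-English groupings. Paraphrased for illustration, not an
# official enumeration of either scheme.
SAE_LEVELS = {
    0: "No automation: warnings only",
    1: "Driver assistance",
    2: "Partial automation",
    3: "Conditional automation",
    4: "High automation",
    5: "Full automation: human only sets destination",
}

def ccav_grouping(level: int) -> str:
    """Return the CCAV shorthand for a given SAE level."""
    if 0 <= level <= 2:
        return "Hands On Assisted Driving"
    if level == 3:
        return "Hands Off, Eyes On"
    if level in (4, 5):
        return "Hands Off, Eyes Off"
    raise ValueError(f"Unknown SAE level: {level}")

print(ccav_grouping(3))  # Hands Off, Eyes On
```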

Tesla may claim its cars will be capable of Level 5 automation by 2018, but Carsten points out that most manufacturers are focusing on automation at Levels 3 and 4, where human involvement and engagement are still necessary. “We should be talking about increasing levels of automation, rather than driverless cars,” he adds.

The reasons for concentrating on lower levels of automation are complex, with the technological challenges only some of those that need to be tackled. A lack of agreed standards, legislation and regulation are all potential stumbling blocks as well as more fundamental questions about how much power and responsibility we’re willing to give to a computer.

On the technology side, BMW’s senior engineer for automated driving and a key member of the team developing the iNext, Dirk Wisselmann, has an extensive list to cover that includes handling the enormous amounts of data processing, safety, the need for constant 5G connectivity and the task of engineering vehicles that are – by today’s standards – supercomputers on wheels.

However, Dan Galves of Israeli tech company Mobileye, whose driver assistance technology is used by over 25 automobile manufacturers, disagrees. He believes there are only three challenges to crack before we see automated cars: sensing, mapping and driving policy.

Packing the power of 150 MacBook Pros into a package the size of a lunchbox, Nvidia’s Drive PX2 may deliver 1.5 teraflops of processing power, but it’s still not enough to deal with the complexity of inner-city driving on its own. “There are simply too many variables to hand-code for every eventuality a car may encounter out on the roads,” says Philipp Graf, Nvidia’s senior director of automotive, EMEA.

To create an Advanced Driver Assistance System (ADAS) capable of safe driving in real-world conditions, the automotive world is turning to AI and a process called ‘deep learning’. Nvidia’s Drive PX2, running the company’s DriveWorks software, calibrates the car’s sensors and controls the acquisition of data about the surroundings, managing its synchronisation, recording and processing.

The information gathered is then fed to a supercomputer in the cloud, where it is used to train a deep neural network. Combined with HD maps, the vehicle can use this data – continuously updated with what other vehicles around the world have learned – to plan a safe and convenient course through obstacles.

In dealing with the complex and dynamic environment on the roads, the ability of AI technology to learn rapidly is widely considered to be the only solution to safe automated driving. Tesla announced in October 2016 that Nvidia’s Drive PX2 AI computing system was being fitted into all vehicles produced from that date. AI is already part of Volvo’s recently launched autonomous vehicle trial, which will involve 100 drivers using the cars on public roads in Gothenburg, Sweden. Ford is also using the technology, and Audi has claimed it will be the basis of the Level 4 automated vehicle the German giant hopes to launch in 2020.

While self-learning technologies like AI offer huge possibilities, Carsten is keen to point out we need to exercise caution. “Vehicles and manufacturers need to demonstrate that they have learned correctly,” he says. “They will inevitably need to hard-wire certain features like conforming to speed limits.”

To navigate our streets safely, the computers in automated vehicles rely on a detailed HD render of the environment far more precise than anything GPS can provide. Understandably, creating a dynamic 3D model of the road environment is not a simple task, as Mobileye’s Galves explains.

The company’s Road Experience Mapping (REM) system is already capable of generating an HD map by crowd-sourcing data from camera-equipped ADAS vehicles, but Galves claims it will take until at least 2020 to gather enough mapping data to create maps with the required level of detail. The process is already under way, with Mobileye partnering with General Motors and VW to harness their existing on-board cameras for data capture.

The REM system navigates by pinpointing what Mobileye describes as landmarks – buildings, traffic signs and even road markings. These can be used to identify the vehicle’s location precisely, providing an accurate 3D map for the processor even with patchy bandwidth.
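As a toy illustration of the principle (not Mobileye’s actual algorithm), once the positions of a few fixed landmarks are known from the map, measured distances to them pin down the vehicle’s own position. In two dimensions, three landmarks are enough:

```python
# Toy 2D landmark localisation: given three mapped landmarks and the
# measured distance to each, solve for the vehicle position.
# Illustrative only - real systems fuse many landmarks, camera
# bearings and inertial data, and work in 3D.
def locate(landmarks, ranges):
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    r1, r2, r3 = ranges
    # Subtracting the circle equations pairwise gives a linear system:
    # 2(x2-x1)x + 2(y2-y1)y = r1^2 - r2^2 + x2^2 - x1^2 + y2^2 - y1^2
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # Cramer's rule for the 2x2 system
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Vehicle actually at (3, 4); landmarks at three mapped positions.
pos = locate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5])
print(pos)  # approximately (3.0, 4.0)
```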

It’s a similar process to that used in the HERE map, which will feature in BMW’s iNext car. Nvidia’s HD mapping technology is already being introduced (although not yet fully functional) in new Tesla models. Its proprietary ‘structure-from-motion’ algorithms take data from multiple cameras on the vehicle, which can then be converted into detailed 3D maps.

Used in combination with the car’s inertial sensor, GPS data and cameras, the system can precisely position key landmarks. The car’s own AI supercomputer and cloud-based Visual Simultaneous Localisation and Mapping (VSLAM) technologies work together to help the vehicle navigate a safe course.

At CES this year it was clear that it’s technology companies like Mobileye, Nvidia, HERE and others – including the UK’s Oxford Technical Solutions – that are developing the intelligence to enable automation. In many cases, the automobile manufacturers are applying technologies they do not control. It’s a different situation from the OEM manufacturing relationship that has previously existed.

In another interesting potential shift in the power dynamic, these tech companies have loose affiliations, and are comfortable with their technologies being used by multiple manufacturers, claiming this helps to speed up the development process.

“Human activity and real life is complex and messy, full of unexpected and uncoordinated events that happen at a moment’s notice,” says Maarten Sierhuis, a director at the Nissan Research Center in Silicon Valley. While organisations like Google press ahead with aspirations for full automation, Nissan is taking what Sierhuis claims is a more ‘human-centred’ approach. The company is developing a hybrid approach to automation, which it has labelled ‘Seamless Autonomous Mobility’ (SAM) and designed primarily for fleets of commercial vehicles. Whereas in other systems machines make a decision, in SAM it’s actually a human – but not necessarily the one in the car.

“When an autonomous vehicle comes across an unexpected obstacle, vehicle sensors respond,” says Sierhuis. “All vehicles in the fleet are under constant remote visual supervision by a fleet manager, who temporarily takes control of the vehicle, plotting a course around the incident.” Once the obstacle has been successfully navigated, this new course is shared across the networked fleet. In a fascinating paradox, the computer is intelligent enough to realise when it can’t cope with a situation.
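A toy sketch of that workflow (the class and method names are invented for illustration, not Nissan’s actual SAM interface) might look like this:

```python
# Toy model of the SAM idea: a vehicle that detects an obstacle it
# cannot handle asks a human fleet manager for a detour, and the
# solution is then shared with every vehicle in the fleet.
# All names here are invented for illustration.
class Vehicle:
    def __init__(self, vid):
        self.vid = vid
        self.known_detours = {}  # obstacle -> detour route

    def encounter(self, obstacle, fleet):
        if obstacle in self.known_detours:
            return self.known_detours[obstacle]  # reuse a shared solution
        # The computer recognises it cannot cope and escalates.
        return fleet.request_human_help(self.vid, obstacle)

class FleetManager:
    def __init__(self, vehicles):
        self.vehicles = vehicles

    def request_human_help(self, vid, obstacle):
        # A human supervisor plots a course around the incident...
        detour = f"detour-around-{obstacle}"
        # ...and the new course is shared across the networked fleet.
        for v in self.vehicles:
            v.known_detours[obstacle] = detour
        return detour

cars = [Vehicle("A"), Vehicle("B")]
fleet = FleetManager(cars)
cars[0].encounter("roadworks", fleet)         # escalates to the human
print(cars[1].encounter("roadworks", fleet))  # detour-around-roadworks
```

The second car never needs the human: the detour worked out once is already in its local map, which is the efficiency SAM is betting on.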

The Nissan approach is based on a fundamentally different perception of automation that challenges many of the assumptions made by others. “We developed SAM because the domains in which we will employ autonomous vehicles will always need some human-to-human communication, coordination, cooperation, and most of all, collaboration,” Sierhuis says. Nissan is banking on the fact that the sort of automation envisaged by many isn’t just impractical, it’s also potentially unsafe. “We believe that we will always need a human in the loop,” adds Sierhuis.

This view is supported by emerging evidence that increasing automation may affect our ability to perceive and deal with hazards. In research conducted into the hazard perception of drivers in automated vehicles, Carsten suggests the move to automation poses a fundamentally new question for drivers.

When driving a manual car, drivers need to decide when to act. In automated vehicles (or those equipped with some degree of autopilot system) they need to decide both if and when.

The issue was brought into sharp focus in May 2016 when a Tesla car with the AutoPilot system engaged ploughed into the side of an 18-wheel truck and trailer at 74mph, killing the driver instantly.

While the company has posited a number of theories as to why the crash happened – and has subsequently updated its AutoPilot software – it has been keen to avoid taking responsibility for the incident, pointing out that the driver could have taken charge at any time.

In its Pathways to Driverless Cars consultation document, the UK’s Centre for Connected and Autonomous Vehicles has set out its approach to welcoming driverless cars onto the streets. The consultation clarifies the government’s view that, at all times, the vehicle’s driver will remain responsible throughout the journey – for both legal and insurance purposes.

Rather than reducing the burden of insurance, the consultation proposes that compulsory insurance requirements be extended to automated vehicles, with a policy covering the manufacturers’ and other entities’ product liability, as well as injuries to third parties. Automated vehicles will be classified differently, which is likely to mean more expensive premiums until the technology has proved itself safe.

The Government, keen perhaps to make the UK an attractive test-bed for automotive organisations in the post-Brexit world, is proposing an agile system in which technologies like ADAS are assessed as they come to market. It is hoped that when Level 3 and 4 technologies hit the mass market around 2020, the UK will have a legal and regulatory framework ready to accept them.

The consultation goes some way to clarifying the position of automated vehicles on the road, but there’s very little information on what standards apply inside the vehicle.

There’s a grudging acceptance that manufacturers will need to coalesce around a series of standards before automated cars are introduced more widely. “We can’t afford a proliferation of symbols and information across hundreds of different vehicle makes and models,” says Carsten.

Currently all vehicles adhere to ISO 2575:2010, Road vehicles – Symbols for controls, indicators and tell-tales. Issued in 2010, the standard contains 350 universal symbols and their descriptions. Across the world there will need to be a similar agreement about every aspect of the human interaction with automation. 

In the past, manufacturers may have been trusted to establish their own standards and frameworks, but following the VW emissions testing scandal, this is no longer likely. However, creating an ISO standard could take years if things run smoothly and potentially decades if they don’t.

It’s clear that automation to at least Levels 3 and 4 will be on our streets potentially within the next five years, and certainly within a decade. Among industry insiders, it’s generally accepted that we are unlikely to see Level 5 driverless vehicles on the streets of our cities any time soon.

The claims of notoriously disruptive technology companies keen to stay in the headlines, like Google and Tesla, should be treated with caution. Whispers about Apple’s entrance into the market persist, but without firm evidence it could just be an elaborate PR exercise to help maintain the perception of the company as being at the cutting edge of technology.

Sierhuis believes there are even deeper questions we need to ask ourselves as a society. “Ultimately the question is whether we want intelligent autonomous systems without ever having any human involvement,” he says. At Nissan they believe the answer is no. Even the introduction of Nissan’s hybrid approach is some way off. “Our vision is full deployment of SAM by 2040,” he says.

Unencumbered by such weighty philosophical issues, BMW’s Wisselmann agrees. “The requirements for a worldwide launch will still be decades away.”

However, for a man who has spent his life attempting to visualise the future of automated travel, Rinspeed’s Frank M Rinderknecht is more ambitious: “On some dedicated (maybe even geo-fenced) stretches of highways 2020 should be a realistic date,” he declares.

From then on, he believes progress will be incremental as the technology improves and gains regulatory and public acceptance. Across the world, however, the question of when and where is still anybody’s guess.

Trailblazers: technology leaders compared

Google – Waymo

Since 2009, Waymo vehicles (the new name for Google’s driverless car project) have completed over 2 million miles on the road, with fewer than 20 reported accidents. In 2017, 100 new automated vehicles will be rolled out for testing in four American states.

Waymo may claim to be focused on building its own vehicle capable of Level 5 automation, but experts have suggested the tech giant may be more interested in developing the technology and software used within automated vehicles.

In 2009 a top-end lidar cost $75,000, but the tech giant claims to have cut this cost by 90 per cent. The increasing adoption of AI and the enormous storage and processing power Google already possesses could also be put to new use.


Tesla

Since 2014 Tesla cars have driven over 222 million miles in AutoPilot mode, the company’s automated motorway assistance system. In October 2016 the firm said all new cars would ship with hardware that, theoretically, makes Level 5 automation possible. Tesla CEO Elon Musk has claimed that by the end of 2017 the company would have produced a car that can drive itself from Los Angeles to New York City.

The company spends a modest $700m a year on R&D (compared to VW’s $10bn), but few would bet against Tesla being one of the first to achieve Level 5 automation – although all agree 2018 is highly unlikely.


Apple

The world’s biggest company has always been secretive about its new products, but the rumours about its involvement in driverless car technology continue. Steve Jobs may have hinted at Apple’s interest in the market back in 2008, but it was only when the tech giant sent a letter to the US highways regulator in late 2016 that this was confirmed.

Apple’s R&D spend has increased from $3bn to $10bn, suggesting the firm is working on something big, but it’s still unclear whether this will be a driverless car or not.

Those looking for clues online will find very little, but that’s the way Apple wants it.
