
Human feedback helps software detect ‘blind spots’ in self-driving car responses

Illustration of the inside view of an autonomous vehicle. Image credit: Dreamstime

Research from the Massachusetts Institute of Technology (MIT) has shown that ‘blind spots’ in the artificial intelligence (AI) of self-driving cars could be corrected using input from humans.

Collaborating with tech giant Microsoft, the MIT team developed a model that first puts an AI system through simulation training and then has a human work through the same scenario in the real world. By observing where the human’s actions diverge from its own, the AI learns which of its behaviours need to change.
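In outline, the comparison works like the following simplified sketch; the function names and toy states are illustrative assumptions, not the researchers’ actual code. States where the human’s action deviates from the trained policy’s choice are flagged as candidate blind spots, and a single state can collect conflicting labels across demonstrations.

    # Simplified sketch of the core idea (not the authors' code):
    # compare what a simulation-trained policy would do with what a
    # human actually does in the real world, and flag disagreements
    # as candidate 'blind spots'.
    from collections import defaultdict

    def find_candidate_blind_spots(policy, human_demonstrations):
        """policy: maps a state to the action the trained AI would take.
        human_demonstrations: (state, human_action) pairs collected
        while a human handles the same scenarios in the real world."""
        feedback = defaultdict(list)
        for state, human_action in human_demonstrations:
            # Label the state 'unsafe' when the human deviates from
            # the policy's choice, 'safe' when the two agree. One state
            # can collect conflicting labels across demonstrations.
            agrees = policy(state) == human_action
            feedback[state].append("safe" if agrees else "unsafe")
        return feedback

    # Toy policy that treats every large white vehicle the same way.
    def policy(state):
        return "keep_lane"

    demos = [("white_truck_ahead", "keep_lane"),
             ("ambulance_lights_ahead", "pull_over"),  # human deviates
             ("ambulance_lights_ahead", "keep_lane")]  # noisy label
    print(dict(find_candidate_blind_spots(policy, demos)))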

So far, the system has only been tested in video games. However, study author Ramya Ramakrishnan said: “The model helps autonomous systems better know what they don’t know.”

Ramakrishnan, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory, added: “Many times, when these systems are deployed, their trained simulations don’t match the real-world setting and they could make mistakes, such as getting into accidents.

“The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

During the study, the team used the example of a driverless car system that cannot tell the difference between a white truck and an ambulance with its lights flashing, and that only learns to move out of the way of the ambulance after receiving feedback from the human tester.

The researchers also used an algorithm known as the Dawid-Skene method, which uses machine learning to aggregate the noisy feedback into probability estimates and to spot patterns in the scenario responses, helping it determine whether something is truly safe or still potentially problematic.

According to the scientists, this method avoids the “extremely dangerous” situation in which the system becomes overconfident and marks a situation as safe despite only making the correct decision 90 per cent of the time. Instead, the system remains aware of the remaining 10 per cent and looks for further weaknesses it may need to address.
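As a rough illustration of how this kind of aggregation produces a calibrated probability rather than an overconfident majority vote, the sketch below implements a minimal binary-label version of Dawid-Skene-style expectation-maximisation. The paper’s actual formulation differs in detail, and all names here are assumptions.

    # Minimal, illustrative Dawid-Skene-style aggregation for binary
    # labels (0 = 'safe', 1 = 'unsafe'). EM alternates between
    # estimating each state's probability of being a blind spot and
    # each feedback source's reliability, so a single noisy source
    # cannot tip a state to 'safe' on its own.
    import numpy as np

    def dawid_skene(labels, n_iter=50):
        """labels: (n_states, n_sources) array of 0/1, -1 = missing."""
        n_states, n_sources = labels.shape
        mask = labels >= 0
        # Initialise posteriors with each state's mean observed label.
        post = np.where(mask, labels, 0).sum(1) / np.maximum(mask.sum(1), 1)
        for _ in range(n_iter):
            # M-step: class prior and per-source confusion matrices,
            # conf[s, true_label, observed_label], with smoothing.
            prior = post.mean()
            conf = np.full((n_sources, 2, 2), 1e-6)
            for t, s in zip(*np.nonzero(mask)):
                conf[s, :, labels[t, s]] += [1 - post[t], post[t]]
            conf /= conf.sum(axis=2, keepdims=True)
            # E-step: posterior P(blind spot | all observed labels).
            for t in range(n_states):
                p0, p1 = 1 - prior, prior
                for s in np.nonzero(mask[t])[0]:
                    p0 *= conf[s, 0, labels[t, s]]
                    p1 *= conf[s, 1, labels[t, s]]
                post[t] = p1 / (p0 + p1)
        return post  # per-state probability of being a blind spot

    # Four states labelled by three sources; the third source is noisy.
    L = np.array([[0, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 0]])
    print(dawid_skene(L).round(2))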

“When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently,” Ramakrishnan said.

“If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution.”
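A deployment-time check along these lines might look like the following minimal sketch; the threshold, function names and toy states are assumptions for illustration, not the researchers’ API.

    # Illustrative deployment loop (assumed names and threshold): defer
    # to a human whenever the learned model says a state is likely to
    # be a blind spot, otherwise act on the trained policy.
    BLIND_SPOT_THRESHOLD = 0.5  # assumed tunable safety threshold

    def act(state, policy, blind_spot_probability, ask_human):
        if blind_spot_probability(state) > BLIND_SPOT_THRESHOLD:
            # Likely blind spot: query a human for the acceptable action.
            return ask_human(state)
        # Otherwise act autonomously on the trained policy.
        return policy(state)

    # Toy stand-ins for the learned components:
    action = act("ambulance_lights_ahead",
                 policy=lambda s: "keep_lane",
                 blind_spot_probability=lambda s: 0.9,
                 ask_human=lambda s: "pull_over")
    print(action)  # -> pull_over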

On Monday, a new survey by the AA found that more than 21,000 of its members would want driverless cars to prioritise saving the lives of children over themselves in a crash.

Separately, on 15 January 2019, automaker Volkswagen announced a collaboration agreement with US-based Ford Motor Co to focus on the development of electric and autonomous vehicles.

