Crossroads of a busy city centre

Autonomous vehicles get help to safely navigate tricky crossroads

Image credit: Li Kang Long | Dreamstime.com

Researchers from the Massachusetts Institute of Technology (MIT) and Toyota have developed a new model which alerts driverless cars when it’s safest to merge into traffic at crossroads with obstructed views.

Navigating crossroads can be dangerous for autonomous vehicles and humans alike. According to a 2018 US Department of Transportation study, in 2016 around 23 per cent of fatal and 32 per cent of non-fatal US traffic accidents occurred at crossroads (aka 'intersections' in official US documents).

Automated systems that help driverless cars and human drivers steer through intersections require direct visibility of the objects they must avoid. However, when their line of sight is blocked by buildings or other obstructions, these systems tend to fail.

To overcome this challenge, the researchers developed a model that instead uses its own uncertainty to estimate the risk of potential collisions or other traffic disruptions at such crossroads.

The system weighs several critical factors, including all nearby visual obstructions, sensor noise and errors, as well as the speed of other cars and the attentiveness of other drivers.

Based on these measured risks, the system may advise the car to stop, pull into traffic, or creep forward to gather more data.
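
For illustration only, here is a minimal sketch (not the MIT/Toyota implementation) of how an aggregated risk estimate could be mapped to those three actions; the thresholds are invented placeholders:

```python
# Hypothetical sketch: map an overall collision-risk estimate in [0, 1]
# to the three actions the article describes. Thresholds are illustrative.
from enum import Enum, auto

class Action(Enum):
    STOP = auto()    # wait where the car is
    CREEP = auto()   # edge forward to gather more sensor data
    GO = auto()      # pull into traffic

def choose_action(total_risk: float,
                  go_threshold: float = 0.05,
                  creep_threshold: float = 0.30) -> Action:
    if total_risk <= go_threshold:
        return Action.GO
    if total_risk <= creep_threshold:
        return Action.CREEP
    return Action.STOP

print(choose_action(0.02))  # Action.GO
print(choose_action(0.20))  # Action.CREEP
print(choose_action(0.60))  # Action.STOP
```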

Model navigating intersection

MIT demonstrates the model, which weighs various uncertainties and risks to help autonomous vehicles determine when it's safe to merge into traffic at intersections with objects obstructing the view.

Image credit: MIT

“When you approach an intersection, there is a potential danger for a collision. Cameras and other sensors require line of sight. If there are occlusions, they don’t have enough visibility to assess whether it’s likely that something is coming,” said Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

She added: “In this work, we use a predictive-control model that’s more robust to uncertainty, to help vehicles safely navigate these challenging road situations.”

The team tested the system in more than 100 trials of remote-controlled cars turning left at a busy, obstructed crossroads in a mock city, with other cars constantly driving through the cross street. These experiments involved both fully autonomous cars and cars driven by humans but assisted by the system.

Across the experiments, the researchers found that the system successfully helped the cars avoid a collision from 70 to 100 per cent of the time, depending on various factors. By contrast, similar models implemented in the same remote-controlled cars sometimes couldn't complete a single trial run without a collision.

The model is specifically designed for road junctions in which there is no stoplight and a car must yield before manoeuvring into traffic on the cross street, such as when taking a left turn across multiple lanes or navigating a roundabout.

As part of their work, the researchers divided a road into small segments, allowing the model to determine whether any given segment is occupied and to estimate a conditional risk of collision.
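
As a rough illustration (the names, segment length and the 0.5 prior are assumptions for this example, not values from the paper), the segmented road might be represented like this:

```python
# Illustrative only: a cross street divided into fixed-length segments, each
# carrying a probability of being occupied by traffic and a visibility flag.
from dataclasses import dataclass

@dataclass
class Segment:
    start_m: float            # distance along the cross street (metres)
    length_m: float
    p_occupied: float = 0.5   # prior: unseen segments are treated as uncertain
    visible: bool = False     # whether the ego car's sensors can see it

def build_segments(road_length_m: float, segment_m: float) -> list[Segment]:
    """Split a stretch of cross street into equal segments."""
    return [Segment(start_m=i * segment_m, length_m=segment_m)
            for i in range(int(road_length_m // segment_m))]

segments = build_segments(road_length_m=60.0, segment_m=5.0)
print(len(segments), "segments of", segments[0].length_m, "m each")
```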

Autonomous cars are equipped with sensors that measure the speed of other cars on the road. When a sensor clocks a passing car travelling into a visible segment, the model uses that speed to predict the car’s progression through all other segments.
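
A toy version of that projection step might look like the following, assuming a fixed segment length and a single measured speed (the real model treats these quantities probabilistically):

```python
# Sketch: once a passing car is seen in a visible segment with a measured
# speed, estimate when it will reach each downstream segment.
def arrival_times(detected_segment: int,
                  speed_mps: float,
                  segment_length_m: float,
                  n_segments: int) -> dict[int, float]:
    """Seconds until the detected car enters each later segment."""
    times = {}
    for seg in range(detected_segment + 1, n_segments):
        distance_m = (seg - detected_segment) * segment_length_m
        times[seg] = distance_m / max(speed_mps, 0.1)  # guard against zero speed
    return times

# A car clocked at 12 m/s in segment 2 of a 12-segment, 5m-per-segment road:
print(arrival_times(detected_segment=2, speed_mps=12.0,
                    segment_length_m=5.0, n_segments=12))
```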

A probabilistic 'Bayesian network' also considers uncertainties – such as noisy sensors or unpredictable speed changes – to determine the likelihood that each segment is occupied by a passing car.
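
The sketch below shows a single Bayesian update for one segment under a noisy binary detection, a drastically simplified stand-in for the network described above; the hit and false-alarm rates are invented for the example:

```python
# Minimal Bayesian update for one segment, assuming a noisy binary sensor
# with known hit and false-alarm rates (both values are illustrative).
def update_occupancy(prior: float,
                     detection: bool,
                     p_hit: float = 0.9,          # P(detect | occupied)
                     p_false_alarm: float = 0.1   # P(detect | empty)
                     ) -> float:
    """Posterior probability that the segment is occupied, given one reading."""
    if detection:
        likelihood_occ, likelihood_empty = p_hit, p_false_alarm
    else:
        likelihood_occ, likelihood_empty = 1 - p_hit, 1 - p_false_alarm
    numerator = likelihood_occ * prior
    denominator = numerator + likelihood_empty * (1 - prior)
    return numerator / denominator

p = 0.5                                   # uncertain prior
p = update_occupancy(p, detection=False)  # one "nothing seen" reading
print(round(p, 3))                        # occupancy probability drops below 0.5
```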

However, because of nearby obstructions, this single measurement may not be sufficient. If a sensor can never see a designated road segment, for example, the model assigns it a high likelihood of being occupied, and from where the car is positioned there's an increased risk of collision if it simply pulls out fast into traffic.

This, therefore, encourages the car to creep forward to get a better view of all occluded segments. As the car does so, the model lowers its uncertainty and, in turn, the risk.
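
The toy calculation below illustrates that effect: segments the sensors cannot see keep a pessimistic occupancy probability, so the aggregate risk falls as creeping forward makes more segments visible (all probabilities here are made up):

```python
# Illustration of creep-forward: occluded segments keep a pessimistic
# occupancy probability; visible ones are mostly clear.
def risk_given_visibility(n_segments: int, n_visible: int,
                          p_occluded: float = 0.5,
                          p_visible_occupied: float = 0.05) -> float:
    """Probability that at least one relevant segment is occupied."""
    p_all_clear = 1.0
    for seg in range(n_segments):
        p_occ = p_visible_occupied if seg < n_visible else p_occluded
        p_all_clear *= (1.0 - p_occ)
    return 1.0 - p_all_clear

for visible in range(0, 7, 2):
    print(f"{visible} visible segments -> risk {risk_given_visibility(6, visible):.2f}")
```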

Even if the model does everything correctly, there’s still human error to consider, so the model also estimates the awareness of other drivers.

“These days, drivers may be texting or otherwise distracted, so the amount of time it takes to react may be a lot longer,” said Stephen McGill, a research scientist at the Toyota Research Institute (TRI). “We model that conditional risk as well.”

Estimating such conditional risks involves computing the probability that the other driver did or did not see the autonomous car pulling into the intersection.

To do this, the model looks at the number of segments a travelling car has passed through before the crossroads. The more segments it has occupied before reaching the crossroads, the higher the likelihood that its driver has spotted the autonomous car and the lower the risk of collision.
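
A hypothetical way to encode that relationship, with the functional form and constants chosen purely for illustration, is:

```python
# Hypothetical awareness model: the more segments the crossing car has
# already travelled through, the more likely its driver has noticed the
# autonomous car waiting at the junction.
import math

def p_driver_aware(segments_traversed: int, k: float = 0.5) -> float:
    """Saturating probability that the other driver has seen the ego car."""
    return 1.0 - math.exp(-k * segments_traversed)

def collision_risk_from_awareness(segments_traversed: int,
                                  risk_if_unaware: float = 0.4,
                                  risk_if_aware: float = 0.05) -> float:
    """Condition the collision risk on whether the driver is likely aware."""
    p_aware = p_driver_aware(segments_traversed)
    return p_aware * risk_if_aware + (1 - p_aware) * risk_if_unaware

for n in (0, 2, 5):
    print(n, "segments traversed -> risk", round(collision_risk_from_awareness(n), 3))
```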

The model sums all the risk estimates from traffic speed, occlusions, noisy sensors and driver awareness, and also considers how long it will take the autonomous car to steer a pre-planned path through the crossroads, as well as all safe stopping spots for crossing traffic. Combined, these produce a total risk estimate.
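
One plausible (but assumed, not published) way to combine those terms into a single estimate, treating them as independent hazards and scaling by how long the crossing leaves the car exposed, is sketched below:

```python
# Assumed aggregation: combine per-segment occupancy, sensor noise and
# driver-awareness risks, scaled by the time the planned path takes.
def total_risk(per_segment_occupancy: list[float],
               sensor_noise_risk: float,
               driver_unaware_risk: float,
               crossing_time_s: float,
               reference_time_s: float = 4.0) -> float:
    """Combine individual risk terms into one collision-risk estimate."""
    # Probability that at least one segment on the planned path is occupied.
    p_path_clear = 1.0
    for p_occ in per_segment_occupancy:
        p_path_clear *= (1.0 - p_occ)
    occupancy_risk = 1.0 - p_path_clear
    # A longer crossing leaves the car in potential conflict for longer.
    exposure = min(crossing_time_s / reference_time_s, 1.0)
    p_safe = ((1.0 - occupancy_risk)
              * (1.0 - sensor_noise_risk)
              * (1.0 - driver_unaware_risk))
    return exposure * (1.0 - p_safe)

print(round(total_risk([0.05, 0.10, 0.60], sensor_noise_risk=0.05,
                       driver_unaware_risk=0.10, crossing_time_s=3.0), 3))
```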

That risk estimate gets updated continuously for wherever the car is located at the crossroads. In the presence of multiple occlusions, for instance, the car will creep forward, bit by bit, to reduce uncertainty.

When the risk estimate is low enough, the model tells the car to drive through the crossroads without stopping. However, the researchers found that lingering in the middle of the crossroads for too long also increases the risk of a collision.
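
Putting the earlier sketches together, the creep-and-re-evaluate behaviour could be caricatured as a simple loop, again with invented numbers:

```python
# Caricature of the overall behaviour: re-evaluate the risk at each creep
# step and commit to crossing once it drops below a threshold.
def creep_until_safe(initial_risk: float,
                     risk_drop_per_step: float = 0.06,
                     go_threshold: float = 0.15,
                     max_steps: int = 20) -> int:
    """Number of creep steps taken before crossing is judged safe."""
    risk = initial_risk
    for step in range(max_steps):
        if risk <= go_threshold:
            return step               # low enough: drive through without stopping
        risk -= risk_drop_per_step    # creeping reveals more occluded segments
    return max_steps                  # never safe within the horizon: keep waiting

print(creep_until_safe(initial_risk=0.45))  # e.g. 5 creep steps before going
```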

Running the model on the remote-controlled cars in real time indicates that it's efficient and fast enough to deploy to full-scale autonomous test cars in the near future, the researchers said. Many other models are too computationally heavy to run on those cars.

However, the team added that the model still needs far more rigorous testing before it could be deployed in real-world production vehicles.

At the end of October, MIT developed a system that allows driverless cars to anticipate when vehicles or people are coming around the corner by analysing changes to shadows on the ground.
