Neural network predicts the future, learns to prepare salad

[Image: chef preparing a salad. Credit: Dreamstime]

Researchers based at the University of Bonn have developed software capable of predicting what will happen minutes into the future, using the example of salad preparation.

“We want to predict the timing and duration of activities minutes or even hours before they happen,” said Professor Jürgen Gall of the University of Bonn’s computer vision group.

This could allow robots to work helpfully alongside humans by equipping them with the ability to anticipate actions, a skill that comes so naturally to people. For instance, a robotic butler could pass its human overlord items of clothing as they get dressed, or help them run a bubble bath by learning from example.

Teaching machines these skills is a major challenge, incorporating computer vision, machine learning and robotic movement and dexterity. This field remains in its early stages. Gall and his colleagues have taken a step forward, however, in developing machine-learning software capable of teaching itself to estimate the timing and duration of activities minutes into the future.

To teach the software to predict the future, the Bonn scientists provided it with 40 videos of people preparing salads, totalling approximately four hours of footage. The salads varied, each requiring an average of 20 different actions. From just these four hours of video, the machine-learning algorithm was able to ‘understand’ which actions followed others during salad preparation.
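
The article does not describe how the model works internally, but the idea of ‘understanding’ which actions follow others can be illustrated with a deliberately simplified sketch: counting action-to-action transitions and average action durations from labelled training videos. The data format, function names and the simple transition-counting approach below are illustrative assumptions, not the Bonn group's actual method.

```python
from collections import defaultdict

def learn_statistics(training_videos):
    """Learn which actions tend to follow which, and how long each action
    typically lasts, from frame-level action labels.

    `training_videos` is assumed to be a list of label sequences, one label
    per video frame, e.g. ["cut_tomato", "cut_tomato", ..., "add_dressing"]
    (an illustrative format, not the real annotation scheme).
    """
    transitions = defaultdict(lambda: defaultdict(int))  # action -> next action -> count
    durations = defaultdict(list)                        # action -> segment lengths (frames)

    for frames in training_videos:
        # Collapse the per-frame labels into (action, length) segments.
        segments = []
        for label in frames:
            if segments and segments[-1][0] == label:
                segments[-1][1] += 1
            else:
                segments.append([label, 1])

        # Record every observed "action A is followed by action B" pair.
        for (action, length), (next_action, _) in zip(segments, segments[1:]):
            transitions[action][next_action] += 1
            durations[action].append(length)
        if segments:
            durations[segments[-1][0]].append(segments[-1][1])

    mean_durations = {a: sum(v) / len(v) for a, v in durations.items()}
    return transitions, mean_durations
```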

“Then we tested how successful the learning process was,” said Gall. “For this, we confronted the software with videos that it had not seen before.”

These new videos depicted different salads being prepared. To test the algorithm’s ability to anticipate future actions, the researchers showed it the first minute or two of each video and required it to predict what would happen through the rest of the preparation process.
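
In the same simplified terms, this test can be pictured as a rollout: starting from the last action observed in the opening minute or two, repeatedly pick the most frequent follow-up action together with its average duration until the forecast horizon is filled. This greedy rollout is a hedged illustration of what the test demands, not the group's published method; it reuses the hypothetical transitions and mean_durations statistics from the sketch above.

```python
def forecast(observed_frames, transitions, mean_durations, horizon_frames):
    """Predict an action label for every future frame up to `horizon_frames`,
    given the frame labels observed so far (a simplified greedy rollout)."""
    current = observed_frames[-1]
    remaining = int(mean_durations.get(current, 1))  # naive: ignores how far into the action we already are
    predicted = []

    while len(predicted) < horizon_frames:
        predicted.extend([current] * min(remaining, horizon_frames - len(predicted)))
        followers = transitions.get(current)
        if not followers:
            break  # no known follow-up action; stop extending the forecast
        current = max(followers, key=followers.get)  # most frequent next action
        remaining = int(mean_durations.get(current, 1))

    # Pad with the last prediction if the rollout stopped early.
    if predicted and len(predicted) < horizon_frames:
        predicted += [predicted[-1]] * (horizon_frames - len(predicted))
    return predicted
```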

“Accuracy was over 40 per cent for short forecast periods, but then dropped the more the algorithm had to look into the future,” Gall said. For activities more than three minutes ahead, accuracy fell to 15 per cent.

Given that the algorithm was only judged accurate when it suggested both the correct activity and its timing, and that in some cases it had to recognise the steps in the footage itself rather than being told them explicitly, this performance is a step forward for predictive algorithms.
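
The article does not spell out the exact metric, but an accuracy of this kind can be pictured as frame-wise scoring over the forecast window: a future frame counts as correct only if it carries the right action label at the right time, which is what couples the activity and its timing. A minimal sketch, reusing the illustrative data format above:

```python
def framewise_accuracy(predicted, ground_truth):
    """Fraction of future frames whose predicted action label matches the
    ground-truth label at the same frame (right action AND right timing)."""
    n = min(len(predicted), len(ground_truth))
    if n == 0:
        return 0.0
    return sum(p == g for p, g in zip(predicted[:n], ground_truth[:n])) / n

# Hypothetical usage at two horizons (fps = frames per second of the video):
# accuracy_30s = framewise_accuracy(predicted[:30 * fps], future_truth[:30 * fps])
# accuracy_3m  = framewise_accuracy(predicted[:180 * fps], future_truth[:180 * fps])
```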
