Robots taught to recognise abstract commands
In a step towards fluid communication between humans and robots, researchers at Brown University have improved the ability of robotic systems to understand and follow spoken commands, whether highly specified or abstract.
A robot’s ability to respond to casually spoken orders could make such machines invaluable assistants in industry, hospitality, healthcare and many other sectors. At present, however, these systems tend to require clearly spoken, precisely specified instructions, free of flourishes such as metaphorical language.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” said Dilip Arumugam, a graduate researcher at Brown University.
Abstract commands may imply sub-steps required to complete an order; for instance, a robot ordered to “grab” an object will need to move towards the object, grasp it, and lift it. Being unable to determine the specificity of a command – and hence the appropriate action – can lead robotic systems to over- or under-plan tasks.
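The “grab” example above can be pictured as a lookup that expands an abstract verb into the primitive steps it implies. The table and names below are purely illustrative, not taken from the Brown system:

```python
# Hypothetical decomposition table: each abstract action expands into the
# lower-level steps it implies (names are illustrative, not from the paper).
SUBSTEPS = {
    "grab": ["move-to-object", "grasp", "lift"],
    "take-to": ["grab", "carry", "release"],
}

def expand(action):
    """Recursively expand an abstract action into primitive steps."""
    steps = SUBSTEPS.get(action)
    if steps is None:
        return [action]  # already a primitive step
    result = []
    for step in steps:
        result.extend(expand(step))
    return result
```

Because entries can reference other abstract actions (here “take-to” contains “grab”), the expansion is recursive, mirroring how a high-level order bottoms out in concrete motions.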
The new system developed by Arumugam and his colleagues at Brown University analyses language in order to infer a level of abstraction.
“That allows us to couple our task inference as well as our inferred specificity level with a hierarchical planner, so we can plan at any level of abstraction,” he said. “In turn, we can get dramatic speed-ups in performance when executing tasks compared to existing systems.”
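The coupling Arumugam describes – infer the level of abstraction first, then plan at that level – can be sketched roughly as below. The keyword rules and action names are stand-ins for the system’s trained language models, assumed here only for illustration:

```python
# Rough sketch: infer a command's abstraction level, then dispatch to a
# planner operating at that level. Keyword rules stand in for the trained
# language model; all action names are hypothetical.

def infer_abstraction(command: str) -> str:
    """Toy inference: movement words imply low-level, rooms imply high-level."""
    low_level = {"north", "south", "east", "west", "forward", "turn"}
    words = set(command.lower().split())
    if words & low_level:
        return "low"
    if "room" in words:
        return "high"
    return "medium"

def plan(command: str) -> list:
    """Plan at the inferred level rather than always expanding to primitives."""
    level = infer_abstraction(command)
    if level == "high":
        # Plan over rooms; sub-goals are expanded only when executed.
        return ["navigate-to-object", "attach", "navigate-to-room", "detach"]
    if level == "medium":
        return ["move-to", "grasp", "lift"]
    return [command.lower()]
```

Planning at the inferred level is what yields the speed-up: a high-level command never forces the planner to search over every low-level motion up front.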
The researchers developed their model using a virtual task domain called Cleanup World, which consisted of colour-coded rooms housing a robot and a chair. Volunteers watched the robot complete tasks and suggested how they would phrase commands for those tasks.
The instructions ranged from highly specified directions to abstract, high-level commands such as “Take the chair to the blue room”. Using the range of instructions given by the volunteers, the researchers trained their system to differentiate between high-level, medium-level and low-level abstractions and, consequently, how to infer actions appropriately.
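Training a system to tell abstraction levels apart from volunteers’ phrasings can be sketched with a toy unigram model; the Brown researchers used trained statistical language models, so this count-based scorer is illustrative only:

```python
# Toy sketch of learning abstraction levels from labelled example commands.
# A simple per-level unigram count model stands in for the trained
# statistical language models used in the actual research.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (command, level) pairs -> per-level word counts."""
    counts = defaultdict(Counter)
    for command, level in examples:
        counts[level].update(command.lower().split())
    return counts

def classify(counts, command):
    """Pick the level whose training vocabulary best overlaps the command."""
    words = command.lower().split()
    def score(level):
        total = sum(counts[level].values()) or 1
        return sum(counts[level][word] / total for word in words)
    return max(counts, key=score)
```

Given a handful of labelled phrasings per level, unseen commands are assigned the level whose training vocabulary they most resemble.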
The researchers found that their system allowed virtual and real Roomba-like robots to identify the specificity of instructions successfully, allowing them to respond to commands within one second 90 per cent of the time. When no level of abstraction was inferred, half of the tasks took more than 20 seconds to plan.
“We ultimately want to see robots that are helpful partners in our homes and workplaces,” said Professor Stefanie Tellex, a computer scientist at Brown University. “This work is a step toward the goal of enabling people to communicate with robots in much the same way that we communicate with each other.”