The camera allows robots to get a more accurate understanding of their surrounding environment

Camera-equipped robot hand demonstrates spatial awareness

A camera attached to a robot's hand has been shown to give it superior spatial awareness by enabling the rapid creation of a 3D model of its environment.

Before a robot arm can reach into a tight space or pick up a delicate object, the robot needs to know precisely where its hand is.

Doing so with imprecise cameras and wobbly arms in real time is difficult, but researchers at Carnegie Mellon University's Robotics Institute have shown that they can significantly improve a robot's accuracy by incorporating the arm itself as a sensor, using the angle of its joints to better determine the pose of the camera.

As sensors have grown smaller and more power-efficient, placing a camera or other sensor in the hand of a robot has become a realistic proposition.

According to Professor Siddhartha Srinivasa, who worked on the project, this allows greater spatial awareness because robots "usually have heads that consist of a stick with a camera on it" and therefore cannot bend over like a person to get a better view of a workspace.

A popular solution for environmental 3D mapping by robots is using a technique known as simultaneous localisation and mapping (SLAM), in which the robot pieces together input from sensors such as cameras, laser radars and wheel odometry to create a 3D map of the new environment and to figure out where the robot is within that 3D space.

"There are several algorithms available to build these detailed worlds, but they require accurate sensors and a ridiculous amount of computation," Srinivasa said.

But the placement of the camera on the arm allows the robot to know where its hand is relative to objects in its environment.

SLAM algorithms typically assume that little is known about the pose of the sensors, but when the camera is mounted on a robot arm, the geometry of the arm constrains how the camera can move, making its pose predictable.

"Automatically tracking the joint angles enables the system to produce a high-quality map even if the camera is moving very fast or if some of the sensor data is missing or misleading," said Matthew Klingensmith who also worked on the project.

The researchers demonstrated their system for SLAM using a small depth camera attached to a lightweight manipulator arm. In using it to build a 3D model of a bookshelf, they found that it produced reconstructions comparable to or better than those of other mapping techniques.

"We still have much to do to improve this approach, but we believe it has huge potential for robot manipulation," Srinivasa said.

Development of robots with human-like dexterity has recently yielded some promising results. A robot hand that can perform extremely dexterous manoeuvres and learn from its own experiences was revealed by University of Washington researchers last week.
