Feeling the way

Simplicity provides the focus for a new generation of robots as researchers try to work out how few sensors they can get away with.

At the Artificial Life XI conference in Winchester, UK, last month, engineers from Roke Manor Research demonstrated Dora, a robot that dispenses with expensive - albeit accurate - laser sensors and makes do with a single camera. Developed by Estelle Tidey of Roke, Dora has to work out where obstacles are by moving around the room.

On its own, a single image provides little sense of depth. The robot can pick out edges easily enough using conventional image processing; where its software goes a stage further is in using motion to continually refine its view of the world. It goes from a flat list of points to a 3D 'point cloud' that it assembles into a model of the objects around it. A box or a table leg becomes a blob of points marking places where Dora cannot go.
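
The article does not spell out Roke's algorithm, but recovering depth from a single moving camera in this way is essentially structure from motion. The sketch below illustrates the idea using OpenCV; the feature detector, the camera matrix K and the two-frame simplification are illustrative assumptions, not details of Dora's software.

```python
import cv2
import numpy as np

# Illustrative camera intrinsics -- a real system would calibrate these.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def point_cloud_from_motion(frame_a, frame_b):
    """Turn two views from one moving camera into a sparse 3D point cloud:
    find and match feature points, estimate how the camera moved between
    the frames, then triangulate the matches into 3D."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, desc_a = orb.detectAndCompute(frame_a, None)
    kp_b, desc_b = orb.detectAndCompute(frame_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    pts_a = np.array([kp_a[m.queryIdx].pt for m in matches], dtype=np.float64)
    pts_b = np.array([kp_b[m.trainIdx].pt for m in matches], dtype=np.float64)

    # Recover the camera's rotation R and translation t between the frames.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

    # Projection matrices: frame A at the origin, frame B displaced by (R, t).
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([R, t])

    # Triangulate the inlier matches; the result is homogeneous (4 x N).
    inliers = mask.ravel() > 0
    points_h = cv2.triangulatePoints(P_a, P_b, pts_a[inliers].T, pts_b[inliers].T)
    return (points_h[:3] / points_h[3]).T   # N x 3 point cloud
```

Repeated over successive frames, with the recovered poses chained together, this accumulates the kind of point cloud Dora forms as it moves; the scale of the translation is ambiguous from images alone, so a real system needs odometry or another cue to pin it down.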

According to Tidey, the point clouds can be denser, allowing reliable detection of bland features such as plain walls that might trip up camera-based sensors that do not use motion to resolve what is in frame.

The robot's software uses the point cloud to build a map of the room, divided into 2D grid cells: obstacles appear as black cells, free floor space as light grey. Roke is working to improve the software because the mapped free floor space is often smaller than it is in reality: misplaced points in the cloud can narrow the apparent gaps between obstacles, trapping the robot.
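
The article gives only the broad picture of the mapping step, but collapsing a point cloud into a 2D occupancy grid of this kind is straightforward. A rough sketch follows; the cell size, grid dimensions and floor-height threshold are invented for illustration rather than taken from Roke's software.

```python
import numpy as np

def occupancy_grid(points, cell_size=0.05, grid_dim=200, floor_height=0.02):
    """Collapse a 3D point cloud (N x 3, in metres, robot at the origin)
    into a 2D grid: cells containing points above floor level are marked
    as obstacles (1); everything else is left as free space (0)."""
    grid = np.zeros((grid_dim, grid_dim), dtype=np.uint8)
    half = grid_dim // 2
    for x, y, z in points:
        if z < floor_height:                 # ignore points on the floor itself
            continue
        row = int(y / cell_size) + half      # grid cell containing this point
        col = int(x / cell_size) + half
        if 0 <= row < grid_dim and 0 <= col < grid_dim:
            grid[row, col] = 1               # somewhere Dora cannot go
    return grid
```

The narrowing problem Roke describes follows directly from a scheme like this: a single misplaced point is enough to flip a free cell to 'obstacle' and close off a gap the real robot could fit through.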

Although the main applications for a robot like Dora lie in civilian rescue and military reconnaissance, its use of relative motion, and the parallax changes that motion causes in the images it sees, shows how industries such as movie special effects and machine vision are moving together (see 'Fix it in post', p36).

At the Free University of Brussels, PhD student Christos Ampatzis and colleagues are looking for something equally minimal. They are using simple actuators and sensors to let robots signal to each other and, ultimately, cooperate on tasks.

"When you get robots to attach to one another, you can go beyond the capabilities of a single robot. One might fall into a hole. By attaching to others, it can get to the other side. We are trying to find the minimal conditions for self-assembly," says Ampatzis. "We are asking: do we need very complicated information, or not?"

The robots have simple infrared and proximity sensors and can signal to each other using sounds or a ring of LEDs around their midriff. The Brussels researchers couple those facilities with neural-network software running on a 400MHz ARM processor so that the robots can learn how to signal to one another.

A team from the University of Southampton set out to build the cheapest robot swarm it could, in another attempt to experiment with inter-robot communication. The robots, which cost less than £25 apiece, have bodies made from PCBs, move using the vibration motors found in mobile phones, and sense their surroundings with infrared sensors.

Alexis Johnson of the University of Southampton explains: "When we started this project, we were warned against doing swarm robotics because they are expensive. So we designed them to be cheap. They don't have chassis: they are fully assembled by the PCB manufacturer."

The Southampton team started off with foraging experiments, using brightly coloured tokens as food. "Now, we are spreading tasks within a swarm," says Johnson, as they work out how simple cooperating robots can be.
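
Johnson's description is brief, but a foraging controller for robots this simple can be little more than a small state machine: wander, steer towards anything that looks like a token, carry it home, repeat. The sketch below is a guess at that structure; the hardware interface (robot_hw, read_ir, token_seen, set_motors and so on) is hypothetical, not the Southampton team's API.

```python
import random

# Hypothetical hardware interface: the real robots sense with IR and move
# on phone vibration motors, but these function names are invented here.
from robot_hw import read_ir, token_seen, at_nest, grab, drop, set_motors

SEARCH, HOME = "search", "home"

def forage_step(state):
    """One tick of a minimal foraging controller: wander until a token
    (the 'food') is spotted, then carry it back to the nest and drop it."""
    left, right = read_ir()                  # proximity readings, 0.0-1.0

    if max(left, right) > 0.8:               # obstacle close: turn away from it
        if left > right:
            set_motors(0.3, -0.3)
        else:
            set_motors(-0.3, 0.3)
        return state

    if state == SEARCH:
        if token_seen():
            grab()
            return HOME
        turn = random.uniform(-0.2, 0.2)     # random walk, mostly forward
        set_motors(0.5 + turn, 0.5 - turn)
        return SEARCH

    # state == HOME: head for the nest and deposit the token.
    if at_nest():
        drop()
        return SEARCH
    set_motors(0.5, 0.5)                     # placeholder for nest-homing
    return HOME
```

Spreading tasks within the swarm, as Johnson describes, could then amount to each robot switching between simple controllers like this one depending on what its neighbours are doing.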
