The final frontier

The landing of NASA's Phoenix craft on Mars on 25 May shows that robotics is set to play a leading role in the future of space exploration.

The President of the UK's Royal Society, Lord Rees of Ludlow, recently told the BBC that Europe should give up on sending men and women into space and focus instead on unmanned missions.

He said funding manned exploration is best left to US space agency NASA, whose £9bn ($17bn) budget this year far outstrips the European Space Agency's £2.5bn ($5bn) and dwarfs the UK's expenditure of £220m ($430m).

"Manned missions are hugely more expensive, and the practical case for sending people weakens with every advance in robotics and miniaturisation," said Lord Rees, who is also the Astronomer Royal.

"For historical reasons connected with superpower rivalry, space is one arena where America and Russia have a bigger budget for space than Western Europe," he said. "Whereas in everything else, Western Europe is fully a match for the US and we can be more effective in space if we focus all our budget on robotics, miniaturisation and fabrication, and avoid manned spaceflight."

No preference

He has a point, particularly given the disparity in space budgets, but the reality of space exploration is that there is a case for both manned and unmanned missions - and always will be.

As Dr Ayanna Howard, a space robotics expert at the Georgia Institute of Technology in Atlanta, US, explains: "It depends on what the objectives are. If it's to land on an asteroid then it should definitely be an unmanned mission, but if it's colonising Mars or discovering biosignatures on other planets, you need humans as input."

Yet it's unmanned, robotic missions that are making most of the running and grabbing the headlines at the moment - from the Hubble telescope and the International Space Station's Canadarm2/Dextre system closer to home, to the Mars Exploration Rovers 'Spirit' and 'Opportunity' and the Cassini-Huygens mission to Saturn and its largest moon, Titan. And with the landing on Mars of NASA's Phoenix craft on 25 May, on a mission to investigate the planet's potential habitability, space robotics looks set to continue playing a leading role for some time.

These missions give a good illustration of the two main types of robotics applications in space - arm-based systems for moving objects and mobile robots for exploration, whether surface or aerial.

Introducing robots

All space robots are essentially similar to their terrestrial counterparts in that each has a main controller, sensors, actuators, a power supply and radio communications. If it's a flyby or orbital probe, it will also need some form of attitude control to point the craft reliably and accurately, and keep it stable, through a system of gyroscopes and devices called reaction wheels - electrically driven flywheels used to aim a spacecraft in different directions without firing rockets or jets. These are particularly useful when the craft needs to be rotated by very small amounts, such as keeping Hubble pointed at a star or Cassini pointing at Titan.
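
The principle lends itself to a very small illustration. The following single-axis sketch assumes a simple proportional-derivative control law, with invented inertia and gain values; real flight controllers work in three axes, with quaternions, wheel momentum management and safe modes.

```python
# Minimal single-axis sketch of reaction-wheel-style attitude control.
# A simple PD law is assumed; the inertia and gain values are invented.

DT = 0.1            # simulation time step [s]
I_CRAFT = 500.0     # spacecraft moment of inertia about this axis [kg m^2] (assumed)
KP, KD = 2.0, 40.0  # proportional and derivative gains (assumed)

def step(theta, omega, target):
    """Advance the craft's attitude by one step under a PD control torque."""
    error = target - theta
    control_torque = KP * error - KD * omega   # torque on the craft; the
    omega += (control_torque / I_CRAFT) * DT   # reaction wheel spins the other way
    theta += omega * DT
    return theta, omega

theta, omega = 0.0, 0.0
for _ in range(5000):                          # slew slowly toward a 0.01 rad offset
    theta, omega = step(theta, omega, target=0.01)
print(f"residual pointing error: {0.01 - theta:.2e} rad")
```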

But the similarities end there, for obvious environmental reasons. Dr Howard says: "Broadly, the primary differences between the control systems in space robotics and those in terrestrial robots are that mission requirements can be stringent and drive development of the control system, and individual control systems are designed to achieve specific functional capabilities."

She says the principal stages of designing the controllers are developing an accurate model of the system, building the controller for mission-specified operational ranges, and software and functional testing. "The testing involves integrating available belief states about the environment - if the mission's to another planet, for example, factors such as rock density and surface friction characteristics - and ensuring the system works at the extremes of the mission's constraints," she explains.
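
In practice, that kind of testing amounts to sweeping the assumed environmental ranges and checking behaviour at their extremes. The sketch below is a toy illustration of the idea; the controller stand-in, the ranges and the pass criterion are all invented.

```python
# Toy sketch of testing a controller at the extremes of its assumed
# environmental ranges ("belief states"). Everything here is invented.
from itertools import product

ASSUMED_RANGES = {
    "rock_density": (0.05, 0.40),     # fraction of terrain covered by rocks
    "surface_friction": (0.2, 0.8),   # assumed coefficient of friction
}

def controller_margin(rock_density, surface_friction):
    """Stand-in for a full simulation run; returns a traction safety margin."""
    return surface_friction - 0.5 * rock_density   # hypothetical model

def test_extremes(ranges, min_margin=0.0):
    """Exercise every combination of range extremes and record pass/fail."""
    return {corner: controller_margin(*corner) >= min_margin
            for corner in product(*ranges.values())}

for corner, passed in test_extremes(ASSUMED_RANGES).items():
    print(corner, "PASS" if passed else "FAIL")
```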

The particular requirements for space missions run deeper than this though - down to the very materials used in a robot's components and body. Space technology company EADS Astrium has played a central role in missions such as Cassini-Huygens, the ESA's Mars Express orbiter and the upcoming ExoMars lander.

Robotics expert in its mission systems division Dr Elie Allouis says: "One of the key challenges of using robotics technology in space is what's called the 'outgassing' of materials. This is where Earth-made materials - even metals and glass - contain minute traces of gases that can then be released when they're used in a vacuum, leading to degradation of the material."

Outgassing can also lead to condensation on optical elements or solar cells, which obscures them. This problem affected the narrow-angle camera on the Cassini-Huygens probe and was only corrected by repeatedly heating the system to 4°C. For some years now, NASA has maintained an online list of low-outgassing materials to be used in space.

Temperature control

"Another major challenge are the very steep thermal gradients a robot can experience in space - if one side of it is in the Sun and the other side is in shadow, you can have a temperature difference of more than 200°C. So you need an efficient thermal control system," Dr Allouis says. "And there's the radiation too, especially in low-Earth orbit where it can corrupt electronic signals."

Such extreme conditions naturally mean many of the components have to be bespoke - although that's not to say every new project is designed from scratch. Dr Ralph Cordey, business development manager for space science and exploration at EADS Astrium, says: "We look to use some control systems elements from one project to another. It's a mixture of the new and the proven."

And Dr Howard says: "Off-the-shelf parts can be used to build robotic devices that are functionally the same, for testing and algorithm development here on Earth, but the actual flight hardware has to use space-qualified parts, such as radiation-hardened electronics. In essence, most of a robotic device that is actually deployed into space is specialised."

Specialised they may be, but the components are not necessarily state of the art. Cassini, for example, was designed in the late 1980s and early 1990s, yet it didn't arrive at Saturn until 2004. And the computer in each of the Mars Exploration Rovers, which were launched in mid-2003 and landed on the red planet in January 2004, has what is by current standards a modest 128MB of RAM and 256MB of flash memory.

But these time lags in the use of technology are inevitable, not only because of the timeframe between design and launch - although that's getting shorter all the time - but because the technology itself takes time to develop. "Selecting technology for a mission can take four or five years," says Dr Cordey. "On Earth you can change your computer every few years, but you can't do that in space - and you also have to make sure it will survive in space."

Mission-dependent testing

Which brings us to the testing - and more testing. The nature of the tests will, of course, depend on the nature of the mission. Commonly, one of the most important is a solar thermal vacuum test, which, as its name suggests, involves putting a craft or its components in a vacuum chamber and subjecting them to simulated solar heat and radiation. Tests may also cover corrosion and low-g or zero-g conditions, using a gantry-based gravity-compensation set-up or even a swimming pool.

It's accepted that this part of the development process is difficult, and that test procedures can always be improved. Dr Howard recently sat on a US National Research Council committee that was asked to review NASA's Exploration Technology Development Programme, and she says its report found that "in a number of areas, mission-critical tests - that is, a system/subsystem model or prototype demonstration in an operational environment - are not included in the programme, usually as a result of a lack of time (scheduling) and/or funding to carry out necessary flight tests or to develop needed test facilities".

But as Dr Cordey points out: "You can't simulate everything - ultimately, you just have to get up there. It's a question of asking what the key tests are, and a case of gathering more information about an environment to feed back into test procedures."

Despite every care, as we know, things can still go wrong - the Mars Polar Lander in 1999, the Contour comet mission in 2002 and Beagle 2 in 2003 being notable examples - and once the robot is in space, or on a moon or another planet, it is clearly no easy task to fix a fault. So, to minimise the chance of failure, systems are designed with redundancy and fault recovery.

Dr Allouis says: "Dual redundancy is built in - two systems or even, in the case of the current Mars Exploration Rovers, two machines. We look at a system in terms of gradual degradation, and design it such that even if part of it fails it can still carry out some functions."

For example, the gyroscopes can fail one at a time through wear, but the remaining gyros can compensate. And, says Dr Howard: "You can design a robot with six independently drivable motors, for example, so that if one of the motors fails, you still have five that are functioning."
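
The graceful-degradation idea is simple to sketch. The hypothetical example below just shares a torque demand across whichever of six drive motors are still reported healthy.

```python
# Hypothetical sketch of graceful degradation with redundant drive motors:
# a torque demand is shared across whichever motors are still healthy.

def distribute_torque(demand_nm, healthy):
    """Split a torque demand evenly across healthy motors; None if all have failed."""
    working = [name for name, ok in healthy.items() if ok]
    if not working:
        return None                                  # total loss of drive
    return {name: demand_nm / len(working) for name in working}

motors = {f"motor_{i}": True for i in range(6)}
print(distribute_torque(60.0, motors))               # 10 Nm each across six motors

motors["motor_3"] = False                            # one motor fails...
print(distribute_torque(60.0, motors))               # ...the other five carry 12 Nm each
```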

Finding alternatives

Fault recovery, by contrast, means designing alternative solutions into the system. The robot might carry spare motors, say, so that if one of them fails it can be replaced. Cassini, for example, carries a spare reaction wheel.

There is also sometimes the option of uploading a software 'patch' to correct a problem, but this brings with it the issue of communications and the difficulty of maintaining them effectively over such huge distances.

A signal takes anything from a few minutes to more than 20 minutes to reach Mars, depending on where the two planets are in their orbits, and about an hour and a quarter to reach Saturn, making direct control impossible. Real-time control needs a response time of about 0.2 seconds, says Dr Allouis, so it is almost achievable close to Earth - but Dr Howard says it's out of the question even for planned missions to the Moon. "Although the time delay between Earth and the Moon is only about two-and-a-half seconds, it's been shown that a robot on its surface could still not be reliably controlled if it was travelling faster than about 0.2mph," she says.
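
Those delays follow directly from the distances involved, as a quick back-of-the-envelope calculation shows; the distances below are approximate and vary with orbital positions.

```python
# Back-of-the-envelope one-way and round-trip signal delays.
C_KM_S = 299_792.458  # speed of light [km/s]

distances_km = {
    "Moon":             384_400,
    "Mars (closest)":   55_000_000,
    "Mars (farthest)":  400_000_000,
    "Saturn (typical)": 1_430_000_000,
}

for body, d in distances_km.items():
    one_way_s = d / C_KM_S
    print(f"{body:17s} one-way {one_way_s:8.1f} s (~{one_way_s / 60:5.1f} min), "
          f"round trip {2 * one_way_s:8.1f} s")
```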

So robots for space applications are designed with a number of autonomous behaviours. Navigation is obviously essential, whether it's for flybys or surface exploration, but autonomy also allows robot systems to carry out tasks such as perception processing, understanding the operating environment, motion planning and control, attention allocation, anticipation and activity planning.

Insect-like

Many mobile space robot controllers use a layered system of control modules based on insect behaviour. As the software is being developed, layers of behaviour-generating modules are added one at a time, each of which connects sensing to action. The modules all run in parallel whenever triggered by the relevant sensors.

To prevent conflicts arising between behaviours that could be triggered at the same time, the modules are organised into a hierarchy. Higher-level behaviours can temporarily suppress lower-level ones, but when the higher-level behaviours are no longer being triggered, the lower-level ones resume control.
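
The sketch below gives the flavour of such a layered, behaviour-based controller. It is deliberately simplified - the behaviours and sensor readings are invented, and real implementations run the modules concurrently with explicit suppression links - but it shows how a higher-priority behaviour overrides those beneath it.

```python
# Minimal sketch of a layered, behaviour-based controller. Each behaviour
# couples a sensor trigger to an action, and a higher-priority layer
# suppresses the ones below it whenever its trigger fires.

class Behaviour:
    def __init__(self, name, triggered, action):
        self.name, self.triggered, self.action = name, triggered, action

# Ordered from highest priority (most safety-critical) downwards.
LAYERS = [
    Behaviour("avoid_obstacle",
              triggered=lambda s: s["obstacle_range_m"] < 1.0,
              action=lambda s: "turn away from obstacle"),
    Behaviour("recharge",
              triggered=lambda s: s["battery_level"] < 0.2,
              action=lambda s: "stop and angle solar panels at the Sun"),
    Behaviour("drive_to_goal",
              triggered=lambda s: True,                    # always willing to run
              action=lambda s: "drive toward the waypoint"),
]

def control_step(sensors):
    """The highest-priority triggered behaviour wins; lower layers are suppressed."""
    for behaviour in LAYERS:
        if behaviour.triggered(sensors):
            return behaviour.name, behaviour.action(sensors)

print(control_step({"obstacle_range_m": 5.0, "battery_level": 0.9}))  # drive_to_goal
print(control_step({"obstacle_range_m": 0.4, "battery_level": 0.9}))  # avoid_obstacle
```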

The Mars Exploration Rovers are a good example of this. When they are given a command telling them to go to some point, they evaluate the terrain using stereo imaging to choose the best way to get there while avoiding any obstacles they identify.

Their auto-navigation system takes pictures of the nearby terrain, after which 3D terrain maps are generated by the rover software. Traversability and safety are then determined from the height and density of rocks or steps, any excessive tilts and roughness of the terrain. Dozens of possible paths are then considered before the rover chooses the shortest, safest path toward the programmed point.
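
A toy version of that arc evaluation might look like the sketch below; the candidate paths, terrain figures and safety thresholds are invented, and the rovers' actual navigation software is far more elaborate.

```python
# Toy version of the arc evaluation described above: score each candidate
# path for safety, then pick the shortest of the safe ones.

CANDIDATES = [
    # (name, length_m, max_rock_height_m, tilt_deg, roughness 0..1)
    ("arc_left",   4.2, 0.10,  6.0, 0.15),
    ("arc_centre", 3.5, 0.35, 12.0, 0.40),
    ("arc_right",  4.8, 0.05,  4.0, 0.10),
]

MAX_ROCK_M, MAX_TILT_DEG, MAX_ROUGHNESS = 0.25, 10.0, 0.30  # assumed limits

def is_safe(rock, tilt, rough):
    return rock <= MAX_ROCK_M and tilt <= MAX_TILT_DEG and rough <= MAX_ROUGHNESS

safe_paths = [(length, name) for name, length, rock, tilt, rough in CANDIDATES
              if is_safe(rock, tilt, rough)]

if safe_paths:
    length, choice = min(safe_paths)       # shortest of the safe candidates
    print(f"drive {choice} ({length} m)")
else:
    print("no safe path - stop and wait for instructions")
```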

Taking this concept of autonomy further is the use of artificial intelligence (AI). For example, downloading the huge volumes of data about the surface of Mars that the ESA's Mars Express orbiter sends back was until recently managed with human-operated scheduling software - a tedious and time-consuming task that never eliminated the occasional, and permanent, loss of precious information.

So AI researchers at the Institute for Cognitive Science and Technology, in Italy, and mission planners and computer scientists at the ESA's European Space Operations Centre developed a tool called MEXAR2. This considers the variables that affect data downloading - including the overall science observation schedule for the orbiter's instruments - then projects which on-board data packets might be lost because of memory conflicts. It then optimises the data download schedule and generates the commands needed to implement the download.
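
As a rough illustration of the general idea - not MEXAR2's actual algorithm - the sketch below projects which buffered packets are most at risk of being overwritten and greedily fills an assumed downlink window with them first. Every figure in it is invented.

```python
# Toy illustration of downlink planning: most at-risk packets are
# scheduled first, until the assumed ground-station pass is full.

PACKETS = [
    # (id, size_Mbit, hours_until_buffer_overwrite)
    ("thermal_scan_041", 120, 3.0),
    ("stereo_image_202", 400, 1.5),
    ("spectrometer_017",  80, 6.0),
    ("stereo_image_203", 400, 9.0),
]

LINK_MBIT_PER_HOUR = 200.0   # assumed downlink rate
WINDOW_HOURS = 4.0           # assumed ground-station pass length

def plan_downlink(packets, capacity_mbit):
    """Greedy schedule: packets closest to being overwritten go first."""
    plan, used = [], 0.0
    for pid, size, hours_left in sorted(packets, key=lambda p: p[2]):
        if used + size <= capacity_mbit:
            plan.append(pid)
            used += size
    return plan, used

plan, used = plan_downlink(PACKETS, LINK_MBIT_PER_HOUR * WINDOW_HOURS)
print("downlink order:", plan,
      f"({used:.0f} of {LINK_MBIT_PER_HOUR * WINDOW_HOURS:.0f} Mbit)")
```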

MEXAR2 has halved the mission planning team's workload, and because it optimises bandwidth used to receive data on Earth, it has allowed the ESA to free up ground station time for other missions.

The reason this was possible - and successful - is that the spacecraft was designed with greater capability than was strictly needed. "It's a case of 'now we've fulfilled the mission requirements, let's try some autonomous behaviour'," says Dr Allouis. "Also, not all robots are designed for a particular function, so they can be adapted for new functions."

Autonomous construction

The growing use of AI is certainly a major strand for future developments, with robots performing a widening range of tasks too delicate, too dirty or too dangerous for humans. Dr Howard says: "I believe that, ten years or so from now, space robotics will be capable of autonomous construction of habitats and space structures, autonomous science exploration - 'go forth and discover, then phone home when you find something' - and in human assistance, as in acting as a partner that helps the astronaut in achieving their mission objective, such as fetch-and-carry."

And Dr Allouis says: "In-space assembly is certainly one interesting aspect - especially of spacecraft, where it could be done in such a way that makes them sterile so they don't contaminate the target environment. And for human assistance there are efforts to make robotic systems at least as dextrous as humans."

Dr Howard backs this up, saying: "Human-robotic systems are one of the few technologies that is aligned with future missions. In fact, in our NRC report, it was stated that 'the technology is generally robust to changes to the architecture, for example in exploration missions to Mars'."

But as well as advanced robotics and fabrication, Lord Rees talked of a third development - miniaturisation - and on this Dr Allouis has a prediction. "I think miniaturisation will continue to the point where we could be using swarms of small or even nano-robots to scout for resources on the Moon or Mars," he says.

The basic rule for any design - robotics or otherwise - is that form follows function, but whatever form they take and task they perform, the use of robotics in space will grow in line with our aspirations for exploration. And as Dr Howard says, that is something we will never lose. "It's human nature to explore, and robots should function as the vessels to help us do this, just as ships allowed earlier explorers to travel across the seas. I think if we give up on that dream we're giving up on who we are."
