
Space: the new AI frontier?
In today’s media-rich environment the concept of artificial intelligence is hard to miss, but its role in our space-based systems is easy to overlook. In fact, for some applications, it is already embedded.
In recent years, the concept of artificial intelligence (AI) has emerged from the annals of science fiction into everyday life, as society grapples with a range of technological issues from cyber security to driverless cars. Along with other terms, such as machine learning, neural networks and ‘the Turing test’, AI has become a contemporary media buzzword in both fact and fiction – despite a general lack of understanding of what AI really means to the average citizen of Earth.
Out in space, however, the use of AI is arguably more mature and better understood, at least for current applications, while its integration into future manned space exploration is pretty much a ‘no-brainer’, along with sibling technologies such as robotics, telepresence and autonomous systems.
Most of us, if asked to consider the link between AI and space, would probably think of HAL, the miscreant computer from ‘2001: A Space Odyssey’, rather than any real-life application. Certainly, it seems to tick the box for any current definition of AI, which typically involves ‘computer systems able to perform tasks normally requiring human intelligence’. The fact that HAL extends its task list to murder, which arguably requires ‘human intelligence’, is another matter entirely.
Further consideration of artificial intelligence in space might dredge up the eponymous Star Trek ‘computer’, which was actually more of an online encyclopaedia, or Red Dwarf’s equally disembodied ‘Holly’ (though perhaps intelligence is a misnomer in this case). Others might remember embodied examples of AI, such as the distracted but amiable C3PO from ‘Star Wars’ or the somewhat less amiable Terminator.
Given the development of computer systems and robotics for domestic applications here on Earth, it would be surprising if crews of future Mars missions were denied similar, but space-qualified, versions of the technologies. In fact, the precursors to these systems are already operating in Earth’s orbit and beyond.
If AI has a role to play in space, one would expect the American space agency Nasa to be involved, if not leading the charge. Indeed, Dr Steve Chien, senior research scientist at Nasa’s Jet Propulsion Laboratory (JPL) and technical group supervisor of its Artificial Intelligence Group, confirms that artificial intelligence “is playing an increasing role not only in our everyday lives but also in the space sector, where AI has the potential to revolutionise almost every aspect of space exploration”.
In fact, says Chien, “AI software has been used to operate the Earth Observing-1 (EO-1) spacecraft for more than a dozen years”. Launched in 2000 and decommissioned in 2017, the EO-1 satellite was designed to demonstrate a number of breakthrough technologies in the Earth observation field. Thus, in 2003, a software suite known as the Autonomous Sciencecraft Experiment (ASE) was uploaded to EO-1 to demonstrate onboard image-campaign planning and targeting using machine-learning techniques such as pattern recognition.
As Chien explains, the ASE software (for which he was principal investigator) enabled EO-1 to analyse imagery onboard, “based on what it saw”, as opposed to relying on human imagery analysts on the ground. For example, it was able to reject predominantly cloudy images, schedule repeat observations for a later date and retarget the spacecraft’s imagers at the appropriate time. “More than 60,000 images have been collected under AI control,” says Chien.
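The decision cycle Chien describes is simple to sketch in code. The following Python fragment is purely illustrative: the function names, the 30 per cent cloud threshold and the toy brightness test are assumptions made for the sake of the example, not details of the real ASE software.

```python
# Illustrative sketch of the onboard observe-assess-reschedule cycle
# described for EO-1's ASE software. All names, numbers and the toy
# brightness test are invented for illustration, not flight code.

CLOUD_LIMIT = 0.30  # reject images estimated to be more than 30% cloud


def estimate_cloud_fraction(image):
    """Stand-in for an onboard classifier: call bright pixels 'cloud'."""
    return sum(1 for pixel in image if pixel > 0.8) / len(image)


def observation_cycle(targets, acquire, reschedule, downlink):
    for target in targets:
        image = acquire(target)                    # point and image the site
        if estimate_cloud_fraction(image) > CLOUD_LIMIT:
            reschedule(target)                     # too cloudy: try again later
        else:
            downlink(image)                        # worth sending to the ground


# Toy run: one clear scene, one cloudy scene.
scenes = {"etna": [0.2, 0.3, 0.1, 0.4], "iceland": [0.9, 0.95, 0.85, 0.2]}
observation_cycle(scenes,
                  acquire=lambda t: scenes[t],
                  reschedule=lambda t: print(f"reschedule {t}"),
                  downlink=lambda img: print("downlink image"))
```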
This innovation highlights a dichotomy in space systems, which are seen as ‘high-tech’ but often rely on outdated technology because of the need for high reliability.
Applications of AI
The range of space-related applications that incorporate artificial intelligence is broad and ever growing:
• AI has been used by the Space Telescope Science Institute (STScI) for the long-term scheduling of almost 200,000 Hubble Space Telescope observations since 1993. More recently, Nasa has applied AI to scheduling for other Earth-orbiting telescopes, such as Chandra, Spitzer and Fuse, and for the Lunar Atmosphere and Dust Environment Explorer (LADEE) and the European Space Agency’s Rosetta mission to land a probe on a comet.
• The European satellite operator SES is considering using AI to simplify the operation of its fleet and the “tens of thousands of telemetry signals” received on a continual basis from its satellites. AI and machine learning can be used to prioritise telemetry for human operators, allowing them to concentrate on the most important matters.
• In the Earth imaging field, CosmiQ Works (a laboratory established by US intelligence agencies to leverage the innovation of commercial space start-ups) holds competitions, called SpaceNet, that offer cash prizes for the development of automated methods to detect road networks or other landmarks from high-resolution satellite imagery.
• An AI-based ‘astronaut assistant’ known as CIMON (Crew Interactive Mobile CompanioN) was developed and built by Airbus for the German Aerospace Centre (DLR) and was demonstrated on the International Space Station in 2018.
• In 2017, deep neural networks were trained to classify simulated radio-telescope signals with up to 95 per cent accuracy, offering a useful tool in the search for extra-terrestrial intelligence (SETI).
While it is easy to contemplate trialling AI on a government-owned satellite, commercial satellite buyers want tried-and-tested systems for their multi-million-dollar investments. Moreover, they must be able to insure their assets to guarantee bank loans and other financing. So, in a market where insurance underwriters are either unwilling to ‘fund research and development’ or charge higher premiums to do so, heritage systems hold sway.
As a result, since the early days of the Space Age, space agencies have launched demonstration satellites for applications such as communications and Earth observation, before the technologies were ‘commercialised’ for privately funded satellites.
The expectation is that AI systems, such as those demonstrated by EO-1, will find their way onto commercial satellites in the near future. Chien sees this as a revolution for Earth imaging systems, which currently rely mainly on ground-based analysis and intervention: “People could interact with spacecraft in a more natural way,” he says, tasking the satellite “from anywhere with the internet” using a smartphone app.
Another innovation demonstrated by EO-1 is Sensorweb, an autonomous network that links “scores of spacecrafts, ground observatories, air and marine assets...acquiring thousands of images without any human intervention”, explains Chien.
The system is used to monitor volcanoes, flooding, wildfires and other phenomena. For example, Chien says that by linking ground-based sensors with spacecraft overhead, the network has measured thermal emissions from Mount Etna thousands of times over a dozen years. The numbers prove the concept: a typical, non-AI-based system delivers less than 1 per cent of images with active thermal signatures, while Sensorweb has a hit rate better than 35 per cent.
A goal for the future is to demonstrate the autonomous tasking, by a given satellite, of other satellites in a system or constellation using AI. In other words, based on what the given satellite ‘sees’, it tells the others what to target – the control authority has effectively been transferred from the human to the software system. “That’s the interesting stuff!” declares Chien.
While the concept of orbiting satellites ‘telling each other what to do’ might evoke the likes of the self-preserving Skynet AI in the ‘Terminator’ franchise, it does appear to meet that contemporary AI definition of ‘computer systems able to perform tasks normally requiring human intelligence’.
However, it’s also clear that what we regard as artificial intelligence in one decade may become absorbed – one might say assimilated – into everyday technology in the next. By way of illustration, a decade or so ago we might have considered facial recognition and language translation to be examples of ‘machine intelligence’. Today, however, very few tech-savvy individuals would regard an app on their phone as a form of intelligence, however clever it seems. The same goes for the animated voice of the satnav in your car.
Although AI as a formal field of research dates back to the 1950s, its true empowerment was – and still is – dependent on improvements in computing capabilities. The terminology has developed in tandem: in the 1960s we had the first ‘expert systems’, then ‘neural networks’ in the 1970s; today, machine learning has become ‘deep learning’ and we are back to ‘artificial intelligence’, albeit in a more modern form.
As far as the satellite application of remote sensing is concerned, the use of neural networks as an analysis tool for geographic information systems (GIS) dates back at least to the early 1990s. In a paper from 1993, Graeme Wilkinson of the European Commission’s Joint Research Centre reported: “The number of applications for neural networks in remote sensing and GIS data is growing year by year.” Experimental techniques had already been verified for the mapping of agriculture, forest ecosystems and urban growth as well as cloud recognition.
In simple programming terms, a decision tree involves comparing data and asking a sequence of questions, each answer leading down one branch of the tree or another (for example, ‘is the pixel predominantly black or white?’). A neural network works differently, passing data through layers of weighted connections, but the principle of learning is shared.
Either way, the system ‘learns’ by analysing multiple example data sets and adjusting the ‘weighting’ of its decisions to the point where it appears to have the intelligence to recognise and distinguish certain natural features. A simple example might be the ability to recognise the difference between healthy and diseased crops based on their infrared signatures (a common satellite application).
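As a toy illustration of the idea (and emphatically not EO-1’s actual code), a hand-written decision ‘tree’ for the crop example might look like the Python below. All the spectral thresholds are invented; the point of machine learning is that such thresholds are found automatically from labelled examples rather than written by hand.

```python
# Toy hand-written decision tree for classifying a crop pixel from two
# spectral measurements. The thresholds are invented for illustration;
# a learning algorithm's job is to find such thresholds (the 'weighting')
# automatically from labelled example data.

def classify_crop_pixel(near_infrared, red):
    """Healthy vegetation reflects strongly in the near-infrared and
    absorbs red light, so the NIR/red contrast separates the classes."""
    ndvi = (near_infrared - red) / (near_infrared + red)  # vegetation index
    if ndvi < 0.2:
        return "bare soil"          # little vegetation signal at all
    elif ndvi < 0.5:
        return "stressed/diseased"  # weak infrared signature
    else:
        return "healthy crop"       # strong infrared signature


print(classify_crop_pixel(near_infrared=0.55, red=0.08))  # -> healthy crop
```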
For the EO-1 spacecraft, the decision-tree concept was extended to so-called ‘random decision forests’, which enabled image pixels to be classified for cloud screening: the application that allowed the Sciencecraft software to reject cloudy images and reschedule observations.
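A random forest simply trains many such trees on random subsets of the data and lets them vote. Assuming the scikit-learn library is available, a minimal cloud-screening sketch might look like this, with synthetic data standing in for real labelled pixels:

```python
# Minimal random-forest cloud screen on synthetic pixel data.
# Real systems would train on labelled spectral bands; here two fake
# 'band' values per pixel stand in for the measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: clouds are bright and cold (toy assumption).
n = 1000
clouds = rng.normal([0.8, 0.2], 0.1, size=(n, 2))   # [brightness, temperature]
ground = rng.normal([0.3, 0.7], 0.1, size=(n, 2))
X = np.vstack([clouds, ground])
y = np.array([1] * n + [0] * n)                     # 1 = cloud, 0 = clear

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new image's pixels and decide whether to keep the image.
pixels = rng.normal([0.75, 0.25], 0.15, size=(500, 2))
cloud_fraction = forest.predict(pixels).mean()
print(f"cloud fraction: {cloud_fraction:.0%}",
      "-> reschedule" if cloud_fraction > 0.3 else "-> downlink")
```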
Now that multispectral sensors have evolved into hyperspectral imagers that can distinguish more than 100 individual spectral bands (far more than the human eye/brain can), the potential of machine learning becomes clearer: it can reveal features and patterns in an otherwise undecipherable dataset.
The classification of individual pixels in images, using a mathematical technique known as Bayesian thresholding, is something that excites practitioners like JPL’s Chien: “This is the beauty of machine learning,” he says. Such machine-learning techniques were used on EO-1 to detect sulfur emissions from the Borup Fjord glacier on Canada’s Ellesmere Island, despite the amounts being vanishingly small and “at the limit of the signal to noise”.
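In its simplest form, Bayesian classification asks: given this pixel value, which class is more probable, based on how each class typically looks and how often it occurs? A minimal single-band sketch, with every number invented for illustration, might run as follows:

```python
# Minimal Bayesian threshold for a single-band pixel value.
# Each class is modelled by a Gaussian likelihood and a prior probability;
# Bayes' rule gives the posterior, and we pick the more probable class.
# All means, spreads and priors are invented for illustration.
import math


def gaussian(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))


def classify(pixel, prior_signal=0.01):
    # Invented class models: common 'background' vs a rare, faint 'signal'.
    p_signal = gaussian(pixel, mean=0.30, std=0.05) * prior_signal
    p_background = gaussian(pixel, mean=0.20, std=0.05) * (1 - prior_signal)
    posterior = p_signal / (p_signal + p_background)
    return ("signal" if posterior > 0.5 else "background"), posterior


print(classify(0.42))  # a bright pixel: posterior strongly favours 'signal'
```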
The reason for the interest in this technique beyond the field of Earth science is its application to planetary science and astrobiology – indeed, the title of Chien’s special presentation to the 2018 International Astronautical Congress, in Bremen, Germany, was ‘The growing role of artificial intelligence in space exploration and the search for life beyond Earth’.
“AI is critical to future mission concepts to search for life,” says Chien, describing a proposed Europa submersible: a submarine designed to operate under the ice thought to cover the ocean of Jupiter’s moon of that name. Because it can take the best part of an hour to get a command to a spacecraft at Jupiter (depending on the relative positions of Earth and Jupiter), any such spacecraft will need a good deal of autonomy, and it might as well be ‘intelligent autonomy’. So the chances are that any Europa submersible will be responsible not only for its own navigation, but also for its real-time scientific investigations.
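The arithmetic behind that hour is straightforward: a radio command travels at the speed of light, which covers one astronomical unit (AU) in about 8.3 minutes, and the Earth-Jupiter distance swings between roughly 4.2 and 6.2 AU as the two planets orbit the Sun.

```python
# One-way light time from Earth to Jupiter at closest and furthest approach.
AU_KM = 149_597_870.7        # one astronomical unit in kilometres
C_KM_S = 299_792.458         # speed of light in km/s

for label, distance_au in [("closest", 4.2), ("furthest", 6.2)]:
    minutes = distance_au * AU_KM / C_KM_S / 60
    print(f"{label}: {minutes:.0f} minutes one way")
# closest: ~35 minutes, furthest: ~52 minutes -- and double that
# for a command-and-response round trip.
```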
Martian cave explorers
Missions designed for the remote exploration of Martian caves are expected to use cooperative AI techniques. Because cave exploration rovers are likely to depend on batteries, mission durations could be measured in days – which means there isn’t time to wait for instructions from Earth – so the rovers will be designed to be completely autonomous.
Engineers have also proposed the concept of rovers working together as a team, with some driving further into the cave while others remain behind and save their energy for relaying data to the cave entrance and back to the lander. AI would also allow the system as a whole to recover from the loss of a rover by redeploying the remaining assets.
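One way to picture that recovery behaviour is as a radio relay chain that respaces itself. The toy model below assumes a 100-metre radio range and a straight cave, both invented numbers, purely to illustrate how losing a rover shortens the reach of the chain.

```python
# Toy model of a rover relay chain in a Martian cave. If a rover fails,
# the survivors are respaced evenly so the radio link from the cave
# entrance (position 0) still reaches the deepest explorer.
# The range and geometry are invented for illustration.
RADIO_RANGE_M = 100.0


def redeploy(n_rovers, target_depth_m):
    """Place n_rovers so each hop stays within radio range, returning
    their positions and how deep the chain can actually reach."""
    reachable = min(target_depth_m, n_rovers * RADIO_RANGE_M)
    spacing = reachable / n_rovers
    return [spacing * (i + 1) for i in range(n_rovers)], reachable


positions, depth = redeploy(n_rovers=4, target_depth_m=350)
print(positions, depth)        # four rovers: hops of 87.5 m, reach 350 m

positions, depth = redeploy(n_rovers=3, target_depth_m=350)
print(positions, depth)        # one rover lost: chain now reaches only 300 m
```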
Finding life beyond Earth is one thing, but what about life from Earth finding its way into the cosmos? Astronauts have been confined to low Earth orbit since 1972, when the last Apollo crew visited the Moon, but Nasa – and some NewSpace entrepreneurs – have plans to send people back to the Moon and onward to Mars.
Although Nasa is using AI-based test equipment to prepare its new Orion spacecraft for its forthcoming missions to deep space – and eventually, one hopes, to Mars – when it comes to on-board hardware, we are back to the high-tech and reliability dichotomy.
Firstly, safety is of paramount importance for manned missions, so most systems are required to demonstrate ‘heritage’ (which means they are not the ‘latest model’). Secondly, the spacecraft themselves take many years to design and build, so the design of on-board systems must be frozen, often years before first launch; this is why much of the International Space Station is operated from laptops running the latest software, rather than some sort of mainframe. Thirdly, the chips in computers destined for deep space must be radiation-hardened, which takes time and money and limits the choice.
That being said, the role of AI in manned space missions is certain to increase as its reliability improves, and it may even become the norm for those long, boring journeys to Mars, when the attention and efficiency of the human crew are bound to decrease.
In the age of the glass cockpit where most instrumentation is virtual – and with the inherent expectation of software updates – improvements or upgrades are a real possibility. As long as the old system is retained as a backup, of course! (Imagine the moment of entry into the Martian atmosphere, seven minutes from touchdown, with the software ‘preparing to update’.)
While AI certainly has its place in space, those intrepid explorers won’t have to worry any time soon about a HAL 9000 saying, “I’m sorry, Dave, I’m afraid I can’t do that”.
Science on Mars
Because of long signal travel times, the importance of autonomy for spacecraft beyond Earth orbit has long been recognised. For example, the Mars rover Curiosity (aka the Mars Science Laboratory, or MSL) uses AI to target the laser of its ChemCam instrument, which can identify the composition of a rock sample from up to 7 metres away using a technique called laser-induced breakdown spectroscopy.
According to JPL’s Steve Chien, the AI analyses a wide-field image of an area to decide on potential targets, then “points the laser at them and fires it”. The importance of getting this right is self-evident, he adds, “because it’s a very bad day if you shoot yourself with that laser, but a fantastic way to more effectively do science”.
Although the AI is not entirely on its own, because initial camera pointing is determined by ground controllers, it does have a fair degree of autonomy in that software detects ‘candidate rocks’ in the image and conducts ‘target filtering’ according to pre-set properties. The AI then prioritises targets, determines the central aim point and can repeat the process for multiple targets without external intervention.
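That pipeline (detect candidates, filter them, prioritise, then aim) can be sketched in a few lines. The toy version below, which is not the flight software, finds bright ‘rocks’ in a synthetic image using invented criteria and returns an aim point for the ‘laser’:

```python
# Toy target-selection pipeline: find bright 'rocks' in a synthetic image,
# filter them by size, prioritise by brightness, and return an aim point.
# All thresholds and data are invented for illustration.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.normal(0.2, 0.05, size=(64, 64))
image[20:26, 30:38] += 0.5          # plant two synthetic 'rocks'
image[45:48, 10:13] += 0.4

# 1. Detect candidates: threshold, then label connected bright regions.
labels, n_found = ndimage.label(image > 0.4)

# 2. Filter: discard candidates smaller than a minimum pixel count.
sizes = ndimage.sum(np.ones_like(image), labels, range(1, n_found + 1))
keep = [i + 1 for i, s in enumerate(sizes) if s >= 9]

# 3. Prioritise: rank the surviving candidates by mean brightness.
brightness = ndimage.mean(image, labels, keep)
best = keep[int(np.argmax(brightness))]

# 4. Aim: take the centroid of the winning region as the laser aim point.
aim_row, aim_col = ndimage.center_of_mass(image, labels, best)
print(f"aim at pixel ({aim_row:.1f}, {aim_col:.1f})")
```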
An even more capable version of the system is planned for the Mars 2020 rover to enhance its autonomy and increase its productivity, according to Chien.