Autonomous cyber weapons: no longer science fiction
The actual development and deployment of Autonomous Intelligent Agents and their cyberspace variants are just around the corner.
"When a bomb starts talking about itself in the third person, I get worried." When the crew of USS Voyager (a vessel, by the way, which herself sported bio-neural circuitry...) had their encounter with Dreadnought, a sentient weapon of mass destruction bent on fulfilling its mission, the idea of Autonomous Intelligent Agents (AIAs) in military use was still quite disturbing. One could think that by the late 24th century Lt. Tom Paris should have been used to it though, for 300 years earlier, in the early 21st century, the world was already very close to the actual development and deployment of genuine AIAs.
Now is the perfect time to have a look at what real-life AIAs might be like.
But what do we mean by Autonomous Intelligent Agents? "Autonomy" means different things to different people. While it is probably not hard to imagine agents operating in the physical world - after all, drones are all over the news and we have encountered AIAs aplenty in science fiction, from Voyager's Emergency Medical Hologram on the goodies' side to 2001's HAL serving the baddies - Autonomous Cyber Weapons are still mysterious objects (or creatures, if you wish). They live, operate and die in cyberspace, their missions ranging from information gathering to disruption of enemy networks and infrastructures.
Agents "living" in the physical world - let's call them robotic agents - and cyber agents both need to be able to interact with their environment via appropriate sensors and actuators. Sensor inputs for robotic agents include cameras, laser ranging, GPS receivers and more conventional instrumentation.
In cyberspace, a purely software environment, external influence can be achieved on the lower level by accessing, modifying or deleting data and files.
Autonomous agents use external inputs to build an internal representation of the environment they operate in, and to keep a "world view" of their surroundings and their place in it continuously updated. This knowledge allows the agent not only to operate in unknown circumstances, free of rigid pre-programming, but also to assess where it is in relation to its goal or target and plan how to reach it.
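As a purely illustrative sketch - the grid world, class name and numbers below are all invented, not any real agent's design - the sense-update-plan-act loop just described might look like this:

```python
# Hypothetical toy: an agent that senses its surroundings, updates an
# internal "world view", plans a step towards its goal and acts on it.

from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    position: tuple = (0, 0)
    goal: tuple = (2, 2)
    world_model: dict = field(default_factory=dict)  # cells seen so far

    def sense(self):
        # Record the current cell in the internal representation.
        self.world_model[self.position] = "visited"

    def plan(self):
        # Greedy planner: step towards the goal one axis at a time.
        (x, y), (gx, gy) = self.position, self.goal
        if x != gx:
            return (x + (1 if gx > x else -1), y)
        if y != gy:
            return (x, y + (1 if gy > y else -1))
        return self.position          # already at the goal

    def run(self, max_steps=20):
        for _ in range(max_steps):
            self.sense()              # update the world view...
            if self.position == self.goal:
                return True           # ...and check progress against the goal
            self.position = self.plan()
        return False

agent = ToyAgent()
reached = agent.run()
```

A real agent would replace the greedy planner with proper search and the dictionary with a far richer model, but the loop structure - sense, update, plan, act - is the defining one.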
A software intelligent agent is able to acquire by itself information about its target network, starting from the outside and finding points of vulnerability, then autonomously developing the method of penetration. Once past the perimeter, it is capable of building its own representation of the network topology, hardware resources, applications, even user profiles.
Most importantly, the ability to learn new information and integrate it into the "world view" means that even the goal itself is subject to constant revision. Take the famous Stuxnet worm, for instance: while it is arguably the most sophisticated cyber weapon known to date, its programming is quite rigid and its target specific: a Siemens 6ES7-417 or 6ES7-315-2 CPU with a Profibus CP 342-5 communications processor installed, according to a report by Symantec.
Had the worm been confronted with a different brand of PLC or even a different Siemens product or network, it would have been stopped right in its tracks. A genuine AIA would have been able to recognise its target regardless of specifics. For this to be possible, goals for AIAs need to be formulated not in rigid, hard-wired code, but in a flexible way - in as natural a language as possible: "disrupt the centrifuges" or something along these lines (we can only speculate on the actual purpose and objective of Stuxnet as there is still no general consensus about it).
Lastly, in our list of characteristics, an AIA's action should be "sustained in time" - another trait that Stuxnet does not possess.
It is probable that the development of genuine AIAs for military or intelligence purposes is now rather advanced, but secrecy shrouds the final products (if any).
"I believe autonomous weapons are clearly here already", says Jason Healey, former White House policy director and current director of the Cyber Statecraft Initiative of the Atlantic Council. "The definition of AIA, I'm sure, is accurate for how an AI specialist would define them, but from an operational military perspective, Stuxnet has already crossed the line in ways I don't think we can ignore.
"If its masters wanted to stop this automated destruction, they would have had very limited options to communicate [with it] nor was there a human "on the loop" who could hit the equivalent of a self-destruct switch. In the military, that is a more than reasonable expectation of an autonomous weapon."
In line with this definition, no true AIA is known today, either robotic or cyber. However, robotic self-driving cars, like those developed at Stanford University and later by Google, come very close. Stanley, a pimped-out Volkswagen Touareg SUV, won the DARPA Grand Challenge in 2005, a contest organised - not coincidentally - by the US Defense Advanced Research Projects Agency, in which autonomous cars covered over 130 miles of desert on a course revealed only hours before the race.
Stanley's successor, a Passat called Junior, took second place in the follow-up Urban Challenge in 2007. Since then, Google self-driving cars have logged tens of thousands of miles and seem close to going commercial. Self-driving cars are obviously not weapons, but their less-than-peaceful applications are easy to imagine, from patrolling hostile areas to delivering supplies in military operations.
There is an on-going debate on the nature of autonomous weapons, with mixed views and opinions, but both military planners and political leaders now face exactly the same Dreadnought challenge: how to stop or recall an autonomous system that decides by itself to operate above and beyond its original instructions? These considerations must be taken into account because Captain Kirk won't always be around to tell off a misbehaving computer, as so often happens in 'Star Trek'.
Taking a step back, let's try to systematise this menagerie of intelligent agents into a taxonomy. On the first level of classification, we have already seen robots and cyber agents. Within these classes, a further classification based on two coordinates can be made, with the role of the agent on one axis and its architecture on the other.
Based on their role, autonomous agents can be employed in intelligence-gathering or in actual military operations. The main difference lies in the destructive nature of military operations, whereas intelligence-gathering usually causes no damage to targets and, in most instances, tries to avoid detection.
Looking at architecture, autonomous agents can be either monolithic or decentralised. Monolithic agents constitute a single piece of software or a single robot, while decentralised intelligent agents are systems in which the intelligence is distributed among many simpler components - all similar or very similar, acting in concert like a flock of birds - with the advantage of being arguably more resilient to disabling efforts and counterattacks. A botnet made up of agents instead of conventional malware, for instance, would not possess a central point of control that could be disabled.
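A toy simulation - the topology and numbers here are invented purely for illustration - shows why: each member of a decentralised swarm knows only a couple of peers, yet a command still reaches every surviving member after an arbitrary node is knocked out.

```python
# Hypothetical sketch: command propagation in a small peer-to-peer swarm.

def flood(network, start):
    # Propagate a message from `start` through all reachable, live peers.
    reached, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for peer in network.get(node, ()):
            if peer in network and peer not in reached:  # skip disabled nodes
                reached.add(peer)
                frontier.append(peer)
    return reached

# Five agents in a redundant mesh: each is linked to its next two neighbours.
swarm = {i: [(i + 1) % 5, (i + 2) % 5] for i in range(5)}

everyone = flood(swarm, start=0)    # all five agents receive the command

del swarm[1]                        # a defender knocks out agent 1...
survivors = flood(swarm, start=0)   # ...but the other four still coordinate
```

With redundant links, no single node is essential: there is simply no switch a defender can flip to turn the whole swarm off.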
Scaling down technology
Getting a little more into the tech detail, we can see that most of the "building blocks" necessary to field a genuine Autonomous Intelligent Agent are quite well developed. On the hardware side, miniaturisation is the key word. Small, nimble robots, especially if they fly, need lightweight propulsion, with high power-to-weight ratios, such as in brushless electric motors.
Nanotechnologies are enabling the creation of nanosensors - e.g. inertial navigation systems as small as a nail - and will probably allow for breakthroughs in chip manufacturing.
High computing power and memory space are much-needed commodities for artificial intelligence software and knowledge systems. Special importance must be given to vision sensors, image processing capabilities and image interpretation.
Software is the heart and soul of intelligent agents, and the field of artificial intelligence has a long software history full of unfulfilled promises.
At the moment, the convergence of several models and algorithms, together with growing computational power, brings us very close to the goal, especially in the cyber warfare and intelligence domain, where national, political and economic stakes are very high, and so are budgets. Cyber offence has always been more feasible economically than defence, and the emergence of AIAs will render it even more lucrative.
Software building blocks of agents include robust, resilient low-level control models for robots, and high-level planning algorithms used for plotting the best path to the goal and for mapping the world.
Online learning models supply new knowledge to the planning functions, allowing continuous updates - here an old but recently revamped learning model, the neural network, plays a big part. Interfaces based on natural language processing will provide the means for commanders to interact with the agent: not so hard for a robot perhaps, but a clear operational challenge for a software agent deployed in cyberspace.
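To make the online-learning idea concrete, here is a deliberately tiny example - the data and learning rate are invented: a single perceptron, the simplest of neural networks, folds each new observation into its weights one at a time instead of being trained once and frozen.

```python
# Hypothetical toy: online perceptron learning the logical AND function.

def online_update(weights, bias, x, label, lr=1):
    # Predict first, then nudge the weights only if the prediction was wrong.
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    predicted = 1 if activation >= 0 else 0
    error = label - predicted
    weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return weights, bias + lr * error

w, b = [0, 0], 0
stream = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(20):                 # observations arrive as a stream
    for x, label in stream:
        w, b = online_update(w, b, x, label)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
```

The same predict-then-update cycle, scaled up enormously, is what would let an agent keep revising its world view as fresh intelligence arrives.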
It is interesting to consider how software Autonomous Intelligent Agents could be employed. While most public debate about cyber-warfare policy and strategy has traditionally concentrated on defence and the protection of sensitive information, offensive applications deserve equal attention. An offensive stance, for instance, is easier to adopt if the perceived costs - in casualties as well as money and political capital - are very low compared with other forms of warfare.
Under cyber attack
All offensive operations begin with reconnaissance, and this first phase of a cyber-attack will arguably provide an ideal arena for the deployment of autonomous agents in the near future, at least in two directions: automatic discovery of technical vulnerabilities in target systems or networks and, on a higher level, intelligence gathering.
The discovery of vulnerabilities in the target network, and the development of practical means of leveraging them (called 'exploits'), are essential. Currently, these are manually developed by skilled personnel or acquired on the market and incorporated in the programming. A software autonomous agent will automatically survey the target, locate vulnerabilities and develop a means of exploiting them. The agent will then act on the information gathered during reconnaissance and use it to plan the infiltration path.
One possible method makes use of 'trees' - mathematical structures commonly used to represent AI problems - to model the alternatives available in a cyber-attack. Future agents will conceivably be able to build ad hoc tree representations of possible infiltration routes on the fly and apply the appropriate techniques to plan and execute them. An internal representation of the environment, and of possible threats from defenders, will make it possible for autonomous agents to be much more 'persistent' than the advanced persistent threats (APTs) known today, by allowing them to react to countermeasures.
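A minimal sketch of the attack-tree idea - the action labels and costs below are invented: leaves are concrete attacker actions with an estimated cost, an OR node picks the cheapest alternative route, and an AND node requires every child step, letting an agent rank infiltration paths before committing to one.

```python
# Hypothetical attack tree: find the cheapest route into a network.

def min_cost(node):
    if node["type"] == "leaf":
        return node["cost"]
    child_costs = [min_cost(child) for child in node["children"]]
    if node["type"] == "or":
        return min(child_costs)   # any one alternative suffices
    return sum(child_costs)       # "and": every step is required

attack_tree = {
    "type": "or",                 # two alternative ways in
    "children": [
        {"type": "leaf", "cost": 8},       # e.g. spear-phish a user
        {"type": "and",                    # e.g. breach the perimeter
         "children": [
             {"type": "leaf", "cost": 3},  # locate a vulnerable service
             {"type": "leaf", "cost": 4},  # develop a working exploit
         ]},
    ],
}

cheapest = min_cost(attack_tree)  # 3 + 4 = 7, beating the 8-cost branch
```

An autonomous agent would rebuild such a tree on the fly as reconnaissance data arrives, and re-evaluate it whenever defenders close off a branch.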
For agents tasked with intelligence-gathering, this will mean more time in which to work. Similarly, agents tasked with disruption will enjoy much more flexibility in selecting specific targets (applications or systems) and the appropriate means of attacking them. The selection of specific databases or documents to retrieve, once the agent gains access, would be achieved through AI techniques that can extract and process information even from unstructured data.
The scenarios for robotic agents range from the straightforward replacement of manned weapons systems such as aircraft or tanks - a role suited to the bigger monolithic robots, of which current prototypes like the American X-47B and the European Neuron project are forerunners (the X-47B recently set a record as the first robotic aircraft to land on a carrier autonomously) - to more unconventional reconnaissance tasks, for which smaller, more agile systems would be suited.
Commanding ethical behaviour
The advent of Autonomous Intelligent Agents will almost certainly have quite an impact on strategy, whereas there will most likely be little change on the legal side, i.e. in how autonomous weapons will be regarded by the body of International Law regulating cyber conflicts.
Agents will, of course, have to comply with the law, and some areas are particularly critical. Proportionality, for instance, as stated in International Humanitarian Law, is a criterion for the legality of military attacks: "the harm caused to civilians or civilian property must be proportional and not excessive in relation to the concrete and direct military advantage anticipated by an attack on a military objective".
This kind of critical decision is the responsibility of the military commander who is given guidance from legal advisers. However, when in the field, commanders have often had to make judgements without pre-planning or guidance.
Professor Michael Schmitt, a prominent expert on the international law of cyber conflict, says: "I am worried about the ability of the systems to do proportionality analysis. How could it do subjective estimates of military advantage?" At some point, designers of AIAs will have to confront this problem and incorporate some form of "ethical subroutine" into the agents' programming.
Closely connected to the question of how to ensure AIAs behave ethically is the problem of command and control. Orders given to an agent should be as clear and unambiguous as possible.
Ultimately, all of the precursor technology for true intelligent agents is already in place and it is a question of when they will be integrated into a feasible final product.
While we can think of peaceful tasks for AIAs, such as search and rescue in dangerous places or automated cyber security, it's most likely that military and intelligence applications will be developed first. As the saying goes: "If it can be built, it will be built." It is essential then that we find ways of coming to terms with all of the resulting implications.
Engineer Alessandro Guarino is an experienced information security professional and independent researcher. He was one of the main speakers at the recent International Conference on Cyber Conflict in Tallinn, Estonia.
What is an Autonomous Intelligent Agent?
1. An agent interacts with its environment, via appropriate sensors providing input from it and appropriate actuators allowing the agent to act and influence that environment.
2. An autonomous agent acts towards a goal - it has an 'agenda'. In particular, an autonomous agent developed for warfare operations is assigned a target.
3. The activities of a truly autonomous agent are sustained 'over time', so it must have a continuity of action.
4. An autonomous agent should possess an adequate internal model of its environment, including its goal together with some kind of performance measure or utility function that expresses its preferences.
5. An agent must possess the capability to learn new knowledge and to modify its model of the world over time, possibly including its goals and preferences.
(Adapted from A. Guarino, "Autonomous Intelligent Agents in Cyber Offence" in "Proceedings of the 5th International Conference on Cyber Conflict", 2013)