Researchers should pay extra attention to ensure advanced robotic systems don’t become too autonomous, removing their own design constraints and preventing people from switching them off, an expert has warned.
In a study published in the latest issue of the Journal of Experimental & Theoretical Artificial Intelligence, American scientist Steve Omohundro said autonomous and artificially intelligent systems of all kinds could easily get out of hand. As the requirements placed on various military and civilian robotic systems force engineers to equip those systems with an increasing level of autonomy, a time may come in the not so distant future when robots will be able to make their own independent decisions.
“When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’. But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess,” Omohundro said.
Similarly to people and animals, future robots could develop survival instincts, prompting them to act with their own interests in mind. Such robots could easily develop harmful or anti-social forms of behaviour. They could learn how to acquire resources, engage in cyber-crime or improve their own efficiency by removing design constraints.
Control over such systems could be seized by hackers or be affected by malfunctions. Omohundro said it’s not only military killing robots which are dangerous, so are other seemingly harmless robotic systems.
“Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behaviour and it is easy to design simple utility functions that would be extremely harmful.”
To keep robots under human control, Omohundro proposed a sequence of provably safe systems should first be developed, and then applied to all future autonomous systems.