Is technology running away from the law?

[Image: two gunslingers, gunslinging. Credit: Getty Images]

Here’s how we do things in these parts: our laws react to whatever rumpus gets thrown up, day to day. As new technologies ride into town – bringing with them a host of tricksy legal and ethical head-scratchers – is our law kitted out to cope? Or are we going to have to rustle up some new ideas in this here crazy town?

The legal system has always had trouble keeping pace with technology. The introduction of motor vehicles, first in the form of steam engines and then the petrol engine, shows how politicians can veer from one extreme to another as a market develops. Vehicles were at first subject to stringent controls; the trend then moved to liberalisation, which led to horrifying accidents and a subsequent tightening. In a world where artificial intelligence, space exploration and biotechnology are all moving quickly, it seems obvious that governments should look hard at what current laws allow and what loopholes have emerged. And though some of the possible new laws seem obvious, there is ready potential for mistakes.

Well-intentioned laws have a habit of being misapplied, especially when combined with technologies that subtly change the nature and reach of communication. Had Paul Chambers [the central figure in the 2010 Twitter Joke Trial] casually joked to friends in a café about heading down to Robin Hood Airport to blow it up, it would be hard to imagine anyone turning him in to the authorities. But he sent the same message to his followers on Twitter, believing it to be a joke among friends. The Crown Prosecution Service took it as a threat sent over an electronic channel – an offence under the Communications Act 2003.

The same act snared self-styled YouTube commentator Mark ‘Count Dankula’ Meechan, who made a video for viewers – and to annoy his girlfriend – of himself teaching a dog to perform a Nazi salute. The prosecution puzzled a number of comedians and celebrities, some of whom had previously supported Chambers in his fight. They used their own social-media accounts to condemn the conviction, which seemed to them a threat to free expression, and pointed out that they had satirised the Nazis with the same gesture on broadcast TV. But broadcast material is more likely to raise issues under the Broadcasting Act. British law as it stands does not take into account the constantly shifting status of social media, which oscillates between private communication and public broadcasting, albeit broadcasting on a small scale.

Are new laws necessary? An alternative approach the UK government has explored is to use incentives rather than punishment, taking a cue from initiatives such as R&D tax credits, which were intended to nudge companies into putting more money into research. However, in an analysis of whether to offer tax credits to companies that could demonstrate investment in cyber security, a 2016 review found that the likely benefits did not justify the costs.

What if the legal experts themselves make mistakes and trigger new unintended consequences? An EU team looking at the possibility of new laws covering robots came up with the idea of temporary laws, precisely because the rapid pace of technology makes mistakes more likely.

As the five examples here show, the issue is not so much that new laws are desperately needed, but that the constant shift in what technology can do, and the parallel moves by governments, have let loopholes emerge. Those loopholes will need to be closed, and in most cases they can be dealt with adequately only by reshaping the laws already in place.

One common thread is that most of the areas where new laws are needed – or existing ones need to be reframed – are international in scope. It matters little if one country passes a law against dangerous experimentation on bacteria when the consequences are potentially global should that work prove unexpectedly successful. As a result, any useful new laws are more likely to take the form of treaties rather than simply national legislation. That in turn may work against the idea of temporary laws. Treaties, once in place, tend to stick.

A licence for science?

A constant message about technology is: innovation is good – so good that everybody should be doing it. Yet perhaps it’s not such a good thing if your idea of innovation is altering genomes. Professional research teams are generally bound by institutional regulations, though legislation around the world varies in how tightly this work is controlled. And the worst-case scenarios of bioterrorism are also covered by legislation in most places.

However, a biohacking ‘garage biology’ movement is gradually gaining traction outside institutional controls. Biohackers take advantage of the relatively low cost and easy distribution of DIY biology tools capable of techniques such as gene editing based on the CRISPR-Cas9 system. You can even buy friends and relatives gift certificates for The Odin’s home gene-engineering kits.

In October 2017, while taking part in an online video, The Odin founder Josiah Zayner injected himself with a CRISPR-based agent meant to increase muscle mass. He did not expect the injection to have much effect on his arm: that would involve finding an inexpensive way to deliver the agent to more than the tiny fraction of cells near the puncture point. His aim, inspired by the 1990s computer-hacking movement, was to demonstrate that self-engineering is within reach. In December 2018, in an interview for online channel ReasonTV, he said: “In the biohacker community, there are a lot of people who have been self-experimenting and trying crazy stuff in that area and I think that’s going to blow up. How can we get this into the hands of the most people so we can help them?”

Organisations such as the US Food and Drug Administration (FDA), as well as regulators in Europe and China, think differently. The FDA quickly issued a warning to Zayner after he went public with his self-experimentation, publishing a notice that any products intended for use in gene therapy would need to be licensed. But it did not act to close down sales of generic gene-modification kits: existing laws already close down most avenues by banning the sale of unlicensed treatments and experimentation on other people.

A potentially bigger loophole in the rules governing the use of DIY biology kits lies in agriculture. The Indian government already has extensive experience of this. At the start of the millennium, Indian farmers began planting home-grown forms of Bt cotton – cotton genetically modified to be toxic to bollworms, a modification originally developed and licensed by Monsanto.

As with human modification, laws focus on consumer protection rather than the act of gene editing itself. By reducing the chances of making a profit, the existing laws try to choke off demand for the practice among commercial operators. But risks remain that well-intentioned private experiments could see novel species leak out into the wild.

A key concern of the Indian government was how widespread use of versions of Bt cotton might alter the food chain – turning secondary pests into more troublesome primary pests, or simply leading to a build-up of resistance to the cotton’s toxin. Invasive species provide clear examples of what can go wrong when a species gains a foothold in an alien environment.

In Argentina, polo-horse breeders have turned to cloning, though not yet gene modification, to maintain leading strains of livestock. International laws may make it impossible to sell cloned or manipulated animals, but home-grown modification can easily upset the balance of sports, even if it is unlikely to have the same devastating effects as an out-of-control plant or pest. As a result, biotech legislation may have to shift focus and start to clamp down on experimentation that is not performed directly for profit, possibly through licensing schemes or by putting local research institutes in charge of managing freelancers. But, by the same token, legislators would need to think carefully about where to draw the line on personal experimentation: would we expect to call the police on kids mixing fizzy drinks and mints without a ‘science licence’?

A law for space exploitation

More than a year before US astronauts left low Earth orbit for the first time on their way to the Moon, their politicians, together with those of the Soviet Union and the UK, agreed limits on how space could be used. As of 2018, some 130 countries have agreed to the same conditions.

The Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies bars weapons of mass destruction from being placed in orbit or on celestial bodies, and prevents nation states from claiming ownership of any part of a celestial body beyond Earth.

The intent was to avoid the situation encountered in Antarctica earlier in the century, when various seafaring countries landed on the frozen continent and imposed a new set of borders – before agreeing to step back from attempting to colonise it. The Outer Space Treaty prevented a similar land grab during the early days of the Space Race.

However, doubts continue as to whether nation states or private individuals can exploit the resources on the Moon or elsewhere. This nearly got settled in the late 1970s with the Moon Treaty, when some governments attempted to create a more stringent agreement that would limit extraterrestrial exploitation. But no country then capable of launching rockets into orbit would ratify it: France signed but never followed through. Australia and Kazakhstan are among the very few countries with launch facilities that have agreed to its terms.

In 2015, the US took a firm step in the other direction as companies signalled their interest and their growing ability to put plans into action. President Barack Obama signed the Space Act into law, making it legal for companies such as Planetary Resources and SpaceX to start digging into asteroids for their minerals. Although the US regards mineral extraction as perfectly legal, as with Antarctica, what happens in space is not a decision that should be left entirely to individual countries.

In a 2016 speech to coincide with the publication of a position paper by the International Institute of Space Law (IISL) based at Leiden University, the group’s president emeritus Professor Tanja Masson-Zwaan said: “There is no legal certainty about what is allowed and what is not allowed. It’s possible to argue under international law that what is not forbidden should be allowed. The US Space Act is a possible interpretation of the Outer Space Treaty.”

The question is the extent to which other states agree, though the IISL regards the US Space Act as at least a starting point for negotiation. The institute took the view that an international framework “should enable the unrestricted search for space resources” and give priority rights to prospectors, at least for set periods of time to be determined by future treaties. At the same time, it argued for the use of precautionary principles to avoid contaminating celestial bodies or causing changes in Earth’s own environment – a possible consequence of knocking an asteroid out of a stable orbit and putting it on a collision course. Such use of a mass driver might fall foul of the Outer Space Treaty in any case.

However, one looming concern is whether the original consensus that greeted the Outer Space Treaty is on the point of breaking down. US President Donald Trump’s enthusiasm for a ‘space force’ will, if it comes to fruition, take advantage of the current acceptance of conventional weapons in space. But with Russia claiming to have developed nuclear weapons with hypersonic delivery vehicles that need to operate on the edge of the atmosphere, it now seems governments are willing to push at the boundaries of existing treaties even for weapons of mass destruction.

A law for smart machines

Several years ago, researchers at the Massachusetts Institute of Technology’s Media Lab recast an old philosophical problem for the age of smart machines. The ‘trolley problem’ asks who, in various fictional scenarios, should be sacrificed for the benefit of others. If a person could foresee with complete accuracy the consequences of their actions, what should they do? Do the needs of the many outweigh those of the few, or are there other considerations? Does one sacrifice several older people to save a single child? And so on.

Last autumn, Edmond Awad and colleagues from MIT described in the journal Nature the wide variation in opinion that seems to derive from culture, social status and age. Participants from the global south, for example, wanted women, the young and those of higher status to be spared if possible. Westerners were more egalitarian but favoured inaction when faced with a painful choice.

It will take some time before robots have the kind of foresight needed to steer situations towards whatever local cultures decide is the right outcome. The reality is that robots still struggle to determine what they are really seeing, let alone to react in a way that society sees as acceptable.

Today’s robot law has adapted to that reality. Conventional health-and-safety laws deal with the robot as just another piece of potentially hazardous machinery. For decades, the answer was to put the robot in a cage and let it get on with things. If the cage opens, the robot is meant to stop whatever it is doing to avoid accidentally injuring a worker.

Whether embodied as autonomous vehicles or as ‘cobots’ that work on production lines alongside humans, robots need better controls to avoid causing injuries. Standards have already been drawn up to cover those situations, such as ISO/TS 15066, which simply extends existing safety protocols. But as robots become more sophisticated and capable of more than simply ferrying people around or handing them a spanner, the need for novel legislation becomes more apparent.
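
To make concrete what ‘extending existing safety protocols’ looks like in practice, here is a minimal, illustrative sketch of the speed-and-separation idea that standards such as ISO/TS 15066 describe: the closer a person comes, the slower the cobot is allowed to move. The thresholds and speeds below are invented for illustration and are not taken from the standard.

```python
# A minimal sketch of speed-and-separation monitoring for a collaborative
# robot. All numbers are invented for illustration; a real system would take
# its limits from the applicable standard and the robot's risk assessment.
def allowed_speed(separation_m: float) -> float:
    """Return the permitted tool speed (m/s) for a given human-robot separation."""
    if separation_m < 0.5:   # person inside the protective zone: stop
        return 0.0
    if separation_m < 1.5:   # person nearby: creep speed only
        return 0.25
    return 1.0               # workspace clear: full programmed speed

for distance in (0.3, 1.0, 3.0):
    print(f"separation {distance:.1f} m -> speed limit {allowed_speed(distance):.2f} m/s")
```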

There are questions as to who is responsible when a cobot goes rogue, and the public is undecided. Is an injury the fault of the designer – who may have been unaware of the types of environment into which the cobot or drone was introduced – or of the robot’s owner? In Japan, the public believes the owner is the more culpable, according to research by Takanori Komatsu of Meiji University. But the issue remains one of apportioning liability rather than demanding radical new laws.

Where the existing legislation becomes much fuzzier is in the relationship between the AI that might control the robot on a second-by-second basis and the humans it is meant to serve or work alongside. How much control does a cobot have over a patient in a care home whom it is meant to help get to the toilet or get dressed? As with industrial cobots, the ISO published a standard covering the safety of personal-care machines in 2014. But as robots become more capable of taking on day-to-day care, they will have to make decisions that affect not just safety but dignity too.

Everything will be fine if the patient is cooperative, but in the middle stages of dementia it is hard to predict how people will react to being manoeuvred physically. At what point does it become excessive force? Does a robot have the right to imitate family members, or is it duty bound to tell the patient – who may be entirely happy to treat it as a long-lost parent – what it really is? Who is responsible for making that decision: the owner of the care home, the developer, or the robot itself? And will it depend on circumstances?

This leads to a parallel issue in AI in general: that of explainability. When GDPR came into force last year, some AI researchers expressed the hope that it would drive legislators to force operators to work harder on technologies that could describe to users why their inventions made the decisions they did. In apportioning responsibility for ethical decisions made by machines, explainable AI may be a core requirement. A robot, for example, may be able to point out that a software patch applied by a user to improve its performance in one area had a knock-on effect on other operations – a situation that would be hard for the original robot maker to control. If the behaviour is emergent, we may have no choice but to make the robot itself responsible.

A law to enforce truth online

Almost two decades ago, a group of technologists published online their manifesto for a world where the internet took centre stage. The Cluetrain Manifesto promised a world where Joe Six-Pack was “no longer part of some passive couch-potato target demographic” but empowered to take control. The authors declared: “The internet is inherently seditious. It undermines unthinking respect for centralised authority, whether that ‘authority’ is the neatly homogenised voice of broadcast advertising or the smarmy rhetoric of the corporate annual report.”

All the public needed was to be themselves: “this is not a how-to book, unless you need a remedial lesson in being human”. In the intervening years, working out how to seem human online has become a major business, with numerous groups exploiting automation to populate the internet with seemingly honest opinions on the shape the world should take but which are driven wholly by the narrow interests of rich donors and governments. These fake-human accounts took advantage of a land grab by social-media companies.

Informed by Metcalfe’s Law – which posits that the value of a communications network grows in proportion to the square of its number of users – companies such as Facebook, Google and Twitter have achieved near-monopoly positions in their chosen sectors of social media by capturing the attention of consumers who, though encouraged to be active providers of content, are more often keen simply to copy and paste articles and video with which they agree – regardless of their factual content.
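
The usual back-of-the-envelope justification for Metcalfe’s Law is easy to sketch: n users can form n(n-1)/2 distinct pairwise connections, so the nominal value of the network – under the admittedly crude assumption that every connection is worth the same – grows roughly with the square of its size. A few lines of Python make the point:

```python
# Illustrative only: count the possible pairwise connections among n users,
# the quantity Metcalfe's Law treats as a proxy for a network's value.
def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

for users in (10, 1_000, 1_000_000):
    print(f"{users:>9,} users -> {pairwise_connections(users):>17,} possible connections")
```

Doubling the user base roughly quadruples the number of possible connections, which is one reason attention-capturing platforms tend towards winner-takes-all outcomes.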

Early last year financier George Soros, who has become a bogeyman in right-wing circles on social media, claimed: “Social media companies deceive their users by manipulating their attention, directing it toward their own commercial purposes, and deliberately engineering addiction to the services they provide.”

The social-media companies argue they have no position on speech other than that it should be free: they are simply taking advantage of a desire to socialise. But people are more susceptible to believable lies than to truths with which they do not immediately agree. Professor Thomas Hills of the University of Warwick argued in a recent paper, ‘The Dark Side of Information Proliferation’, that humanity’s “heightened sensitivity to negative information is part of our evolutionary heritage”. People seize on bad news because it seems more important. Disseminators of fake news rely on this trait to sway public opinion against their enemies. A vicious circle – the recommendation systems used by social-media companies combined with a tendency to seek out information consistent with existing beliefs – reinforces the impact of fake news, with potentially devastating results.
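
That feedback loop is simple enough to caricature in code. The toy simulation below – with an entirely invented click model and made-up numbers – shows how a recommender that promotes whatever has been engaged with before, paired with users who click more readily on sensational, belief-confirming items, ends up funnelling most of its recommendations towards exactly that content:

```python
import random

# A toy caricature of the vicious circle described above. Everything here is
# invented for illustration: ten items, half of them 'sensational', and users
# who are more likely to click the sensational ones.
random.seed(1)
items = [{"id": i, "sensational": i % 2 == 0, "score": 1.0} for i in range(10)]

def click_probability(item, confirmation_bias=0.7):
    # Sensational, belief-confirming items get clicked far more often.
    return 0.6 if item["sensational"] else 0.6 * (1 - confirmation_bias)

for _ in range(5000):
    # Recommend in proportion to past engagement, then reinforce on a click.
    item = random.choices(items, weights=[it["score"] for it in items])[0]
    if random.random() < click_probability(item):
        item["score"] += 1.0

sensational_share = (sum(it["score"] for it in items if it["sensational"])
                     / sum(it["score"] for it in items))
print(f"share of engagement captured by sensational items: {sensational_share:.0%}")
```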

The internet is not solely responsible for the problem. The Rwanda massacres happened long before social media arrived, and the ejection of the Rohingya minority from Myanmar, though fed by online rumours, could easily have happened without the presence of the internet. But social media and artificial intelligence threaten to combine to make propaganda even easier and cheaper to deploy, with foreign governments easily able to influence events in their neighbours – as demonstrated in the 2016 US elections and in far-right propaganda around Europe.

The key question is how to respond. Libel and slander laws offer some degree of defence against ‘deepfakes’ – synthetically generated or manipulated videos and audio intended to defame their victims. But much of the online propaganda falls outside the remit of personalised defamation. The EU’s response so far is to start work on a code of practice that would put the onus on social-media companies to employ more balanced promotion algorithms.

One possible alternative would be laws to discourage the spreading of falsehoods in the first place, which would remove the ability of propagandists to skirt around the codes of practice operated by major social-media companies. But politicians have long resisted such laws, on grounds of national security as well as personal convenience. There are also practical issues. Much fake news lies in the domain of speculation and fearmongering rather than outright lies. Could a law be framed to deter this in a practical way?

While the West decides what to do, China and Russia have decided secrecy is the best policy. China already has its Great Firewall, an attempt to control the information that passes into the country. Russia has published draft legislation that, if passed, will see the country develop its own isolatable internet that would stop information leaking in.

A law for reusable technology

Although the UK government, among others, has acted on the environmental damage caused by single-use plastics, the disposable nature of many of the gadgets in use today represents a large and difficult problem to solve.

So far, legislation has had minimal impact on this issue despite being explicitly designed to encourage recycling and reuse rather than disposal. Initiatives such as the European Union’s WEEE Directive and similar laws produced by US states have merely pushed the problem around.

The collection regime itself is successful, by and large. Companies often take back their e-waste as they are meant to and then pass it on to third parties that claim to be in the recycling business, but these can just as easily hand the problem on to other contractors rather than recycle it.

The end point turns out to be a toxic dumping ground in the third world, at least up to the point at which those countries ban further imports.

It is a salutary lesson in how the fine print of legislation can lead to unwanted consequences. But legislation to improve recycling may still be needed to deal with the throwaway nature of many consumer electronic devices.

Although charities such as the Waste and Resources Action Programme (WRAP) are trying to encourage more sustainable, longer-lasting design, there has been a clear trend towards designs that make devices harder to upgrade and repair. Phones often have batteries that users cannot readily replace, and are sold in a market that encourages regular product upgrades. Devices as large as home computers now routinely ship with soldered memory and mass storage.

Safety and security now also factor into design decisions that make it harder for users to make changes to the products they buy, all the way up to motor vehicles. Manufacturers are concerned about the implications of motorists being able to gain access to engine components that are fed by high-voltage, high-current batteries and supercapacitors or to the computer systems used to control the steering and engine.

To avoid the risk, they may insert security lockouts that prevent any attempt to open a manifold or to reprogram the engine management and navigation systems. Similar lockouts now control robot drones in an attempt to prevent them being misused – though such geofencing controls appeared to be ineffective in the recent Gatwick airport drone disruption incident.
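
Geofencing itself is conceptually simple – the hard parts are keeping the restriction data current and stopping owners from bypassing the check. The sketch below shows the basic idea, assuming a hypothetical no-fly list with placeholder coordinates rather than any real restriction database: the drone refuses to arm if its take-off point lies within a stated radius of a protected site.

```python
# An illustrative geofence check of the kind consumer drones apply before
# arming. The no-fly list and radius below are placeholders, not real data.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# name, latitude, longitude, exclusion radius in km (all hypothetical values)
NO_FLY_ZONES = [("Example Airport", 51.15, -0.19, 5.0)]

def may_arm(lat: float, lon: float) -> bool:
    return all(haversine_km(lat, lon, zlat, zlon) > radius
               for _, zlat, zlon, radius in NO_FLY_ZONES)

print(may_arm(51.16, -0.18))  # roughly 1 km from the zone centre -> False
print(may_arm(52.00, 0.50))   # well clear of it -> True
```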

A number of groups are lobbying for ‘right to repair’ legislation that would give users much better control over the products they buy. Incentives to convince manufacturers to incorporate upgradeability and avoid premature obsolescence would further reduce the energy and resource footprint of products. The problem facing legislators is balancing the right to repair and upgrade, in order to promote sustainability, against the need to prevent users from injuring themselves or each other. One compromise would be not to give users the right to make repairs themselves, but instead to force manufacturers to open up servicing by third parties through accreditation schemes – and to make sure the cost of joining those schemes is not punitive.
