AI and ethics

Is AI good or bad – and who decides?


Deployment of AI is pushing ahead faster than the ethical framework needed to control it.

One of the most frequently cited technology historians, Professor Melvin Kranzberg, was a major proponent of the law of unintended consequences. That much becomes obvious in his original 1986 paper, at the point where he expands on how he arrived at the first of his own Laws of Technology.

“I mean that technology’s interaction with the social ecology is such that technical developments frequently have environmental, social and human consequences that go far beyond the immediate purpose of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

Going further, Kranzberg observed that many technology-related problems arise when “apparently benign” technologies are introduced at scale.

Kranzberg died in 1995 and, for him, the example of this phenomenon in his own time was DDT – in one context, a pesticide with dangerous side effects; in another, an important weapon to curb the spread of malaria. It is easy to see why his thinking is being revisited today to address and explain concerns about social media and, increasingly, the potential impact of ubiquitous or near-ubiquitous forms of artificial intelligence (AI).

For AI in particular, Kranzberg’s First Law is thought to provide some of the foundations for the creation of an ethical infrastructure around which regulation, and – where that is not practical – a set of benchmark customs and practices, can be built. It crisply highlights the dilemmas to be resolved at the heart of what even activists see as a potentially positive and transformative technology.

The ‘at scale’ deployment of a number of AI-based technologies has already raised concerns that are fundamentally ethical.

Infamously, Twitter’s dark side was able to turn Microsoft’s machine-learning chatbot Tay into a vehicle for racist sludge within a day of its launch in 2016.

Three years ago, YouTube attempted to bring AI to bear on that running sore of social media, content moderation, only for its system to delete and restrict content posted by legitimate news organisations as well as fake news and mendacious propaganda.

More recently, there have been issues concerning evidence of racial bias within the computer vision algorithms used by facial recognition systems, including some that have reached advanced trials and even deployment by law enforcement. Flaws have ranged from wrongful identification to a simple failure to recognise and identify individuals with darker skin tones. They are documented thoroughly in the Netflix documentary ‘Coded Bias’, which focuses largely on the work of MIT Media Lab scientist Joy Buolamwini, founder of the Algorithmic Justice League. More notably, they have gained global attention as one of the issues raised by the Black Lives Matter movement after the murder of George Floyd.

However, tensions around AI and ethics, and the risks in releasing technology that is not sufficiently mature or appropriately tested (and thus vulnerable to Kranzberg’s diagnosis), truly became news in their own right last December, when Timnit Gebru, technical co-lead of Google’s Ethical AI team, resigned. Her departure was followed in February 2021 by the dismissal of the other co-lead, Margaret Mitchell. Both women are highly respected in the world of AI and ethics.

Gebru says she resigned over Google’s initial insistence that she remove her name and those of all its other employees from a technical paper, ‘On the Dangers of Stochastic Parrots’, and its subsequent refusal to say why and by whom objections had been raised.

Mitchell, who along with Gebru was a major advocate for diversity within Google and had also spoken out about restrictions on publication, was, Google says, dismissed for violations of the company’s security policies and code of conduct.

The paper raises flags over the massive-scale language models (from BERT through to, most recently, Switch-C) now proving increasingly popular in natural language processing, and sets out four main areas of concern.

These include the possibility that the models may be so big as to be unmanageable, still allowing elements of racial, gender and other types of bias to slip through unnoticed; that the energy and environmental costs of training models of that scale may outweigh their benefits; and that alternative strategies are not receiving enough attention. Those wishing to dig further into the paper and its context can watch the full presentation by co-author Professor Emily M. Bender on The Alan Turing Institute's YouTube channel.

The scandal – and outside Google, few in the AI and ethics community regard it as anything else, with Gebru and Mitchell commanding close to total support – has foregrounded a whole set of issues around the topic.

The paper itself was described by one senior academic as “a series of not-unexpected or especially controversial conclusions based on the direction research has been taking”, and even Google appears to have been willing to see its publication provided those names were removed.

With what drove the decisions that sparked the departures still unclear (or at least open to question), several beliefs have taken hold in the AI community: that the incident points to dangerous tensions between Google’s product team leaders and its AI researchers (heightening the risk of Kranzberg’s ‘at scale’ concerns); that it again highlights Silicon Valley’s toxic ‘bro culture’ and its accompanying disregard for ethnic and gender diversity; and that, for all its massive financial and technical resources, Google simply could not handle having technologies in which it has invested heavily, and the environmental footprint of its huge server farms, subjected to scrutiny and internal criticism. It could well be ‘all of the above’.

On one level, the ‘Stochastic Parrots’ affair shows how issues within the AI and ethics realm do extend beyond the dilemma stated in Kranzberg’s law. Specifically, how can a company be expected to implement technologies ethically if its own conduct and culture appear to fall short of what are now considered – in the West, at least – reasonable ethical norms? Google, it should be said, is hardly the only technology giant of which this question has been asked recently.

It also illustrates a potentially valuable meeting point between ethics – a subject far too broad to cover here in full – and the regulations and customs that governments and an increasing part of civil society are looking for.

For example, you can read the paper as raising the need to stress-test datasets, both in and of themselves and against other possible formats. That starts to describe a path towards the promotion of best practice (always also a useful form of industry self-regulation) as well as the formal creation of benchmarks that should ensure an AI released for use satisfies regulatory due diligence. Certainly, many AI researchers note that the datasets now being used often demand far more examination than may initially be assumed and, more importantly, far more than they receive.
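What might such a stress test look like in practice? The sketch below is a minimal, hypothetical illustration in Python: it assumes a small tabular dataset with a protected-attribute column (‘group’) and a target column (‘label’), and flags under-represented groups and large gaps in positive-label rates. The column names and thresholds are illustrative assumptions, not standards drawn from the paper or from any regulator.

```python
# Minimal sketch of a dataset 'stress test' prior to model training.
# Assumptions: a tabular dataset with a protected-attribute column ('group')
# and a target column ('label'); column names and thresholds are illustrative.
from collections import Counter


def audit_dataset(rows, group_key="group", label_key="label",
                  min_group_share=0.05, max_positive_gap=0.10):
    """Flag under-represented groups and large gaps in positive-label rates."""
    total = len(rows)
    group_counts = Counter(r[group_key] for r in rows)
    positives = Counter(r[group_key] for r in rows if r[label_key] == 1)

    findings = []
    positive_rates = {}
    for group, count in group_counts.items():
        share = count / total
        positive_rates[group] = positives[group] / count
        if share < min_group_share:
            findings.append(f"group '{group}' is only {share:.1%} of the data")

    gap = max(positive_rates.values()) - min(positive_rates.values())
    if gap > max_positive_gap:
        findings.append(f"positive-label rate varies by {gap:.1%} across groups")
    return findings


if __name__ == "__main__":
    # Toy example: an imbalanced sample that should trigger both checks.
    sample = ([{"group": "A", "label": 1}] * 60 +
              [{"group": "A", "label": 0}] * 37 +
              [{"group": "B", "label": 0}] * 3)
    for finding in audit_dataset(sample):
        print("WARNING:", finding)
```

Auditing the web-scale corpora the paper discusses is, of course, far harder: protected attributes are rarely labelled and the data may be too large to inspect exhaustively, which is part of the authors’ point.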

The wider world of AI ethics research is already seen as having influenced regulation, with the EU’s Artificial Intelligence Act one of the latest examples. It sets out four levels of risk: ‘unacceptable’, ‘high’, ‘limited’ and ‘minimal’. In the ‘unacceptable’ category sit, for Europeans anyway, such uses of AI as social scoring, manipulation and surveillance (excluding law enforcement). Here, ethics has obviously been a strong influence, but the ‘high’ category too demands caution over biometric IDs, education and training, access to employment and more.

Is this really as much as ethics can or should be inserting into the process of creating a new AI-centred world?

Ultimately, this is all happening at a very high level and also tends to impose a utilitarian construct on a discipline that is still evolving and – in its need to explore as well as confirm social norms – necessarily amorphous. Responding to a recent Pew Research Center analysis of ethical AI, Barry Chudakov, principal at Sertain Research and one of today’s leading commentators on the intersection between technology and consciousness, said: “Our ethical book is half-written. While we would not suggest our existing ethical frameworks have no value, there are pages and chapters missing.”

That same Pew project found that, in a survey which admittedly appears to have drawn more on the great and the good than on a scientific sample, only 32 per cent of respondents believed that “ethical principles focused primarily on the public good will be employed in most AI systems by 2030”; 68 per cent did not.

That was not simply down to more philosophical observations such as Chudakov’s. In a written response, Mike Godwin, of Godwin’s Law fame and former general counsel of the Wikimedia Foundation, made some worryingly familiar observations. “The most likely outcome, even in the face of increasing public discussions and convenings regarding ethical AI, will be that governments and public policy will be slow to adapt. The costs of AI-powered technologies will continue to decline, making deployment prior to privacy guarantees and other ethical safeguards more likely,” he wrote.

“The most likely scenario is that some kind of public abuse of AI technologies will come to light, and this will trigger reactive limitations on the use of AI, which will either be blunt-instrument categorical restrictions on its use or (more likely) a patchwork of particular ethical limitations addressed to particular use cases, with unrestricted use occurring outside the margins of these limitations.”

The irony here is that, as much as Kranzberg’s First Law is a useful device for explaining why ethical evaluations of AI should be carried out now and acted upon while the technology is still in relative infancy – and there are signs that this is happening – it is a ‘law’ precisely because it expresses something that has repeatedly happened throughout the history of technology.

Godwin’s observation and the experiences of Gebru and Mitchell suggest it will prove a hard law to break – even though the effort might well be worth it. Then again, maybe we should also recall Kranzberg’s Fourth Law: “Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.”
