Money & Markets: Without AI regulation, we’re at the mercy of the chronic algo-holics
Engineers have to abide by stringent regulations when they build bridges, cars or skyscrapers. Yet when programmers write algorithms, they can do what they want – with sometimes serious consequences. Isn’t it time they were regulated too?
To be a functioning member of a modern society is to be highly regulated. There are hundreds of thousands of pages outlining what is okay for us to do and what actions will get us fined or thrown into gaol. Markets are driven by a ‘winner takes all’ logic, and companies are lured by this logic too. Regulators set boundaries on acceptable behaviour in order to protect the common good. Of course they do.
Yet, this is not specifically the case for the autonomous actions of computers.
While governments legislate freely over where we can park, the husbandry of chickens we can eat, and the time when the sun rises and sets, they seem curiously straitjacketed over the consequences of modern technology.
Financial markets are relieved about that. They love the ‘hot new thing’ and don’t want it to be ruined by rules. Artificial intelligence (AI) is one such darling, but it is taking a bad turn and needs rules, fast.
Here is a harmless example to illustrate a broader problem:
I have a YouTube channel. It’s all part of my personal R&D. YouTube, which is owned by Google, is known for its algorithms, which select search results and promote related content for you. Google’s environment can create a situation where the content you implicitly want is exactly the content you get: algorithmic curation feeds your own choices back to you, building a self-designed, Google-constructed content prison. Inside, all your prejudices and world views are reinforced forever.
Google is said to be worried about this. YouTube and Google create a personalised information bubble, a walled garden for your mental model, trapping you in an algorithmically generated simulation of reality created by your prejudices.
So, according to Google’s algorithms, my YouTube videos are suitable for women over 45 and men over 35; those are the only people the algorithm allows to discover my content. It is, in effect, censorship of everyone outside that group, who are shadow-banned by the algorithm.
I understand why. The algo says women under 45 aren’t really into my content, so we should show them other stuff. Sounds fair, right? Wrong. That is illegal in both the UK and US. It’s sexist and ageist.
Put it like this. “Don’t offer that young woman an opportunity to apply for an engineering course; most women don’t like engineering much, so don’t show her that stuff at all, show her nutritionist courses; women like domestic science more.”
That is effectively the structure this algorithm is applying, and one would not seek to express those prejudices or apply that discrimination in the non-computing world.
It is an algorithm’s basic function to learn to discriminate. But in such a feedback situation, unbounded machine learning will reinforce the very biases that societies are fighting against.
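The feedback mechanism is easy to demonstrate. The toy simulation below is not YouTube’s actual system; all the figures in it are invented for illustration. Two groups of users start with only a mildly different interest in some content, but a recommender that allocates exposure according to last round’s clicks amplifies that small gap until one group barely sees the content at all.

```python
import random

# Toy model of a recommendation feedback loop. This is illustrative only,
# not a claim about how YouTube's real system works; all numbers are invented.
random.seed(0)

true_interest = {"group_a": 0.55, "group_b": 0.45}  # genuine preferences differ only slightly
exposure = {"group_a": 0.5, "group_b": 0.5}          # initial share of impressions is equal

for step in range(20):
    clicks = {}
    for g in exposure:
        # Impressions are proportional to current exposure; clicks follow true interest
        impressions = int(1000 * exposure[g])
        clicks[g] = sum(random.random() < true_interest[g] for _ in range(impressions))
    total = sum(clicks.values()) or 1
    # The 'algorithm' reallocates exposure toward whichever group clicked more
    exposure = {g: clicks[g] / total for g in exposure}

print(exposure)  # the small initial gap has widened into near-total exclusion of group_b
```

Nothing in the loop is malicious; the exclusion emerges purely from optimising on the feedback signal, which is exactly the unbounded reinforcement described above.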
However, the problems are much broader. For example, what about the lauded AI software in the self-driving cars we’re all going to be ferried around in within a generation? If there are three people in a car driven by AI, destined either to die in a head-on collision or to survive by taking out a nearby motorcyclist, what does the software do? Is anyone checking and justifying what it will do under these kinds of circumstances? What if the AI thinks there is a 25 per cent chance of there being a motorcycle in the lane? Does it calculate the outcome and decide who might live or die? What is the accountability for the result?
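To make concrete what “calculating the outcome” could mean, here is a back-of-envelope sketch of a pure expected-value comparison. Every figure is invented for illustration; no real vehicle manufacturer is claimed to use these numbers or this rule.

```python
# Hypothetical expected-harm comparison for the dilemma above.
# All probabilities are assumptions made up for illustration.
p_motorcycle = 0.25      # estimated chance a motorcycle occupies the escape lane
occupants = 3            # people in the AI-driven car
p_fatal_headon = 0.9     # assumed fatality risk per occupant in the head-on crash
p_fatal_swerve = 0.8     # assumed fatality risk to the motorcyclist if hit

expected_deaths_stay = occupants * p_fatal_headon       # expected deaths if the car holds course
expected_deaths_swerve = p_motorcycle * p_fatal_swerve  # expected deaths if the car swerves

print(expected_deaths_stay, expected_deaths_swerve)
```

On these invented numbers a pure expected-value rule would swerve. The point is not the arithmetic, which is trivial, but that somewhere a programmer chooses the inputs and the rule, and nobody currently audits either.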
Is it okay for Apple’s Siri virtual voice assistant to castigate you for swearing and comment: “I hope you don’t kiss your mother with that mouth”? Is it all right for video-streaming service Netflix to troll its own viewers with the tweet: “To the 53 people who watched A Christmas Prince every day for the past 18 days: Who hurt you?”
Such things might seem like small beer, but pile enough of them on top of each other and you end up with a dystopia worthy of Hollywood.
Operant conditioning is how you brainwash mammals, and more and more algorithms are programmed to do that. Algorithms that break the law and set out to harm people are increasing fast.
If you don’t think this is real, play a mobile game that is implicitly designed to charge you $100 a day and feel yourself being hooked by its operant-conditioning algorithms. Then watch algorithms stalk you around the internet offering similar pleasures.
There is no algo rule-list of dos and don’ts, no Asimov ‘laws of robotics’ to protect us from programs that are designed to prey on us, judge us, categorise us, discriminate against us or hurt us. The markets might prefer it that way but it’s unlikely to go well.
It might not be the end of humanity that women under 45 are shadow-banned from my treasure-hunting videos, but it clearly illustrates a giant toxic undercurrent sweeping us to a world where algorithms are curating what we believe, trust and rely on, in an utterly unregulated, unexamined and barely understood way. Unchecked, it’s a predator’s paradise and a bleak prospect.