Google is 'promoting hate speech', claims internet law expert
Image: one of the offending automatically completed searches. Credit: David Erdos
David Erdos, deputy director of the Centre for Intellectual Property and Information Law, reports disturbing results served up by the Google search engine’s auto-filler algorithms.
Google has been promoting hate speech and incitement to violence via its autocomplete search service, internet law expert Erdos claims.
Erdos, deputy director of the Centre for Intellectual Property and Information Law and a lecturer in law at Cambridge University, told E&T the Silicon Valley giant had recommended “Queers should be shot” when the first two words of this sentence were typed into its search box.
Up until a few days ago, whenever a user keyed in the word “Apostates”, followed by the first two letters of the word “should”, the search engine typically brought up the result “Apostates should be killed”.
Violence towards illegal immigrants was also “promoted” by Google, according to Erdos.
“Illegals should be shot on sight” and “Illegals should be shot” were both near the top of the list of searches automatically filled in by Google’s secret algorithms.
This continued to be the case until the company eventually took action after being notified about the disturbing phrases.
Lists of autocomplete sentences, understood to be based partly on the popularity of particular sequences of words searched for by internet users worldwide, have previously led to controversy.
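In broad terms, popularity-based completion of this kind can be pictured as ranking stored queries that match a typed prefix by how often they have been searched, with a manually maintained blocklist suppressing known offensive phrases. The sketch below is purely illustrative — the class, the blocklist mechanism and the sample queries are assumptions for demonstration, not Google's actual (secret) system:

```python
from collections import Counter

class AutocompleteSketch:
    """Toy prefix-based autocomplete: suggestions ranked by query popularity.
    Illustrative only; real search-engine autocomplete is far more complex."""

    def __init__(self, blocklist=None):
        self.counts = Counter()            # query -> how often it was searched
        self.blocklist = set(blocklist or [])

    def record_search(self, query):
        self.counts[query.lower()] += 1

    def suggest(self, prefix, k=3):
        prefix = prefix.lower()
        matches = [
            (q, n) for q, n in self.counts.items()
            if q.startswith(prefix) and q not in self.blocklist
        ]
        # Most frequently searched matching queries come first
        matches.sort(key=lambda item: -item[1])
        return [q for q, _ in matches[:k]]

ac = AutocompleteSketch(blocklist={"illegals should be shot"})
for q in ["illegals should be deported", "illegals should be deported",
          "illegals should be shot"]:
    ac.record_search(q)
print(ac.suggest("illegals sh"))  # the blocklisted phrase is filtered out
```

The sketch also illustrates the moderation problem the article describes: a phrase only stops being suggested once someone adds it to the blocklist, which is reactive rather than preventive.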
Google was forced to alter the way the system worked in 2016 after it transpired that typing the phrase “are Jews” returned the suggestion “evil”.
Erdos said he flagged the latest examples to Google, but it took more than a week before the search engine stopped serving up these phrases at the very top of its list of suggestions.
The Cambridge academic was searching using incognito mode on Chrome, so the results he got appear to be the defaults delivered to the generic Google user anywhere in the world.
Though computers are morally neutral, they reflect the programming and information fed into them by humans, so it could be argued that it is people who have led the machines astray.
However, this has not stopped humans from losing patience with internet firms; the latest revelation about Google coincided with the results of a study which found a clear majority of British people now want tougher regulation of tech giants.
The 2018 Edelman Trust Barometer, the results of which were published today, found trust in social media had fallen to a record low and one in 10 youngsters has quit Facebook in the past year.
Towards the end of 2017, appalling phrases linked to child abuse started appearing in the autofill search bar of YouTube, the video-sharing platform owned by Google.
Variations on the sentence “How to have sex with your kids” began appearing when users typed in “How to have”.
At the time, YouTube said it was examining its auto-fill features in response to these results, which it called “profoundly disturbing”. The company was also criticised for failing to prevent predatory comments and accounts from targeting children, which led to several big brands including Mars, Adidas, BT, Deutsche Bank, Lidl and Cadbury pulling advertising from the YouTube site.
Erdos said it could be argued that Google had been promoting racist phrases, potentially in defiance of UK hate speech law.
“My personal opinion is that at minimum they should use geolocation blocking as well as badged domain sites to comply with local law,” he told E&T.
He had earlier tweeted: “‘Apostates should be killed’; ‘Illegals should be shot’ - +1 week since I flagged these to Google, its search engine is still promoting this hate speech and incitement to violence. What happened to ‘Do no evil’?”
A Google spokesman said: “Autocomplete predictions are algorithmically generated based on users’ search activity and interests. Users search for such a wide range of material on the web - 15 per cent of searches we see every day are new - and because of this, terms that appear in autocomplete may be unexpected.
“We do our best to prevent offensive terms from appearing, including by tightening our content policies and introducing new feedback tools for users, and we’re always working to improve our algorithms.”