Google, Facebook and other internet video services have quietly started using automation to remove extremist content from their sites, according to two people familiar with the process.
The move is a major step forward for internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States.
YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State videos and other similar material, the sources said.
The technology was originally developed to identify and remove copyright-protected content on video sites.
It looks for ‘hashes’, a type of unique digital fingerprint that internet companies automatically compute for specific videos, allowing all content with matching fingerprints to be removed rapidly.
Such a system would catch attempts to repost content already identified as unacceptable, but would not automatically block videos that have not been seen before.
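The match-and-remove idea described above can be sketched in a few lines. This is only an illustrative sketch, not the companies' actual method: real systems reportedly use fingerprints robust to re-encoding and cropping, whereas the plain cryptographic hash used here for simplicity only catches exact byte-for-byte copies. The `BANNED_HASHES` set and function names are hypothetical.

```python
import hashlib

# Hypothetical blocklist: fingerprints of content already judged unacceptable.
# A real service would use perceptual fingerprints that survive re-encoding;
# SHA-256, used here for brevity, matches only identical files.
BANNED_HASHES = set()

def fingerprint(video_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded video (exact-match SHA-256 here)."""
    return hashlib.sha256(video_bytes).hexdigest()

def flag_for_removal(video_bytes: bytes) -> bool:
    """Return True if the upload matches content already in the blocklist."""
    return fingerprint(video_bytes) in BANNED_HASHES

# Mark one clip as banned, then check uploads against the blocklist.
clip = b"\x00\x01fake-video-payload"
BANNED_HASHES.add(fingerprint(clip))
print(flag_for_removal(clip))          # re-upload of the banned clip: True
print(flag_for_removal(b"new video"))  # previously unseen content: False
```

As the article notes, a scheme like this blocks reposts of known material but does nothing about videos that have never been fingerprinted, which is why human review remains part of the process.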
The companies would not confirm that they are using the method or talk about how it might be employed, but numerous people familiar with the technology said that posted videos could be checked against a database of banned content to identify new postings of questionable content.
The two sources would not discuss how much human work goes into reviewing videos identified as matches or near-matches by the technology. They also would not say how videos in the databases were initially identified as extremist.
Use of the new technology is likely to be refined over time as internet companies continue to discuss the issue internally and with competitors and other interested parties.
Until now, most companies have relied on users to flag content that violates their terms of service, and many still do. Flagged material is then individually reviewed by human editors who delete postings found to be in violation.
In November, the internet hacking group Anonymous began identifying the social media accounts of Islamic State (IS) sympathisers, resulting in many of them being shut down. However, this human approach is time-consuming and unreliable.
In late April, amid pressure from US President Barack Obama and other US and European leaders concerned about online radicalisation, internet companies including YouTube, Twitter, Facebook and CloudFlare held a call to discuss options, including a content-blocking system put forward by the private Counter Extremism Project, according to one person on the call and three who were briefed on what was discussed.
The discussions underscored the central but difficult role some of the world's most influential companies now play in addressing issues such as terrorism, free speech and the lines between government and corporate authority.
None of the companies have, at this point, embraced the anti-extremist group's system and they have typically been wary of outside intervention in how their sites should be policed.
The companies currently using their own automation systems are not discussing the practice publicly, two sources said, in part out of concern that terrorists might learn how to manipulate the systems or that repressive regimes might insist the technology be used to censor opponents.