AI tool detects disinformation websites at registration
Researchers at University College London have led the development of a machine-learning tool that identifies domains registered to promote disinformation, allowing platforms to take immediate action against bad actors.
Governments, regulators, and social media platforms have struggled to manage the proliferation of misinformation and disinformation online, in large part due to the sheer speed with which it can spread to millions of people. Professor Anil Doshi and his colleagues decided to develop an early detection system, which could potentially help nip disinformation in the bud.
The tool identifies disinformation sites from their domain registration records; details in the registration information, such as whether the registering party's identity is kept private, can be used to determine whether a website may have been set up for nefarious purposes.
“Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late,” said Doshi. “These producers are nimble and we need a way to identify them early.
“By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”
Doshi and his colleagues trained a machine-learning algorithm on domain registration data. Based on these training data, the classifier was able to identify 92 per cent of the disinformation domains and 96.2 per cent of the legitimate information domains that were established in relation to the 2016 US presidential election and ceased operations after it.
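The article does not disclose the study's actual features or model, but the general approach it describes, classifying a domain from registration metadata, can be sketched as follows. This is an illustrative toy only: the two features (whether the registrant's identity is hidden, and whether the domain was registered close to the election) are hypothetical stand-ins for the kind of signals mentioned above, and the training data is invented.

```python
import math

def train_logistic(rows, labels, lr=0.5, epochs=2000):
    """Tiny logistic-regression trainer (stochastic gradient descent,
    standard library only) -- a stand-in for whatever classifier the
    researchers actually used."""
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            err = p - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    """Probability that a domain is a disinformation site."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Invented training data. Each row is a domain:
# [registrant_identity_hidden, registered_near_election]
# Label 1 = disinformation domain, 0 = legitimate domain.
X = [[1, 1], [1, 1], [1, 0], [0, 1], [0, 0], [0, 0], [1, 1], [0, 0]]
y = [1,      1,      1,      0,      0,      0,      1,      0]

w, b = train_logistic(X, y)
risk = predict(w, b, [1, 1])  # hidden registrant, registered near election
print(f"estimated disinformation risk: {risk:.2f}")
```

The key point Doshi makes is visible even in this toy: all of the inputs are available at registration time, before the site has published a single article, so a flag can be raised far earlier than content-based detection allows.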
The researchers propose that the tool could be used to help platforms, policymakers, and regulators accelerate processes to improve monitoring of disinformation campaigns, and warn, sanction or shut down bad actors.
“Fake news, which is promoted by social media, is common in elections and it continues to proliferate in spite of the somewhat limited efforts of social media companies and governments to stem the tide and defend against it,” Doshi said. “Our concern is that this is just the start of the journey. We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies, indeed there is evidence of this already happening.
“Social media companies and regulators need to be more engaged in dealing with this very real issue and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”