Invisible Twitter bots may influence dialogue and policy, study suggests

A University of Georgia study into the behaviour of bots on Twitter has found that their hidden influence could reach as far as shaping the news and influencing government policy.

When a story begins trending on Twitter, it could mean that millions of people are getting fired up about an issue. Or it could mean that a swarm of bots is subtly at work.

Bots are simple programmes that carry out specialised, automated tasks, such as responding to keywords with canned messages on forums. Some are clearly labelled as bots, but more often their non-human nature goes undetected.
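The keyword-response behaviour described above can be sketched in a few lines of Python. The keywords and replies here are invented for illustration; a real bot would post its replies through a platform API rather than simply returning them:

```python
# Minimal sketch of a keyword-triggered reply bot (illustrative only;
# a real bot would connect to a platform API to read and post messages).
KEYWORD_REPLIES = {
    "#corruption": "Read the latest on the court ruling.",
    "#protest": "Join the conversation and spread the word.",
}

def auto_reply(post: str) -> list[str]:
    """Return the canned reply for every known keyword found in a post."""
    text = post.lower()
    return [reply for keyword, reply in KEYWORD_REPLIES.items()
            if keyword in text]
```

Even something this simple, run across thousands of accounts, can amplify a hashtag far faster than human users typing individual tweets.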

According to Elena Karahanna, professor of management information systems at the University of Georgia, bots are becoming more widespread and sophisticated and, through their inhuman reach and speed, could be influencing the news without being noticed.

“They spread the word very, very quickly,” she said. “That’s one reason they can become central actors in these networks.”

Gaining traction on Twitter can help attract the attention of journalists and new protestors, and eventually inspire changes in government policy.

“When a topic trends on Twitter, chances are a lot of central or well-connected accounts are tweeting about it and perhaps shaping how others react. We found that some of these central accounts are actually bots,” said Carolina Salge, a PhD student and an author of the study.

“Once enough accounts are tweeting about the same thing, that creates buzz, and organisations really respond to buzz.”

Some of the bots studied were deceptive: Ashley Madison, the extramarital dating site, used alluring “fembots” to message men visiting the site and encourage them to buy memberships. Other bots were put to more constructive purposes, such as those used by Uber employees to raise the profile of their complaints, or bots used to protest against corruption.

“We know from prior research that boycotts and protests that attract mainstream media attention are in a better position to get their demands met,” said Salge. “It appears that a lot of movements are using bots to increase awareness of their cause on social media with hopes to be reported by the mainstream media. And if that is indeed the case, it is definitely one way to put pressure on organisations or governments to do something.”

In Brazil, a 2013 federal court ruling seen as lenient towards corrupt politicians sparked protests. Protestors used bots to retweet hashtags and news stories, helping the issue trend on Twitter.

Bots and “cyborg” accounts – those which post automated and human tweets – occupy an ethical grey area, according to the researchers.

“They may be used to spread fake news, but they may also be used to spread facts,” said Professor Karahanna. “And I think that’s where the ethical line is. If they are spreading the truth, it’s not unethical.”

The spread of fake news and other misinformation on social media has led to the swift introduction of Germany’s Network Enforcement Act, which threatens major fines for social media companies that fail to delete hate speech and fake news. Facebook has claimed that the law is unsuitable and could lead social media companies to censor legal content in order to avoid fines.
