Tiredness could be ‘human signature’ used to detect bots on Twitter

A study has identified short-term behavioural differences between humans and bots – reflecting what is likely to be increasing tiredness towards the end of a social media session – which could be used to detect and take down networks of bots on social media.

Bots – which are controlled by computers, rather than by humans – serve a wide variety of purposes, including news aggregation and customer service. Despite their benefits, bots have come under scrutiny recently in the context of being used manipulatively as part of large-scale, state-backed projects to spread disinformation on social media platforms.

Concerns about the impact of deceptive bot networks spreading disinformation in order to influence democratic events, such as the 2016 US presidential election, have led to calls from lawmakers, academics and campaigners for social media companies to detect and take down these networks. Such efforts typically combine human moderators with machine-learning algorithms trained to detect suspicious behaviour.

Now, a first-of-its-kind study published in Frontiers in Physics has identified some short-term behavioural trends seen in human-run accounts which are absent in bot accounts. This could provide a “human signature” to detect fake accounts, which are constantly adapting to fool detectors.

“Remarkably, bots continuously improve to mimic more and more of the behaviour humans typically exhibit on social media,” said Professor Emilio Ferrara, a University of Southern California computer science expert and co-author of the study. “Every time we identify a characteristic we think is prerogative of human behaviour, such as sentiment of topics of interest, we soon discover that newly developed open-source bots can now capture those aspects.”

Ferrara and his colleagues studied how the behaviour of humans and bots changed over the course of single sessions, using a large Twitter dataset associated with recent political discussion. They monitored factors such as the propensity to engage in social interactions and the volume and type of tweets written, then compared the results between humans and bots.

They found that humans showed an increase in the amount of social interaction over the course of a session (an increase in the fraction of retweets, replies and mentions in a tweet) and a decrease in the amount of content they produce (a decrease in average tweet length). The researchers suggested that this could reflect humans becoming tired towards the end of the session and being less able or willing to produce original content. This behavioural change was not seen in bots.
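The trends described above can be sketched in code. The snippet below is a purely illustrative reconstruction, not the study's actual feature pipeline: it computes, for an ordered session of tweets, how the fraction of social tweets (retweets, replies, mentions) and the average tweet length change between the first and second half of the session. The field names (`text`, `is_social`) and the half-split are assumptions for the sketch.

```python
# Illustrative sketch of the session-level trends described above:
# social-interaction fraction rising and average tweet length falling
# over a session. All names and data are hypothetical.

def session_trends(tweets):
    """tweets: chronologically ordered list of dicts with a 'text'
    string and an 'is_social' flag (retweet/reply/mention).
    Returns (change in social fraction, change in mean tweet length)
    between the first and second half of the session."""
    half = len(tweets) // 2
    first, second = tweets[:half], tweets[half:]

    def social_frac(part):
        return sum(t["is_social"] for t in part) / len(part)

    def mean_len(part):
        return sum(len(t["text"]) for t in part) / len(part)

    return (social_frac(second) - social_frac(first),
            mean_len(second) - mean_len(first))

# A toy human-like session: original content early, short interactions late.
session = [
    {"text": "Long original thoughts on tonight's debate and its fallout", "is_social": False},
    {"text": "Another fairly detailed original post on the same topic", "is_social": False},
    {"text": "RT @news: headline", "is_social": True},
    {"text": "@friend agreed!", "is_social": True},
]
d_social, d_len = session_trends(session)
# For a human-like session we would expect d_social > 0 and d_len < 0.
```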

The researchers used these results to inform a classification system for bot detection. They found that their model significantly outperformed a baseline model in its bot-detection accuracy, indicating that searching for short-term behavioural patterns like this could be valuable in the implementation and improvement of detection systems.
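To make the idea concrete, a detector built on such features might, in its simplest form, flag sessions that lack the human pattern. The rule below is a hypothetical toy threshold, far cruder than the researchers' trained model, shown only to illustrate how the two trend features could feed a classification decision.

```python
# Toy, hypothetical detector using the session-trend features:
# humans tend to interact more and write less as a session wears on,
# so sessions without that pattern are flagged as bot-like.
# This is NOT the study's actual model.

def looks_human(d_social_frac, d_mean_len):
    """Classify a session from two trend features: the change in the
    fraction of social tweets and the change in mean tweet length."""
    return d_social_frac > 0 and d_mean_len < 0

# Hypothetical per-account features: (d_social_frac, d_mean_len).
sessions = {
    "account_a": (0.4, -12.0),   # interaction up, length down
    "account_b": (0.0, 1.5),     # flat, machine-steady behaviour
}
labels = {name: ("human-like" if looks_human(*feats) else "bot-like")
          for name, feats in sessions.items()}
```

In practice these features would be one input among many to a trained classifier rather than a hard rule, but the sketch shows why a trend that bots fail to mimic is useful: it separates accounts on behaviour over time, not content.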

“Bots are constantly evolving: with fast-paced advances in AI, it’s possible to create ever-increasingly realistic bots that can mimic more and more how we talk and interact in online platforms,” said Ferrara. “We are continuously trying to identify dimensions that are particular to the behaviour of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots.”
