Trolling of UK political candidates increases fourfold
Analysis of Twitter activity has found that the amount of abuse directed at political candidates has increased roughly fourfold since the 2017 UK general election.
PoliMonitor, a political technology company, analysed 139,564 tweets which either mentioned or replied to candidates standing in the 2019 election. The tweets were sent on 11 November, approximately one month before polling day on 12 December.
The analysis found that the candidates received abuse or insulting remarks in 16.5 per cent of mentions or replies.
Perhaps unsurprisingly, the highest-profile candidates – the three main UK-wide party leaders – were targeted most frequently. Of the 515 candidates who were abused on this day, the top 150 candidates received 96 per cent of the bullying tweets.
Diane Abbott, who has previously spoken out about online abuse, was targeted with 885 abusive or insulting tweets in a single day. More than 50 MPs have chosen to stand down at this election, with four women, from both sides of the House, citing bullying as their motivation. In recent months, parliamentarians have warned that the use of divisive (including gendered) language to attack political rivals could endanger their lives.
The next cohort of parliamentarians will receive a “wellbeing induction session” upon their arrival at Westminster, which will include a discussion of cyber security and wellbeing.
“The safety and security of MPs and their staff both on the Parliamentary Estate and elsewhere is an absolute priority,” a Parliament spokesperson said. “We work closely with local police forces, who are responsible for the security of MPs and their staff away from the Parliamentary Estate, to ensure MPs are kept safe and are able to perform their duties.”
An Amnesty International report published in December 2018 found that 7.1 per cent of tweets directed at women in politics and journalism contained abusive or other bullying language.
All major social media companies have been criticised for failing to tackle abuse on their platforms. This week, YouTube announced that it would no longer permit creators to post videos which violently threaten others or contain abuse based on protected characteristics.
Before the introduction of the policy, YouTube could remove videos for “explicit threats of violence”; bullying based on appearance; doxxing (revealing a victim’s personal information, such as their home address); or encouraging harassment of an individual.
YouTube's updated policy now allows videos to be removed for “veiled” threats of violence, such as “You’d better look out”; simulated violence towards an individual (such as shallowfakes and deepfakes depicting an individual being beaten); and “malicious insults” based on protected characteristics such as race or sexual orientation.
YouTube confirmed that the new policy would apply to everyone, including politicians.