
TikTok enhances safety policies around young people and online challenges

Image credit: Alexander Shatov | Unsplash

TikTok has unveiled a series of updates to its safety measures designed to better protect young people on the platform, following a major report into how its users interact with potentially harmful content.

Research conducted by the popular social media video app looked at how young people engaged with online challenges and hoaxes, including disturbing and harmful ones which attempt to coax viewers into self-harm or suicide.

TikTok said it will now start removing “alarmist warnings” about potentially harmful online challenges and hoaxes because its research found these warnings can exacerbate the problem by treating the hoax as real.

The study of 10,000 teenagers, parents and teachers from around the world – including the UK – found that while only 0.3 per cent of young people said they had participated in an online challenge they would categorise as very dangerous, nearly half of those asked said they wanted more information and help on how to better understand risk.

It comes as social media platforms continue to face scrutiny over how they police harmful content on their sites and the steps they take to protect their users – particularly younger ones.

As part of its policy updates in response to the research, TikTok said the technology it uses to alert its safety teams to increases in prohibited content linked to hashtags will be extended to also capture “potentially dangerous behaviour” that attempts to hijack or piggyback on an otherwise common hashtag.

TikTok said it would also “improve the language” used in its content warning labels to encourage users to visit its Safety Centre for more information, and add new materials to that space aimed at parents and carers unsure how to discuss the subject with children.

Alexandra Evans, TikTok’s head of safety public policy for Europe, said the aim of the project was to “better understand young people’s engagement with potentially harmful challenges and hoaxes”.

Evans said: “While not unique to any one platform, the effects and concerns are felt by all and we wanted to learn how we might develop even more effective responses as we work to better support teens, parents, and educators,” adding that the company wanted to help “contribute to a wider understanding of this area”.

She said: “For our part, we know the actions we’re taking now are just some of the important work that needs to be done across our industry and we will continue to explore and implement additional measures on behalf of our community.”

TikTok has made a number of updates to its platform over the last year, particularly in areas around safety for younger users. The app has increased the default privacy settings and reduced access to direct messaging features for younger users, as well as expanding its Safety Centre and online resources to offer more information and guidance.

The move comes as MPs and peers continue to scrutinise the draft of the much-delayed Online Safety Bill, which intends to introduce substantial regulation of social media platforms. Possible penalties for failing to protect users from harmful content include large fines, the blocking of sites and holding senior managers criminally liable for rule breaches.

The draft of the Bill was finally published in May this year, two years after the initial white paper. The NSPCC almost immediately called on the government to go further, urging measures to prevent the spread of inappropriate content across multiple platforms and to tackle sexual harassment in schools.

Even before the draft Bill was published, a study by the 5Rights Foundation – which campaigns for regulation and agreements to enable children to stay safe online – made the case that the government’s plans for online safety legislation were being undermined by “exemptions and caveats” that would give tech companies excuses to avoid responsibility.

In September, research carried out by anti-abuse campaign group Hope not Hate suggested that social media companies are not trusted by the public to deal with the problem of online abuse and hateful content and that there was majority support among the public for increased regulation.

Earlier this month, it was announced that the Online Safety Bill will “broadly” equip Ofcom with what it needs to regulate tech firms, although the watchdog’s chief executive, Dame Melanie Dawes, acknowledged that keeping social networks in check will be “really challenging” and suggested some areas of the proposed laws should be much tougher.

