TikTok is facing the threat of legal action over cuts to its UK online safety teams, following the redundancy of hundreds of content moderators earlier this year.
In August, the social media platform announced that more than 400 jobs would be cut, with artificial intelligence replacing some roles and other moderation work being moved overseas. The company is now accused of targeting safety workers just days before they were due to vote on forming a trade union.
Two UK-based moderators have sent a formal legal letter to TikTok, setting out the basis for a potential claim alleging unlawful detriment and automatic unfair dismissal. TikTok has been given one month to respond.
Unlawful detriment occurs when an employer treats a worker unfairly for exercising a protected employment right, such as taking part in union activity, requesting flexible working, or whistleblowing.
Stella Caram, head of legal at the non-profit organisation Foxglove, which is supporting the moderators, said the timeline raised serious concerns.
“In June, TikTok said it was going to hire hundreds more content moderators, then two months later, they fired everyone,” she said.
“What changed? Workers exercised their legal right to try to form a trade union. This is obvious, blatant and unlawful union-busting.”
TikTok has strongly rejected the allegations. A spokesperson said: “We once again strongly reject this baseless claim.
“These changes were part of a wider global reorganisation, as we evolve our global operating model for Trust and Safety with the benefit of technological advancements to continue maximising safety for our users.”
The moderators bringing the case are working alongside the United Tech & Allied Workers union, Foxglove, and law firm Leigh Day.
Concerns over user safety have also been raised publicly. In exclusive interviews last month, three whistleblowers told Sky News that the job cuts could put UK users at risk. That warning was echoed by Julio Miguel Franco, one of the moderators involved in the legal action.
“TikTok needs to tell the truth,” he said.
“When it says AI can do our job of keeping people safe on TikTok, it knows that’s rubbish.”
“Instead, they want to steal our jobs and send them to other countries where they can pay people less and treat them worse. The end result is TikTok becomes less safe for everyone.”
Internal documents seen by Sky News suggest TikTok had planned to retain human moderators in London until at least the end of 2025, citing the growing volume and complexity of content requiring human review.
Earlier this year, TikTok’s head of governance, Ali Law, told MPs that “human moderators … have to use their nuance, skills and training” to tackle harmful content, misinformation and hate speech.
Following correspondence between MPs and TikTok, Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, said she was "deeply" concerned about the impact of the cuts.
“There is a real risk to the lives of TikTok users,” she said.
However, Mr Law later insisted that safety standards would not be compromised.
“We set a high benchmark when it comes to rolling out new moderation technology,” he said.
“We also make sure the changes are introduced on a gradual basis with human oversight so that if there isn’t a level of delivery in line with what we expect, we can address that.”
