TikTok has announced plans to lay off hundreds of content moderation staff in the United Kingdom, shifting these roles to other European offices as the company increasingly relies on artificial intelligence (AI) to manage online safety. The decision comes amid tightening UK regulations on social media platforms and ongoing concerns about the effectiveness of automated moderation methods.
Major Staff Reductions in UK Content Moderation
According to a statement from TikTok, the restructuring will result in significant job losses within its Trust and Safety team in London. The company confirmed that these changes are part of a broader realignment of its global moderation strategy, concentrating operations into fewer locations. "We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally," a TikTok spokesperson told the BBC.
The impacted employees will be able to apply for other internal roles, with priority consideration if they meet the minimum requirements. In addition to the UK layoffs, hundreds of moderation roles in parts of Asia are also affected.
TikTok employs a hybrid model of moderation, combining automated AI systems with human reviewers. The company claims that around 85% of content violating its policies is removed by automated processes. This AI investment, TikTok asserts, helps minimize the exposure of human moderators to distressing material.
Union Criticism and Worker Concerns
The planned layoffs have elicited strong responses from the Communication Workers Union (CWU), which represents TikTok moderation staff in the UK. John Chadfield, CWU National Officer for Tech, criticized the move as "putting corporate greed over the safety of workers and the public."
"Workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives," Chadfield said. He also highlighted the timing of the layoffs, which coincides with a push by workers to have their union formally recognized by TikTok.
"The decision undermines the progress being made toward better worker representation and safeguards," added Chadfield.
Navigating New UK Online Safety Regulations
TikTok's reorganisation and increased reliance on AI come against the backdrop of heightened regulatory scrutiny in the UK. The Online Safety Act, whose child-safety duties came into force in July 2025, imposes stringent requirements on social media companies to monitor and manage harmful content, particularly to safeguard children from inappropriate material.
The legislation allows the UK's communications regulator, Ofcom, to fine platforms up to 10% of their global turnover for non-compliance. These regulatory pressures have incentivized companies like TikTok to invest in technologies that can scale content review processes.
In response to the Act, TikTok rolled out new parental control features last July, enabling parents to restrict specific accounts from interacting with their children and providing greater transparency around teenagers' privacy settings. Nonetheless, the platform has faced criticism for insufficient protective measures.
In March 2025, the UK's Information Commissioner's Office (ICO) initiated a "major investigation" into TikTok over concerns about its handling of children's data and the exposure of minors to harmful content. TikTok has defended its approach, emphasizing that its recommender systems operate under "strict and comprehensive measures that protect the privacy and safety of teens."
The Challenge of Balancing AI and Human Oversight
Industry experts note that while AI offers efficiency gains in content moderation, over-reliance poses risks related to accuracy and nuanced judgment. Dr. Sarah McDonald, a digital ethics researcher at the University of Oxford, explained, "Automated systems can effectively identify clear-cut cases of policy violations at scale, but they often struggle with context-dependent content, cultural sensitivities, and emerging types of harmful behavior."
Human moderators provide essential judgment in these complex scenarios but are increasingly exposed to traumatic content, leading to well-documented mental health challenges. TikTok's investment in AI aims to reduce human exposure to distressing material, a move widely regarded as beneficial for worker wellbeing. However, experts caution that premature reduction in human oversight risks missing subtleties that AI cannot yet detect.
Broader Industry Trends and Future Implications
TikTok's move aligns with a broader trend among social media companies to streamline moderation teams by leveraging automation. Platforms like Meta, YouTube, and Twitter have similarly expanded AI capabilities, citing growing volumes of online content and escalating demands for rapid moderation.
However, this shift raises ongoing debates about digital safety, accountability, and transparency. Advocacy groups stress the necessity of maintaining robust human review systems, especially for content that involves vulnerable populations such as children and minority communities.
From a business perspective, TikTok's consolidation of moderation functions may improve operational efficiency but could also affect public trust, particularly as regulatory bodies intensify oversight and enforcement.
Conclusion: A Critical Juncture for Social Media Safety
As TikTok navigates this period of significant change, the company and regulators face critical decisions regarding the balance between technology and human judgment in safeguarding users. Ensuring effective moderation while protecting worker welfare remains a complex challenge.
The UK government's Online Safety Act underscores the rising expectations placed on digital platforms, and TikTok's evolving approach will likely serve as a bellwether for future industry practices. Stakeholders across the technology, labor, and regulatory sectors will be closely watching how these developments unfold in the months ahead.
For more detailed analysis and continuing coverage of US labor markets, trade policy, the UK government, finance, and markets, stay tuned to PGN Business Insider.