11 Sources
[1]
TikTok Lays Off Hundreds More Content Moderators in AI Push
Social media giants like Meta have been cutting professional content moderators en masse in favor of community-based moderation, while Musk's X now operates a vastly reduced human content moderation team compared to its Twitter days. Now, TikTok plans to lay off hundreds of content moderators in a fresh AI pivot. The move, reported by The Wall Street Journal, will mainly impact members of a 2,500-person team based in the UK, but many employees from South and Southeast Asia will also be affected. However, exact numbers were not disclosed. Over 85% of content removed from the platform for violating its guidelines is already identified and taken down by AI, TikTok told the Journal. This isn't the first time that TikTok has announced major layoffs to its moderation teams in favor of automation. In late 2024, the ByteDance-owned platform cut 500 employees from the team, mainly based in Malaysia. In July, trade union group ver.di said that around 150 employees at TikTok's office in Berlin, Germany were set to be replaced by AI models. The sources didn't reveal why TikTok chose this moment to cut moderators from its UK team. Last month, the Online Safety Act came into force, meaning that online platforms operating in the UK could be fined up to 10% of their global turnover, or up to £18 million -- whichever is larger -- if they fail to protect minors from harmful content. However, The Financial Times noted that the layoffs came just one week before staff in London were due to vote on unionization, a move that, according to sources within the company, management had been resisting. "We are continuing a reorganization that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally to ensure that we maximize effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements," an official TikTok spokesperson told the FT. John Chadfield, a national organizer at the Communication Workers Union, argued in a statement to the FT that TikTok doesn't "want to have human moderators, their goal is to have it all done by AI." "AI makes them sound smart and cutting-edge, but they're actually just going to offshore it," he added.
[2]
TikTok to lay off hundreds of UK moderators despite Online Safety Act
TikTok is poised to lay off hundreds of staff in London working on content moderation and security, just as the UK's Online Safety Act comes into full force requiring international tech companies to prevent the spread of dangerous material or face huge fines. All UK staff in the Chinese-owned group's trust and safety department received an email on Friday morning stating that "we are considering that moderation and quality assurance work would no longer be carried out at our London site", as it looks to automate more of that work using artificial intelligence. ByteDance-owned TikTok said several hundred jobs in its trust and safety team could be affected across the UK as well as south and south-east Asia, as it begins a collective consultation process, part of a global reorganisation of its content moderation efforts. "The proposed changes are intended to concentrate operation expertise in specific locations," according to the email, seen by the Financial Times, which said the company would hold a town-hall meeting with affected staff on Friday morning. The viral video platform also noted that "technological advances, such as the enhancement of large language models, are reshaping our approach". The Communication Workers Union estimates that there are about 300 people working in the company's trust and safety department in London, and the majority will be affected. The move comes just weeks after key parts of the UK's flagship Online Safety Act came into force, which required companies to introduce age checks on users attempting to access potentially harmful content. Companies that fail to comply with the new requirements -- as well as rules stipulating tech companies must remove dangerous and illegal material swiftly -- face penalties of up to £18mn, or 10 per cent of global turnover, whichever is greater. TikTok introduced new "age assurance" controls last month to comply with new requirements to limit the exposure of under-18s to harmful content. Like other social media groups YouTube and Meta, TikTok has said it plans to rely on machine-learning technology to "infer" a user's age based on how they use the site and who they communicate with. These AI-based systems have not yet been endorsed by the regulator Ofcom, which is assessing compliance. The decision to lay off staff comes amid a wider effort by the Chinese tech group to rationalise its European operations. It is particularly focusing on slimming down or shuttering moderation teams in individual markets and centralising those operations in regional hubs, such as Dublin and Lisbon, as part of a global reorganisation. TikTok this month announced it was shutting its trust and safety team in Berlin. TikTok said: "We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally to ensure that we maximise effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements." "They don't want to have human moderators, their goal is to have it all done by AI," said John Chadfield, a national organiser at the CWU, though he noted that the reality for the time being was that the company would relocate the activities to jurisdictions where labour was cheaper. "AI makes them sound smart and cutting-edge, but they're actually just going to offshore it," he said. The cuts come as TikTok's revenues continue to soar across the UK and Europe. 
Its latest accounts, published this week, show that revenues grew 38 per cent year on year in 2024 to $6.3bn, with pre-tax losses falling from $1.4bn in 2023 to $485mn last year. The figures, revealed in a UK regulatory filing, include TikTok's UK and European businesses. TikTok said in the filing: "We remain steadfastly committed to ensuring there are robust mechanisms in place to protect the privacy and safety of our users."
[3]
TikTok to lay off hundreds of UK content moderators
TikTok is planning to lay off hundreds of staff in the UK who moderate the content that appears on the social media platform. According to TikTok, the plan would see work moved to its other offices in Europe as it invests in the use of artificial intelligence (AI) to scale up its moderation. "We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally," a TikTok spokesperson told the BBC. But a spokesperson for the Communication Workers Union (CWU) said the decision was "putting corporate greed over the safety of workers and the public".
[4]
TikTok lays off hundreds of content moderators, replaces them with AI
What just happened? It's a sad case of another day, another company laying off hundreds of people as their jobs become automated by AI. This time, the firm in question is TikTok, which is replacing human moderators with artificial intelligence. In what is certainly a coincidence, the layoffs at the London site come just a week before employees were due to vote on unionization. TikTok's layoffs will mostly impact members of a 2,500-person team based in the UK, though many in South and Southeast Asia will also be affected, writes The Wall Street Journal. The Communication Workers Union (CWU) says around 300 employees at London's trust and safety department are expected to be affected by the cuts. The move is part of a larger global restructuring push that will see operations centralized in regional hubs such as Lisbon and Dublin. TikTok also wants to further integrate technical advances such as "the enhancement of large language models" in its moderation approach. The timing of the layoffs is certainly suspicious. It comes a week before workers at the London site were due to vote on unionization. Sources say company management have been resisting the move, leading to accusations of union-busting from the CWU. John Chadfield, a national organizer at the CWU, told The Financial Times that TikTok doesn't "want to have human moderators, their goal is to have it all done by AI." The ByteDance-owned firm told the FT that over 85% of content taken down from its platform for violating its guidelines is identified and removed by AI. This isn't the first time TikTok has decided that AI can do a better job of moderating platform content than humans. It laid off 500 employees in Malaysia last year, replacing them with AI in what was again framed as a consolidation of certain regional operations. In July this year, ByteDance workers in Berlin took part in a one-day strike to protest around 150 employees being replaced by AI. Those cuts involved phasing out the entire Trust and Safety department responsible for content moderation. Despite so many promises that the technology would merely augment jobs and leave employees to do more creative tasks, AI is replacing humans at a depressingly regular rate right now. The latest layoffs were at Cisco, whose CEO had just claimed that AI wouldn't cost jobs. Elsewhere, Coinbase's CEO revealed he fired engineers who refused to use AI after it was introduced at the crypto firm.
[5]
TikTok Shifts to AI Moderation With Mass Layoffs
Social media giant TikTok made a major symbolic move today by canning hundreds of UK and Asian moderators as it attempts to integrate artificial intelligence into more processes throughout the company. The Chinese tech giant said that workers displaced in the move will have priority in hiring if they meet unspecified criteria. The company did not disclose the exact number of people laid off from its 2,500-person UK workforce, the Wall Street Journal reports. The BBC reports that the move was immediately met with criticism from unions and online safety advocates. "[TikTok is] putting corporate greed over the safety of workers and the public," John Chadfield, the national tech officer for the Communication Workers Union (CWU), told the BBC. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favor of hastily developed, immature AI alternatives," Chadfield told the WSJ. The union also expressed concern to the BBC that the AI used may not be fully ready to handle moderation safely, making it potentially dangerous for vulnerable users. TikTok pushed back on that sentiment in a statement, saying that it has been using "comprehensive" AI to advance a remit focused on the safety of users and human moderators. "[TikTok] is continuing a reorganization that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally," it reads. TikTok has spent several years studying and adopting AI throughout its core businesses, it said, adding that it will use those tools to "maximize effectiveness and speed" when moderating its social media platform. TikTok has already drawn scrutiny in the U.K. for its safety practices and its collection of users' personal information. The UK's Information Commissioner's Office launched a probe in March into how the company uses the data of 13- to 17-year-olds. The company also pointed to new regulation from the United Kingdom in its statement, laws which have increased potential fines for non-compliance with national safety standards to up to 10% of global turnover. TikTok says it now needs more AI to meet the new regulatory bar set by the UK's Online Safety Act, which came into force in July. TikTok says its AI automatically removes about 85% of posts not in compliance. It did not provide evidence to confirm that claim.
[6]
Hundreds of TikTok UK moderator jobs at risk despite new online safety rules
Cuts in trust and safety team part of switch towards artificial intelligence by social media app
TikTok has put hundreds of UK content moderators' jobs at risk, even as tighter rules come into effect to stop the spread of harmful material online. The viral video app said several hundred jobs in its trust and safety team could be affected in the UK, as well as south and south-east Asia, as part of a global reorganisation. Their work will be reallocated to other European offices and third-party providers, with some trust and safety jobs remaining in the UK, the company said. It is part of a wider move at TikTok to rely on artificial intelligence for moderation. More than 85% of the content removed for violating its community guidelines is identified and taken down by automation, according to the platform. The cuts come despite the recent introduction of new UK online safety rules, which require companies to introduce age checks on users attempting to view potentially harmful content. Companies can be fined up to £18m or 10% of global turnover for breaches, whichever is greater. John Chadfield of the Communication Workers Union said replacing workers with AI in content moderation could put the safety of millions of TikTok users at risk. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives," he said. TikTok, which is owned by the Chinese tech group ByteDance, employs more than 2,500 staff in the UK. Over the past year, TikTok has been cutting trust and safety staff across the world, often substituting workers with automated systems. In September, the company fired its entire team of 300 content moderators in the Netherlands. In October, it then announced it would replace about 500 content moderation employees in Malaysia as part of its shift towards AI. Meanwhile, business at TikTok is booming. Accounts filed to Companies House this week, which include its operations in the UK and Europe, showed revenues grew 38% to $6.3bn (£4.7bn) in 2024 compared with the year prior. Its operating loss narrowed from $1.4bn in 2023 to $485m. A TikTok spokesperson said the company was "continuing a reorganisation that we started last year to strengthen our global operating model for trust and safety, which includes concentrating our operations in fewer locations globally to ensure that we maximise effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements".
[7]
TikTok's UK content moderation jobs at risk in AI shift
Social media platform TikTok announced on Friday it will restructure its UK trust and safety operations, putting several hundred jobs at risk as it shifts to AI-assisted content moderation. The move is part of global restructuring plans by TikTok, owned by China-based ByteDance, which also affects moderator jobs in South and Southeast Asia, notably in Malaysia. "We are continuing a reorganization that we started last year... concentrating our operations in fewer locations globally," a TikTok spokesperson told AFP. TikTok added that it plans to reshape content moderation "with the benefit of technological advancements." Content moderators are tasked with keeping content such as hate speech, misinformation and pornography off the platform, which has more than 1.5 billion users worldwide. But, globally, there is a trend of social media companies reducing their use of human fact-checkers and turning to AI instead. Moderation technologies, including AI, take down over 85% of content removed for violating TikTok's guidelines, according to the company. It also said it uses AI to help reduce the amount of distressing content moderators are exposed to. Under the proposed plans, the work of employees affected by layoffs will be relocated to other European offices and some third-party providers. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favor of hastily developed, immature AI alternatives," said Communication Workers Union national officer John Chadfield. He added that the layoffs "put TikTok's millions of British users at risk." TikTok in June announced plans to increase investment in the UK, its biggest community in Europe, with the creation of 500 more jobs. Around half the UK population, more than 30 million people, use TikTok each month. The video-sharing platform has been in the crosshairs of Western governments for years over fears personal data could be used by China for espionage or propaganda purposes. AFP, among more than a dozen other fact-checking organizations, is paid by TikTok in several countries to verify videos that potentially contain false information.
[8]
TikTok's UK content moderation jobs at risk in AI shift
London (AFP) - Social media platform TikTok announced on Friday it will restructure its UK trust and safety operations, putting several hundred jobs at risk as it shifts to AI-assisted content moderation. The move is part of global restructuring plans by TikTok, owned by China-based ByteDance, which also affects moderator jobs in South and Southeast Asia, notably in Malaysia. "We are continuing a reorganisation that we started last year... concentrating our operations in fewer locations globally," a TikTok spokesperson told AFP. TikTok added that it plans to reshape content moderation "with the benefit of technological advancements." Content moderators are tasked with keeping content such as hate speech, misinformation and pornography off the platform, which has more than 1.5 billion users worldwide. But, globally, there is a trend of social media companies reducing their use of human fact-checkers and turning to AI instead. Moderation technologies, including AI, take down over 85 percent of content removed for violating TikTok's guidelines, according to the company. It also said it uses AI to help reduce the amount of distressing content moderators are exposed to. Under the proposed plans, the work of employees affected by layoffs will be relocated to other European offices and some third-party providers. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives," said Communication Workers Union national officer John Chadfield. He added that the layoffs "put TikTok's millions of British users at risk." TikTok in June announced plans to increase investment in the UK, its biggest community in Europe, with the creation of 500 more jobs. Around half the UK population, more than 30 million people, use TikTok each month. The video-sharing platform has been in the crosshairs of Western governments for years over fears personal data could be used by China for espionage or propaganda purposes. AFP, among more than a dozen other fact-checking organisations, is paid by TikTok in several countries to verify videos that potentially contain false information.
[9]
TikTok puts hundreds of UK jobs at risk
TikTok is putting hundreds of jobs at risk in the UK, as it turns to artificial intelligence to assess problematic content. In a statement, the video-sharing app said a global restructuring is taking place that means it is "concentrating our operations in fewer locations". Layoffs are set to affect those working in its trust and safety departments, who work on content moderation. The tech giant currently employs more than 2,500 people in the UK, and is due to open a new office in central London next year.
[10]
TikTok to Lay Off Content Moderators and Adopt AI-Powered Solutions
The social media platform is set to lay off content moderation and security staff in London, south Asia and southeast Asia, the Financial Times (FT) reported Friday (Aug. 22), citing an internal email sent to TikTok's trust and safety department staff. TikTok did not immediately reply to PYMNTS' request for comment. According to the FT report, the company said in the email that the changes in staffing "are intended to concentrate operation expertise in specific locations" and that "technological advances, such as the enhancement of large language models, are reshaping our approach." TikTok announced earlier this month that it is shutting down its trust and safety operation in Berlin, the report said. The report of the latest layoffs came a week before the company's staff in London were set to vote on unionization, according to the report. It also came weeks after the implementation of parts of the United Kingdom's Online Safety Act, which requires tech companies to quickly remove dangerous and illegal content from their platforms, per the report. It was reported in July 2023 that the European Union's governing body conducted a stress test at TikTok's Dublin offices and found that the company was not yet compliant with the moderation protocols in the EU's Digital Services Act, which had not yet been implemented. However, an EU commissioner commended the company for its voluntary agreement to undergo the test and commit resources to ensuring compliance. In March 2024, on-demand ordering and delivery platform DoorDash said it added an AI feature designed to detect and prevent verbal abuse or harassment on its platform. The company said this SafeChat+ feature is meant to protect both customers and delivery drivers. Social media platform X said in January 2024 that it was adding 100 content moderators to police child abuse content. When AI startup OpenAI launched its large language model GPT-4 in August 2023, the company suggested that it could be used to develop AI-assisted content moderation systems that would reduce the need for human intervention.
[11]
TikTok Layoffs in the UK with AI Stepping In: What's Really Happening?
TikTok is Cutting Hundreds of UK Moderator Jobs as AI Takes Over Safety Checks. Big Shift in Trust and Safety Sparks Fresh Debate
TikTok layoffs in the UK have impacted hundreds of content moderators as artificial intelligence takes over the task of performing safety checks. The company is reducing human moderator roles in London and shifting the work to other offices in Europe. Reports suggest around 300 people from the London team will lose their jobs. Staff were told that moderation and quality checks will no longer be handled from the London office. A town hall meeting was held to explain the changes.
TikTok is laying off hundreds of content moderators in the UK and Asia as part of a global reorganization, shifting towards AI-powered moderation. This move comes amid new UK online safety regulations and union concerns.
TikTok, the popular social media platform owned by ByteDance, is implementing a significant change in its content moderation strategy. The company plans to lay off hundreds of content moderators, primarily in the UK and parts of South and Southeast Asia, as it shifts towards AI-powered moderation [1][2].
While exact numbers have not been disclosed, the layoffs will mainly affect members of a 2,500-person team based in the UK [1]. The Communication Workers Union (CWU) estimates that around 300 employees in London's trust and safety department will be impacted [3]. TikTok has stated that over 85% of content removed from its platform for violating its guidelines is already identified and taken down by AI [1].
The timing of these layoffs is notable for several reasons: they come just weeks after key parts of the UK's Online Safety Act came into force [2], and only one week before staff at the London site were due to vote on unionization, a drive that company management had reportedly been resisting [1][4].
TikTok frames this shift as a strategic reorganization to enhance its global operating model for Trust and Safety. The company cites technological advancements, particularly in large language models, as a key factor in reshaping its approach to content moderation [2]. This trend is not unique to TikTok; other social media giants like Meta have also been reducing human content moderation teams in favor of AI and community-based moderation [1].
The layoffs have sparked concerns among unions and online safety advocates: the CWU argues that TikTok's goal is to have moderation done entirely by AI while offshoring the remaining work to cheaper jurisdictions [1][2], and warns that cutting human moderation teams in favour of "hastily developed, immature AI alternatives" could leave vulnerable users at risk [5].
The layoffs come at a critical time in terms of regulatory compliance: the Online Safety Act requires platforms to protect minors from harmful content and to remove dangerous and illegal material swiftly, with fines of up to £18 million or 10% of global turnover, whichever is greater, for breaches [1][2]. TikTok has introduced new "age assurance" controls and plans to rely on machine learning to infer users' ages, an approach the regulator Ofcom has not yet endorsed [2]. The UK's Information Commissioner's Office is also examining how the company uses the data of 13- to 17-year-olds [5].
As TikTok continues to grow, with its UK and European revenues increasing by 38% to $6.3 billion in 2024 [2], the company faces the challenge of balancing rapid expansion, regulatory compliance, and the ethical implications of AI-driven content moderation.
Summarized by Navi