8 Sources
[1]
TikTok to lay off hundreds of UK moderators despite Online Safety Act
TikTok is poised to lay off hundreds of staff in London working on content moderation and security, just as the UK's Online Safety Act comes into full force requiring international tech companies to prevent the spread of dangerous material or face huge fines. All UK staff in the Chinese-owned group's trust and safety department received an email on Friday morning stating that "we are considering that moderation and quality assurance work would no longer be carried out at our London site", as it looks to automate more of that work using artificial intelligence. ByteDance-owned TikTok said several hundred jobs in its trust and safety team could be affected across the UK as well as south and south-east Asia, as it begins a collective consultation process, part of a global reorganisation of its content moderation efforts. "The proposed changes are intended to concentrate operation expertise in specific locations," according to the email, seen by the Financial Times, which said the company would hold a town-hall meeting with affected staff on Friday morning. The viral video platform also noted that "technological advances, such as the enhancement of large language models, are reshaping our approach". The Communication Workers Union estimates that there are about 300 people working in the company's trust and safety department in London, and the majority will be affected. The move comes just weeks after key parts of the UK's flagship Online Safety Act came into force, which required companies to introduce age checks on users attempting to access potentially harmful content. Companies that fail to comply with the new requirements -- as well as rules stipulating tech companies must remove dangerous and illegal material swiftly -- face penalties of up to £18mn, or 10 per cent of global turnover, whichever is greater. TikTok introduced new "age assurance" controls last month to comply with new requirements to limit the exposure of under-18s to harmful content. 
Like other social media groups YouTube and Meta, TikTok has said it plans to rely on machine-learning technology to "infer" a user's age based on how they use the site and who they communicate with. These AI-based systems have not yet been endorsed by the regulator Ofcom, which is assessing compliance. The decision to lay off staff comes amid a wider effort by the Chinese tech group to rationalise its European operations. It is particularly focusing on slimming down or shuttering moderation teams in individual markets and centralising those operations in regional hubs, such as Dublin and Lisbon, as part of a global reorganisation. TikTok this month announced it was shutting its trust and safety team in Berlin. TikTok said: "We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally to ensure that we maximise effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements." "They don't want to have human moderators, their goal is to have it all done by AI," said John Chadfield, a national organiser at the CWU, though he noted that the reality for the time being was that the company would relocate the activities to jurisdictions where labour was cheaper. "AI makes them sound smart and cutting-edge, but they're actually just going to offshore it," he said. The cuts come as TikTok's revenues continue to soar across the UK and Europe. Its latest accounts, published this week, show that revenues grew 38 per cent year on year in 2024 to $6.3bn, with pre-tax losses falling from $1.4bn in 2023 to $485mn last year. The figures, revealed in a UK regulatory filing, include TikTok's UK and European businesses. TikTok said in the filing: "We remain steadfastly committed to ensuring there are robust mechanisms in place to protect the privacy and safety of our users."
[2]
TikTok to lay off hundreds of UK content moderators
TikTok is planning to lay off hundreds of staff in the UK who moderate the content that appears on the social media platform. According to TikTok, the plan would see work moved to its other offices in Europe as it invests in the use of artificial intelligence (AI) to scale up its moderation. "We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally," a TikTok spokesperson told the BBC. But a spokesperson for the Communication Workers Union (CWU) said the decision was "putting corporate greed over the safety of workers and the public".
[3]
TikTok Shifts to AI Moderation With Mass Layoffs
Social media giant TikTok made a major symbolic move today by canning hundreds of UK and Asian moderators as it attempts to integrate artificial intelligence into more processes throughout the company. The Chinese tech giant said that workers displaced in the move will have priority in hiring if they meet unspecified criteria. The company did not disclose the exact number of people laid off from its 2,500 in the UK, the Wall Street Journal reports. The BBC reports that the move was immediately met with criticism from unions and online safety advocates. "[TikTok is] putting corporate greed over the safety of workers and the public," John Chadfield, the national tech officer for the Communication Workers Union (CWU), told the BBC. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favor of hastily developed, immature AI alternatives," Chadfield told the WSJ. The union also expressed concern to the BBC that the AI used may not be fully ready to handle moderation safely, making it potentially dangerous for vulnerable users. TikTok pushed back on that sentiment in a statement, saying that it has been using "comprehensive" AI to advance a remit focused on the safety of users and human moderators. "[TikTok] is continuing a reorganization that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally," it reads. TikTok has spent several years studying and adopting AI throughout its core businesses, it said, adding that it will use those tools to "maximize effectiveness and speed" when moderating its social media platform. TikTok has already drawn scrutiny in the UK over its safety practices and its collection of users' personal information. The UK's Information Commissioner's Office launched a probe in March into how the company uses the data of 13- to 17-year-olds.
The company also pointed to new regulation from the United Kingdom in its statement: laws that have raised potential fines for non-compliance with national safety standards to up to 10% of global turnover. TikTok says it now needs more AI to meet the new regulatory bar set by the UK's Online Safety Act, key parts of which came into force in July. TikTok says its AI automatically removes about 85% of posts that breach its rules. It did not provide evidence to confirm that claim.
[4]
Hundreds of TikTok UK moderator jobs at risk despite new online safety rules
Cuts in trust and safety team part of switch towards artificial intelligence by social media app
TikTok has put hundreds of UK content moderators' jobs at risk, even as tighter rules come into effect to stop the spread of harmful material online. The viral video app said several hundred jobs in its trust and safety team could be affected in the UK, as well as south and south-east Asia, as part of a global reorganisation. Their work will be reallocated to other European offices and third-party providers, with some trust and safety jobs remaining in the UK, the company said. It is part of a wider move at TikTok to rely on artificial intelligence for moderation. More than 85% of the content removed for violating its community guidelines is identified and taken down by automation, according to the platform. The cuts come despite the recent introduction of new UK online safety rules, which require companies to introduce age checks on users attempting to view potentially harmful content. Companies can be fined up to £18m or 10% of global turnover for breaches, whichever is greater. John Chadfield of the Communication Workers Union said replacing workers with AI in content moderation could put the safety of millions of TikTok users at risk. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives," he said. TikTok, which is owned by the Chinese tech group ByteDance, employs more than 2,500 staff in the UK. Over the past year, TikTok has been cutting trust and safety staff across the world, often substituting workers with automated systems. In September, the company fired its entire team of 300 content moderators in the Netherlands. In October, it then announced it would replace about 500 content moderation employees in Malaysia as part of its shift towards AI. Meanwhile, business at TikTok is booming.
Accounts filed to Companies House this week, which include its operations in the UK and Europe, showed revenues grew 38% to $6.3bn (£4.7bn) in 2024 compared with the year prior. Its operating loss narrowed from $1.4bn in 2023 to $485m. A TikTok spokesperson said the company was "continuing a reorganisation that we started last year to strengthen our global operating model for trust and safety, which includes concentrating our operations in fewer locations globally to ensure that we maximise effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements".
[5]
TikTok's UK content moderation jobs at risk in AI shift
Social media platform TikTok announced on Friday it will restructure its UK trust and safety operations, putting several hundred jobs at risk as it shifts to AI-assisted content moderation. The move is part of global restructuring plans by TikTok, owned by China-based ByteDance, which also affects moderator jobs in South and Southeast Asia, notably in Malaysia. "We are continuing a reorganization that we started last year... concentrating our operations in fewer locations globally," a TikTok spokesperson told AFP. TikTok added that it plans to reshape content moderation "with the benefit of technological advancements." Content moderators are tasked with keeping content such as hate speech, misinformation and pornography off the platform, which has more than 1.5 billion users worldwide. But, globally, there is a trend of social media companies reducing their use of human fact-checkers and turning to AI instead. Moderation technologies, including AI, take down over 85% of content removed for violating TikTok's guidelines, according to the company. It also said it uses AI to help reduce the amount of distressing content moderators are exposed to. Under the proposed plans, the work of employees affected by layoffs will be relocated to other European offices and some third-party providers. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favor of hastily developed, immature AI alternatives," said Communication Workers Union national officer John Chadfield. He added that the layoffs "put TikTok's millions of British users at risk." TikTok in June announced plans to increase investment in the UK, its biggest community in Europe, with the creation of 500 more jobs. Around half the UK population, more than 30 million people, use TikTok each month. 
The video-sharing platform has been in the crosshairs of Western governments for years over fears personal data could be used by China for espionage or propaganda purposes. AFP, among more than a dozen other fact-checking organizations, is paid by TikTok in several countries to verify videos that potentially contain false information.
[6]
TikTok's UK content moderation jobs at risk in AI shift
London (AFP) - Social media platform TikTok announced on Friday it will restructure its UK trust and safety operations, putting several hundred jobs at risk as it shifts to AI-assisted content moderation. The move is part of global restructuring plans by TikTok, owned by China-based ByteDance, which also affects moderator jobs in South and Southeast Asia, notably in Malaysia. "We are continuing a reorganisation that we started last year... concentrating our operations in fewer locations globally," a TikTok spokesperson told AFP. TikTok added that it plans to reshape content moderation "with the benefit of technological advancements." Content moderators are tasked with keeping content such as hate speech, misinformation and pornography off the platform, which has more than 1.5 billion users worldwide. But, globally, there is a trend of social media companies reducing their use of human fact-checkers and turning to AI instead. Moderation technologies, including AI, take down over 85 percent of content removed for violating TikTok's guidelines, according to the company. It also said it uses AI to help reduce the amount of distressing content moderators are exposed to. Under the proposed plans, the work of employees affected by layoffs will be relocated to other European offices and some third-party providers. "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives," said Communication Workers Union national officer John Chadfield. He added that the layoffs "put TikTok's millions of British users at risk." TikTok in June announced plans to increase investment in the UK, its biggest community in Europe, with the creation of 500 more jobs. Around half the UK population, more than 30 million people, use TikTok each month. 
The video-sharing platform has been in the crosshairs of Western governments for years over fears personal data could be used by China for espionage or propaganda purposes. AFP, among more than a dozen other fact-checking organisations, is paid by TikTok in several countries to verify videos that potentially contain false information.
[7]
TikTok puts hundreds of UK jobs at risk
TikTok is putting hundreds of jobs at risk in the UK, as it turns to artificial intelligence to assess problematic content. In a statement, the video-sharing app said a global restructuring is taking place that means it is "concentrating our operations in fewer locations". Layoffs are set to affect those working in its trust and safety departments, who work on content moderation. The tech giant currently employs more than 2,500 people in the UK, and is due to open a new office in central London next year.
[8]
TikTok to Lay Off Content Moderators and Adopt AI-Powered Solutions
The social media platform is set to lay off content moderation and security staff in London, south Asia and southeast Asia, the Financial Times (FT) reported Friday (Aug. 22), citing an internal email sent to TikTok's trust and safety department staff. TikTok did not immediately reply to PYMNTS' request for comment. According to the FT report, the company said in the email that the changes in staffing "are intended to concentrate operation expertise in specific locations" and that "technological advances, such as the enhancement of large language models, are reshaping our approach." TikTok announced earlier this month that it is shutting down its trust and safety operation in Berlin, the report said. The report of the latest layoffs came a week before the company's staff in London were set to vote on unionization, according to the report. It also came weeks after the implementation of parts of the United Kingdom's Online Safety Act, which requires tech companies to quickly remove dangerous and illegal content from their platforms, per the report. It was reported in July 2023 that the European Union's governing body conducted a stress test at TikTok's Dublin offices and found that the company was not yet compliant with the moderation protocols in the EU's Digital Services Act, which had not yet been implemented. However, an EU commissioner commended the company for its voluntary agreement to undergo the test and commit resources to ensuring compliance. In March 2024, on-demand ordering and delivery platform DoorDash said it added an AI feature designed to detect and prevent verbal abuse or harassment on its platform. The company said this SafeChat+ feature is meant to protect both customers and delivery drivers. Social media platform X said in January 2024 that it was adding 100 content moderators to police child abuse content.
In August 2023, AI startup OpenAI suggested that its large language model GPT-4, launched earlier that year, could be used to develop AI-assisted content moderation systems that would reduce the need for human intervention.
TikTok announces plans to lay off hundreds of UK content moderators as part of a global reorganization, shifting towards AI-assisted moderation despite new online safety regulations.
TikTok, the popular social media platform owned by ByteDance, has announced a significant restructuring of its content moderation operations, with plans to lay off hundreds of staff in the UK [1][2]. This move is part of a global reorganization aimed at concentrating operations in fewer locations and leveraging artificial intelligence (AI) for content moderation [3].
The restructuring primarily affects TikTok's trust and safety department in London, with an estimated 300 people potentially losing their jobs [1]. The Communication Workers Union (CWU) has expressed concern over the layoffs, stating that it puts "corporate greed over the safety of workers and the public" [2].
TikTok's decision to reduce human moderators in favor of AI-driven solutions marks a significant shift in its approach to content management. The company claims that over 85% of content removed for violating community guidelines is already identified and taken down by automation [4]. This move towards AI is part of TikTok's strategy to "maximize effectiveness and speed" in evolving its trust and safety function [1].
The restructuring comes at a crucial time, coinciding with the implementation of the UK's Online Safety Act. This new legislation requires companies to introduce age checks and swiftly remove dangerous and illegal material, with potential fines of up to £18 million or 10% of global turnover for non-compliance [1][4].
TikTok's reorganization extends beyond the UK, affecting moderation teams in south and south-east Asia as well [1]. The company has been centralizing its moderation efforts in regional hubs such as Dublin and Lisbon [1]. Despite these changes, TikTok's business continues to thrive, with revenues growing 38% year-on-year to $6.3 billion in 2024 [1][4].
Critics, including union representatives and online safety advocates, have raised concerns about the readiness of AI systems to handle content moderation effectively. John Chadfield of the CWU warned about the "real-world costs of cutting human moderation teams in favor of hastily developed, immature AI alternatives" [3][4].
TikTok maintains that the reorganization is part of an ongoing effort to strengthen its global operating model for trust and safety. The company states that it has been studying and adopting AI throughout its core businesses for several years and believes this approach will help meet the new regulatory requirements set by the UK's Online Safety Act [3][5].
As TikTok navigates this transition, the effectiveness of AI in content moderation and the impact on user safety will be closely watched by regulators, users, and industry observers alike.