[1]
Monday Morning Moan - plead the First, count the money! A new morality for an era screaming out for effective safety regulation
In 2025, few would deny that the technology sector has become more politicized than usual, barely a month into the new year. All it took was a tech CEO, Elon Musk, being given a White House position - and a phalanx of his peers watching the President roll back civil rights - to re-contextualise the entire industry.

So, welcome to an 'anything goes' world, where my fear is that technology companies can basically do whatever they please, regardless of any harm - direct or indirect - that this might cause people. And all that is happening at a time when the worst aspects of digital innovation are just beginning to have an impact - deep fakes, social hate speech, disinformation, wholesale job automation, spear-phishing scams, synthetic content usurping the human originals... the list goes on. Soon none of us will be able to trust the evidence of our senses.

In such a high-risk context, what price online safety? And how do we protect children, women, minorities, and vulnerable adults online when nobody in power seems to give a damn? Remember, on day one of his Presidency, Trump used an Executive Order to tear up the very concept of safe AI, while industry CEOs appeared to endorse the rolling back of civil rights.

That's not an overstatement. A former critic of Trump, OpenAI CEO Sam Altman - a gay man, like Apple's Tim Cook (also present at the inauguration) - tweeted his support for the President this week. But legally, a company could now discriminate against him, and against women, ethnic minorities, religious groups, and more.

These broad issues, notably the decision by Meta's Mark Zuckerberg to abandon fact-checking and scrap the company's DEI schemes, provided the grim context for a Westminster eForum conference on online safety last week. Indeed, some speakers said that they had torn up their original presentations as they grappled with 2025's policy switchbacks.

Outside the US, one of policymakers' biggest problems is the gulf between ambition and delivery. At least, that is the challenge facing the UK at present, according to Professor Sonia Livingstone of the Department of Media and Communications at the London School of Economics and Political Science (LSE). Alas, this is nothing new, she explained. Discussing children's online safety, she said:

[For the UK], the legal objective of safety by design just isn't being met. First, [UK communications regulator] Ofcom's approach responds to harm, but doesn't seek to prevent it so much. Not when it comes to live streaming, or the algorithmic amplification of harmful content that is tailored to those with mental health problems, for example.

Second, smaller companies are left relatively unburdened by these regulations [the Online Safety Act]. And as we know, children are often pioneers when it comes to the latest, untested new app. So, what about the latest under-the-radar apps for risk-taking? Are we paying attention to Meow Chat, Omigo, MeetMe, and many more?

Third, actions to prevent strangers searching for, or contacting, children online are only precluded [sic] on known high-risk platforms, a limitation that many parents find deeply worrying. Just last week, I spent a day with Year Nine children, and I was regaled with stories of being added by 'randoms' on Snapchat. So, many of the children's charities have now joined forces to say the UK's online safety plans are too little, too late.
She added:

For two decades, there's been a debate in this country about social media responsibility, privacy and safety measures, age-appropriate provision, the moderation of age-inappropriate content, and the prevention of illegal content and contacts. So, the fact that society is still waiting for decisive action twenty years later is astonishing. And policymakers call academics slow!

Fair comment. And Professor Livingstone's point applies in other areas of policymaking too - wherever government seeks to rein in vendors' power and limit their overreach, in fact. Often, lawmakers worry solely about the behaviour of a handful of US Big Techs. But this ignores the fact that many problems in the digital world are emergent, at the fringes, or at the cutting edge of new apps and services. By the time problematic behaviours have become endemic or embedded in major platforms at scale, it is invariably years too late to tackle them.

For example, look no further than the UK's Digital Markets, Competition and Consumers Act (DMCCA) 2024 - see diginomica passim. A well-meaning tilt at the power of Big Tech, but, like the Online Safety Act, roughly 20 years too late. Defining 'strategic market status', as the DMCCA does, by vendor turnover of over £25 billion is absurd; in the real world, that translates to incumbent market status. True strategic status would mean addressing whichever platform is defining new behaviours. That would include the dozens of new apps that Livingstone says teenagers are using, plus the likes of Spotify, which dominates how every age group consumes music, yet has revenue of less than $4 billion. Or take ChatGPT maker OpenAI, a de facto internet company. Few would deny that it presently dictates the pace of change in generative AI, yet its 2024 revenues of $4 billion would place it outside the scope of the Act - were governments even to consider that it might be an internet company.

In this light, the Act and others like it are the legislative equivalent of hitting a sleeping elephant with a stun dart while a herd of others stampedes towards you. But for legislators to move faster on emergent companies and stay ahead of the curve would be almost impossible. And so, the only alternative would be for them to be more restrictive or prescriptive about what is, and isn't, permissible in a digital market - at a time when regulators have been tasked with enabling growth. Plus, vendors would then claim that their right to innovate is being curtailed.

But when it comes to child safety online, and policymakers' desire to protect vulnerable adults too, the big beasts and market incumbents are still more relevant than they would like to be. And that is despite attempts by Meta, X, and others to absolve themselves of responsibility for harmful behaviours by citing the First Amendment. Professor Livingstone said:

Recent events are overtaking the policy process yet again. Mark Zuckerberg has announced the end of independent fact checkers on Facebook and Instagram, which many believe will leave harmful content unchecked, and even amplified. And the future of content moderation - which is central to Ofcom's codes, but now being cancelled by Meta - is uncertain, to say the least.
Zuckerberg's rationale is the defence of free speech, but that same argument has recently undone the US' Kids Online Safety Act, California's Age-Appropriate Design Code Act [the UK also has an Age-Appropriate Design Code], the revision of COPA [the US' 1998 Child Online Protection Act], and more. Huge efforts went into each of those initiatives to improve child safety online. They may not have been perfect, but they were surely steps in the right direction. But in the UK now, we urgently need a strategy to address the challenge posed by this use, or misuse, of the US First Amendment.

That urgency is only increasing, she explained, with a reported one in 12 children encountering high-risk content online - and the real figure is probably much higher. And that is not all. Increasingly, harmful or illegal content is live-streamed or ephemeral on messaging platforms, then deleted, meaning it is getting harder and harder to track, she said. At the same time, AI deep fakes and synthetic CSAM are creating new forms of illegal material. Indeed, the sheer volume of AI content means that any wave of illegal stuff will fast become a tsunami. ('Hey kid, wanna see something extreme, to sate your darkest fantasy? Just prompt an AI to make it for you!')

This raises further challenges, she explained:

What hard evidence [of harms] is expected? Do we expect platforms to just reveal it, especially if it shows the ineffectiveness of their own safety measures? Meanwhile, children have content fed to them by personalized algorithms closely tailored to their preferences, but also to their vulnerabilities, all with the intention of sustaining their undivided, unending attention. So, media literacy must be complemented by effective regulation. In other words, if we don't effectively regulate the tech companies, the effect of awareness-raising will almost be counterproductive. That's because it burdens vulnerable individuals with all the responsibility of coping with the opaque and often exploitative actions of the world's most powerful companies.

Those are strong words, and ones to which the Professor added an aside that is tragic in its implications, given our hopes for the digital world, for AI, and for the wider online economy:

Increasingly when I interview children, I find them to be cautious, rather than ambitious, for their digital lives, telling me they feel alone and unsupported as they try to understand the inexplicable.

Let that sink in, as Elon Musk might say. But are all social platforms absolving themselves of blame? It depends on who you ask. Ben Bradley is Government Relations and Public Policy Manager for TikTok, the Chinese short-form video giant embroiled (at the time of writing) in a standoff with US authorities - though CEO Shou Zi Chew was a prominent guest at President Trump's inauguration. He insists:

Safety is a top priority for TikTok, and that's from a legal standpoint, an ethical standpoint now with the Online Safety Act, and from a business perspective. As an entertainment platform, it's critical that users can express themselves freely, but they are only able to do that if they feel it is a safe space in which to do so. The work of our trust and safety team is built on a philosophy of building and following the evidence. So, we have teams of childhood safety experts based around the globe, many with academic backgrounds, or backgrounds in NGOs and relevant policy areas, and it's important for them, and to us, that the approach we take is evidence-led and effective.
But of course, beyond a certain point - as Meta has found - scale becomes both a challenge and a massive expense in staffing terms. With a claimed three billion or more active users, Facebook has, implicitly, just given up on policing its platform this year. It now hopes users will police themselves while it counts their money. And as X has sadly demonstrated too, why bother moderating and fact-checking posts when you can cut costs by letting the platform criticise itself?

This, plus moves in DC to roll back Diversity, Equity, and Inclusion (DEI) measures, has gifted social platforms the opportunity to, frankly, just not bother protecting vulnerable users at all. Aka 'Plead the First and count the money'. With a claimed 1.9 billion users worldwide, how much longer will TikTok be able to invest in human safety experts - if, indeed, it escapes a permanent American ban by being picked up by a US billionaire?

That aside, Bradley said that TikTok has learned important lessons from the first wave of social networks and media platforms:

The first is the importance of age-appropriate experiences. The second is the role of user empowerment, and the third is education and transparency. In terms of age-appropriate experiences, one of the key lessons is the importance of embedding those, and graduating users into the online world at a particular pace. For many years, TikTok has embedded safety by design into our approach, ensuring that the youngest users have the highest levels of protection.

So, if you're 13 to 15 on TikTok, there are certain features that are simply unavailable to you. Only your friends can see your videos or comment on them - people who follow you, and who you have followed back, so you've made a mutual connection in that space. We don't recommend your account to other people, plus you don't have access to features like direct messages, you cannot host a live stream, and you don't receive notifications after 9pm. All those are baseline protections for the youngest users. But we acknowledge that if you're 17, then maybe you should have access to more features than if you're 13.

That can be read as a pragmatic approach, though a 17-year-old is still a minor. (A rough sketch of what such age-tiered defaults might look like follows at the end of this piece.) He continued:

In terms of age-appropriate material, we are committed to helping the community find educational and informative content from reliable sources. And we're also looking at things from an educational perspective. So, in April, we launched the TikTok STEM feed in the UK. This is a dedicated feed of science, technology, and engineering content that is verified by third parties. That was turned on by default for every user under 18 in April. We've also got a range of partnerships to ensure the continued provision of expert content. For example, in September, we partnered with the World Health Organization [WHO] at the UN General Assembly...

But there, alas, US politics rears its head once again. Among the flurry of policy announcements from President Trump this week was an Executive Order removing the nation from the WHO. It seems that "ensured provision of expert content" is no longer the American way. Bradley added:

Being transparent about the efforts we're making so that we can be held to account by civil society and by politicians is important to us.

Noble words from Bradley. But what if those politicians no longer care about safety, or about civil society? And what if civil society is tired, disintegrating, and nursing a headache as a result? What then?
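The graduated, age-tiered defaults Bradley describes can be pictured as a simple configuration lookup: the youngest users get the most restrictive settings, which relax as they get older. The Python sketch below is purely illustrative - the field names, the intermediate 16-17 tier, and the adult defaults are assumptions, not TikTok's actual configuration or code; only the 13-15 restrictions and the under-18 STEM feed default are taken from Bradley's remarks above.

# Hypothetical sketch of age-tiered default protections, loosely modelled on the
# restrictions Bradley describes for TikTok's 13-15 age band. Field names and the
# 16-17 and adult tiers are illustrative assumptions only.
from dataclasses import dataclass
from datetime import time
from typing import Optional


@dataclass(frozen=True)
class SafetyDefaults:
    video_visibility: str                      # who can view and comment on the user's videos
    recommend_account: bool                    # may the account be suggested to other users?
    direct_messages: bool                      # are direct messages available?
    live_streaming: bool                       # may the user host a live stream?
    notifications_quiet_after: Optional[time]  # suppress push notifications after this time
    stem_feed_default_on: bool                 # educational STEM feed enabled by default


def defaults_for_age(age: int) -> SafetyDefaults:
    """Return the most restrictive defaults for the youngest users, relaxing
    them gradually with age - the 'graduated' access Bradley describes."""
    if 13 <= age <= 15:
        # Mirrors the baseline protections quoted for 13- to 15-year-olds.
        return SafetyDefaults(
            video_visibility="mutual_friends_only",
            recommend_account=False,
            direct_messages=False,
            live_streaming=False,
            notifications_quiet_after=time(21, 0),  # no notifications after 9pm
            stem_feed_default_on=True,              # STEM feed on by default for under-18s
        )
    if 16 <= age <= 17:
        # Invented intermediate tier: the article only says older teens get more features.
        return SafetyDefaults(
            video_visibility="followers",
            recommend_account=True,
            direct_messages=True,
            live_streaming=False,
            notifications_quiet_after=time(22, 0),
            stem_feed_default_on=True,
        )
    # Adults: no default restrictions in this sketch.
    return SafetyDefaults(
        video_visibility="public",
        recommend_account=True,
        direct_messages=True,
        live_streaming=True,
        notifications_quiet_after=None,
        stem_feed_default_on=False,
    )


if __name__ == "__main__":
    for sample_age in (13, 17, 25):
        print(sample_age, defaults_for_age(sample_age))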
In 2025, the tech sector grapples with political influence, online safety concerns, and inadequate regulations as AI and digital innovations raise new challenges.
In 2025, the technology sector finds itself at a crossroads, grappling with unprecedented political influence and regulatory challenges. The appointment of Elon Musk to a White House position and the rollback of civil rights have dramatically reshaped the industry's landscape. [1]
The tech world has entered an 'anything goes' era, where companies seemingly operate without regard for potential harm. This shift occurs as digital innovations like deep fakes, hate speech, disinformation, and job automation begin to show their most detrimental effects. The situation is further exacerbated by actions such as President Trump's Executive Order dismantling safe AI concepts and tech CEOs appearing to endorse the rollback of civil rights. [1]
A Westminster eForum conference on online safety highlighted the growing gulf between regulatory ambition and effective implementation. Professor Sonia Livingstone of the London School of Economics pointed out significant shortcomings in the UK's approach to online safety, particularly concerning children, arguing that the legal objective of safety by design is not being met. [1]
Professor Livingstone emphasized the frustratingly slow progress in addressing social media responsibility, privacy, and safety measures. Despite two decades of debate, decisive action remains elusive, leaving society vulnerable to evolving digital threats. [1]
The UK's Digital Markets, Competition and Consumers Act (DMCCA) 2024 exemplifies the shortcomings of current regulatory efforts. By focusing on large, established tech companies based on revenue thresholds, the Act fails to address emerging platforms and technologies that are shaping new online behaviors. [1]
OpenAI, despite its significant influence on generative AI, falls outside the scope of regulations like the DMCCA due to its relatively low revenue. This highlights the inadequacy of current regulatory frameworks in addressing rapidly evolving tech landscapes. [1]
The article argues that legislators must adopt a more proactive and flexible approach to regulation. Instead of targeting only established tech giants, regulations should focus on emerging platforms and technologies that are defining new online behaviors. However, the challenge lies in balancing the need for timely intervention with the practical limitations of legislative processes. [1]
As the tech industry continues to evolve at a breakneck pace, the need for effective, adaptable, and forward-looking regulation becomes increasingly critical. The current state of affairs in 2025 serves as a stark reminder of the consequences of regulatory lag in an era of rapid technological advancement.
Summarized by Navi