11 Sources
[1]
X says it will suspend creators from revenue-sharing program for unlabeled AI posts of 'armed conflict' | TechCrunch
X says it's going to take action against creators who post AI videos of armed conflict without disclosure that the content is AI-generated. On Tuesday, X's head of product, Nikita Bier, announced that people who use AI technology to mislead others in this way will be booted from the company's Creator Revenue Sharing Program for a three-month period (90 days). If they continue to post misleading AI content after the suspension lifts, they'll be permanently suspended from the program. "During times of war, it is critical that people have access to authentic information on the ground. With today's AI technologies, it is trivial to create content that can mislead people," Bier wrote on X. "Starting now, users who post AI-generated videos of an armed conflict -- without adding a disclosure that it was made with AI -- will be suspended from Creator Revenue Sharing for 90 days." X says it will identify the misleading posts through a combination of tools that are used to detect generative AI content, as well as through the Community Notes system. X's Creator Revenue Sharing Program offers creators the ability to generate income by posting on the platform, and sharing in advertising revenue if their posts are popular. While designed to boost the amount of engaging content found on X, critics of the program say it incentivizes creators to post sensationalized content, like clickbait or other posts designed to spark outrage. Some have also criticized its lax content controls and its requirement that creators be paying X subscribers to participate. Given how easy it is for AI to be used to make misleading photos and videos, X's ban on financially rewarding creators for this type of content is only a limited fix. Outside of war, AI media is often used to create political misinformation or push deceptive products in the influencer economy -- all of which will still be allowed under the new policy.
[2]
X to require AI labels on armed conflict videos from paid creators, citing 'times of war'
X will suspend creators from its revenue sharing program if they post AI-generated videos depicting armed conflicts without disclosing they were made with AI. Head of product Nikita Bier announced the policy change on March 3, saying first-time violators will be cut off for 90 days and repeat offenders would be permanently removed from the program. The policy is notably narrow, applying only to creators enrolled in the platform's revenue sharing program and only to AI-generated videos of armed conflicts, not AI content in general or non-monetized accounts. Violations will be flagged through Community Notes, X's crowd-sourced fact-checking system, or by detecting metadata from generative AI tools. Bier framed the change as necessary "during times of war," though the unfolding conflict between the United States, Israel and Iran has not been formally, or at least not legally, declared a war. The quality of AI video generation has progressed at a rapid pace, and generated content has become indistinguishable from real footage for most viewers. X already watermarks images and videos generated by its Grok chatbot but has not previously required users to disclose AI-generated content. The platform is separately testing a broader AI labeling toggle that would let users mark any post as containing synthetic content, as reported by Social Media Today, though X has not shared a timeline for that feature.
[3]
Elon Musk's X Finally Tries to Stop the Epidemic of AI-Generated War Footage
It's unclear if other types of non-AI video will also be penalized. The social media platform X has been flooded with fake photos and videos ever since President Donald Trump launched a new war on Iran last week. But X's head of product Nikita Bier announced a new policy Tuesday that he hopes will disincentivize accounts from sharing AI-generated fakes. At least when the motivation for sharing is purely financial. "Starting now, users who post AI-generated videos of an armed conflict -- without adding a disclosure that it was made with AI -- will be suspended from Creator Revenue Sharing for 90 days," Bier wrote Tuesday in a post on X. "Subsequent violations will result in a permanent suspension from the program. This will be flagged to us by any post with a Community Note or if the content contains meta data (or other signals) from generative AI tools," Bier continued. It's not immediately clear whether there will be requirements for how large a disclosure may need to appear and whether it needs to be embedded into the video or can be merely included in the text of a tweet. There are plenty of loopholes that X accounts use for impersonation, like making a username so long that the word "parody" only appears if you click through to view a given account's profile. The potential loopholes here also seem endless. Fake photos and videos have gotten millions of views in recent days, ever since the U.S. and Israel launched a war in Iran that has killed Supreme Leader Ayatollah Ali Khamenei and a large number of other Iranian officials. And until Tuesday it showed no signs of slowing down. One of the fake images included a U.S. fighter pilot who was shot down and supposedly mistaken for an Iranian by a Kuwaiti man with a pipe. The image includes the SynthID watermark from Google, meaning it was created using one of Google's generative AI products.
But the video has several big red flags that indicate it's been generated with AI, according to BBC disinformation tracker Shayan Sardarizadeh. The most glaring might be the cars on the street, which are in bizarre shapes and don't look like real cars. But there's also the audio, which includes someone off-camera saying "Tel Aviv, I can't believe this," in an unnatural way that's just a little too perfect if you're trying to spread fake information about a specific location. Many X users have asked xAI's Grok whether the video is real and it seems to be consistently responding that it is. One user who shared the video even insisted that it must be real because Grok said so. But Grok is an awful fact-checker and can't be relied upon to tell you whether a video is real, just as it shouldn't be used for anything involving World War II history. This is MechaHitler himself, after all. One question that hasn't been answered by X is whether misleading images and videos that aren't necessarily created with AI will be demonetized. Because there are plenty of other ways to mislead people on social media in a time of war. A popular fake video that's gone viral also purported to show the U.S. embassy in Saudi Arabia going up in flames. The embassy was indeed hit by two Iranian drones on Monday, according to the New York Times, but that's not what's depicted in the video. In reality, the video above is about a month old at minimum, having been posted to YouTube on Feb. 6. It has nothing to do with the current war. While the video appears to be real, it's being misrepresented as something that happened recently. Another video was captioned "An Iranian plane VS a US ship. I can watch this all day," racking up over seven million views. It's actually footage from a video game. Is creating a video clip of game footage and presenting it as current events going to qualify X users for demonetization? 
There's no sign that it was created using AI, which is the only thing Bier mentioned in his tweet Tuesday. Still another video that gained significant attention supposedly showed the "CIA headquarters" in Dubai with smoke billowing out of it after being hit by Iran. An account that frequently posts disinformation falsely claimed that authorities in the UAE were arresting anyone who shared the footage. But the footage is actually from 2015. Supposedly respectable people can get caught up in sharing these fake images and videos, as we've seen repeatedly in just the past few days. Fox News host Bret Baier shared the fake embassy video and Texas Gov. Greg Abbott shared the video game footage, though both deleted their tweets. X is ground zero for fake photos and videos whenever news breaks, whether about war or any other topic. Elon Musk helped make the disinformation problem worse after he bought Twitter in late 2022, inviting back conspiracy theorists who had previously been banned on the site and stripping so-called legacy verification checkmarks that helped give users a sense of who they could trust. And then generative AI technology made the problem even worse. Musk also introduced the creator revenue sharing program that created incentives for users to get the most attention, whether something was true or not. And Musk himself often shares things that aren't true or are AI, like video of Sydney Sweeney to promote Grok. Musk hasn't weighed in personally on the new policy to demonetize accounts that share AI content without disclosure. Far-right commentator Matt Walsh thought the new X policy didn't go far enough. "Why not suspend anyone who shares any AI content without disclosing that it’s AI?" Walsh asked in a tweet Tuesday. The answer is likely because Musk envisions a world where everything consumers watch on X is AI-generated. The billionaire has said as much in several discussions, including with podcaster Joe Rogan late last year. 
"Most of what people consume in five or six years, maybe sooner than that, will be just AI-generated content. So music, videos..." Musk said while trailing off. It seems like a good step to demonetize accounts that are sharing AI footage on X that's not appropriately labeled, even if it's a very modest move. But that will only disincentivize users who are sharing because they're trying to rack up views for financial gain. What about accounts that are sharing for different reasons, like trying to influence public opinion or betting markets outside of X's control? Or what if they're sharing just to stir shit? Because in the age of AI, there's basically no hurdle for creating an endless supply of fake content, as Bier seemed to acknowledge on Tuesday. "During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people," Bier tweeted. X didn't respond to questions emailed Tuesday, which isn't a surprise. The company has a history of being hostile to journalists since Musk purchased Twitter. Gizmodo will update this article if we hear back.
[4]
X Will Stop Paying People for Sharing Unlabeled AI-Generated War Footage
Fake war footage is a problem as old as social media. AI has just supercharged it. X said it will temporarily demonetize accounts that share AI-generated war footage without a label. The decision comes days after the US and Israel launched airstrikes in Iran and AI-slop war footage flooded social media timelines across the internet. "Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground. With today's AI technologies, it is trivial to create content that can mislead people," Nikita Bier, X's head of product, said in a post on X. Many of the AI-generated videos currently on X purport to show Iranian ballistic missiles hitting sites in Israel. One video shared thousands of times on X showed missiles slamming into the ground near the Dome of the Rock in Jerusalem while a computer-generated voice said "Oh my god, here they come." X users community noted the video, but the account that shared it has a Bluecheck and is eligible for a financial payout for engagement as part of X's content creator program. Bier said today that X will stop people from making money on unlabeled AI war footage, but won't stop accounts from sharing it. "Starting now, users who post AI-generated videos of an armed conflict -- without adding a disclosure that it was made with AI -- will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program," he added. "This will be flagged to us by any post with a Community Note or if the content contains meta data (or other signals) from generative AI tools. We will continue to refine our policies and product to ensure X can be trusted during these critical moments." Fake war footage shared on social media isn't a new problem.
For several years every new conflict would be met with a flood of fake videos. Old war footage passed off as coming from the current war was popular, but so were recordings of video games run through filters to make them look low-resolution. The same three clips from milsim video game Arma 3 were shared at the outbreak of every new conflict for a decade. The Government of Pakistan even shared Arma 3 footage once in a post that's still live on X. What is new is the proliferation of easy-to-use AI video-generation tools. AI image and video generation has come a long way in the past few years and it's trivially easy to remove the watermark that's supposed to distinguish them from the real thing. X's verification system -- which rewards accounts for engagement -- has also created incentives for Bluecheck accounts to publish fast, verify later (if ever), and rake in the cash. So in the hours and days after the war with Iran began, fake footage of airstrikes and conflict spread on X. The way X is handling the problem gives the game away. According to Bier, the site will rely on the community to police itself, and the punishment is a 90-day suspension not from the site but from the monetization program.
[5]
X to ban users from earning revenue if they post unlabelled AI-generated war videos
Social media feeds have been flooded with fake battle scenes since start of Iran conflict Elon Musk's X will ban users from making money on the platform if they repeatedly post unlabelled AI-generated war videos, after social media feeds were flooded with fake battle scenes from the Iran conflict. The social media platform, which has about half a billion monthly active users, will suspend people from earning revenue from posts for 90 days if they put up AI-generated videos of an armed conflict without adding a disclosure that it was made with AI. A second infraction will lead to a permanent ban, it said on Tuesday night, after the first days of the conflict in Iran were marked by a torrent of bogus online footage. Timelines on X, as well as Instagram and Facebook, which are run by Meta, have carried numerous faked battle scenes, including Iranian rockets pursuing and shooting down a US jet - which was viewed 70m times, according to checks by BBC Verify - and another clip that used AI to replace smoke rising from the site of a real missile strike with a fake fireball several times bigger. Users can make hundreds of dollars a month on X as part of the platform's advertising model if they build substantial followings approaching 100,000 people, which incentivises the production of shocking viral posts. "During times of war, it is critical that people have access to authentic information on the ground," said Nikita Bier, the head of product at X. "With today's AI technologies, it is trivial to create content that can mislead people. Starting now, users who post AI-generated videos of an armed conflict - without adding a disclosure that it was made with AI - will be suspended from creator revenue sharing for 90 days. Subsequent violations will result in a permanent suspension from the program." Other fake videos of the war have achieved huge reach. 
A clip circulating on Instagram purporting to show a huge conflagration after "Iran destroyed the US airbase in Riyadh" was fake and has been identified as 18-month-old footage of the aftermath of an Israeli strike on an oil refinery in Hodeidah in Yemen. Full Fact, the UK fact-checking organisation, said it is "increasingly seeing AI turbocharge the spread of misinformation on social media". "In the last few days we've seen lots of examples of AI images shared across different social media platforms as if they are real, including fake pictures of an aircraft carrier and the Burj Khalifa on fire, and an image supposedly showing the body of Ayatollah Khamenei," said Steve Nowottny, Full Fact's editor. "Even when AI images seem low quality, or still have a visible watermark on them, we often see them shared at scale - and the sheer volume of this fake content and the ease with which it is generated and spreads is a real concern."
[6]
X Warns Against Creator Payouts Over Undisclosed AI War Videos - Decrypt
Researchers and governments have warned that deepfakes could spread propaganda and misinformation online. Elon Musk's social media platform X said it will suspend creators from its revenue-sharing program if they post AI-generated videos depicting armed conflict without clearly disclosing that the footage was created using artificial intelligence. In a post on Tuesday, X's head of product Nikita Bier said the company is revising its Creator Revenue Sharing policies to maintain authenticity on the platform's timeline and "prevent manipulation of the program." "During times of war, it is critical that people have access to authentic information on the ground," Bier wrote. "With today's AI technologies, it is trivial to create content that can mislead people." Creators who violate the rule will lose access to the platform's Creator Revenue Sharing program for 90 days, Bier wrote. Repeat violations will lead to permanent removal from the monetization program. The policy change comes as AI-generated videos claiming to show scenes of escalating violence in the Middle East have spread widely following missile strikes by the U.S., Israel, and Iran last week. On Monday, an AI-generated clip on X showing an airstrike on the Burj Khalifa in Dubai was viewed over 8 million times; at the same time, another version of the clip was viewed over 42,000 times on Instagram. The United Nations has warned that deepfakes and AI-generated media threaten information integrity, particularly in conflict zones where fabricated images or videos can spread hate or misinformation at scale. This concern was realized during Russia's invasion of Ukraine, when a deepfake video circulated online appearing to show Ukrainian President Volodymyr Zelensky urging Ukrainian troops to surrender. Officials quickly debunked the video, and Zelensky later released a message rejecting the claim.
According to Bier, enforcement will rely on several signals, including posts that receive a Community Note identifying the video as AI-generated, along with metadata or other indicators suggesting the footage was produced using generative AI tools. By tying enforcement to monetization, X's policy focuses specifically on the financial incentives creators have to post fake videos that drive clicks and views. "We will continue to refine our policies and product to ensure X can be trusted during these critical moments," Bier wrote.
[7]
X suspends revenue sharing for undisclosed AI war videos
Washington (United States) (AFP) - Social media platform X announced Tuesday it would suspend creators from its revenue sharing program for 90 days if they post AI-generated videos of armed conflicts without disclosing they were artificially made, the company said. The policy change, announced by an executive of the Elon Musk-owned platform, targets what the company described as a threat to information authenticity amid the ongoing war pitting the US and Israel against Iran. "During times of war, it is critical that people have access to authentic information on the ground," X's head of product Nikita Bier said, adding that current AI technologies make it "trivial to create content that can mislead people." X said Monday it would "continue to refine" its policies and product to ensure the platform "can be trusted during these critical moments." The new AI disclosure policy represents a notable pivot for a platform whose approach to content moderation has been heavily criticized since Musk completed his $44 billion acquisition of Twitter -- subsequently rebranded as X -- in October 2022. Since Musk's takeover, X has largely sought to remove its policies against misinformation deeming them censorship. Under the new rules, repeat offenders face permanent suspension from the Creator Revenue Sharing program, which pays eligible users a share of advertising revenue generated by their posts. Violations will be flagged through Community Notes -- the platform's crowd-sourced fact-checking system -- as well as through metadata and other technical signals embedded in AI-generated content.
[8]
X Targets Undisclosed AI Conflict Videos With Revenue Ban
Creators posting AI-generated war footage without disclosure risk losing access to X's revenue-sharing program for three months. Social media platform X will suspend creators from its revenue-sharing program for 90 days if they post artificial intelligence-generated videos depicting armed conflict without clearly disclosing that the content was created with AI. On Wednesday, X's head of product, Nikita Bier, said the rule aims to maintain "authenticity of content on Timeline" during wartime events, when misleading media can spread quickly. "During times of war, it is critical that people have access to authentic information on the ground," Bier wrote. "With today's AI technologies, it is trivial to create content that can mislead people." The move adds financial penalties to X's existing moderation tools, linking disclosure of AI-generated media to monetization eligibility. Unlike traditional moderation measures such as labels or removals, the new rule targets the platform's creator economy by restricting access to revenue-sharing for policy violations. X said creators who publish AI-generated conflict footage must clearly disclose that the content was created with artificial intelligence. Failure to do so could lead to a 90-day suspension from the program. Under the update, posts flagged by Community Notes or detected through metadata or other signals from generative AI tools may trigger enforcement. Accounts that repeatedly post undisclosed AI-generated conflict videos may face permanent removal from X's creator revenue-sharing program. The policy applies specifically to videos depicting armed conflicts and does not amount to a broader ban on AI-generated content posted to the platform.
The announcement comes as geopolitical tensions in the Middle East continue to dominate online discussions across social media platforms. On Feb. 28, the United States and Israel launched joint airstrikes on Iran. Bitcoin (BTC) briefly dropped to about $63,000 but later recovered. At the time of writing, it traded near $70,000, according to CoinGecko. AI is also becoming more deeply embedded in modern conflict environments. On March 1, the US military used Anthropic's Claude AI model to assist with intelligence analysis and targeting during operations linked to the Iran strikes.
[9]
X creators face 90-day pay suspension for unlabeled AI war clips
X head of product Nikita Bier announced a policy change requiring AI labels on AI-generated videos of armed conflicts for creators in the platform's revenue sharing program. The policy targets the authenticity of content during active conflicts, specifically addressing the rapid advancement of AI video generation quality. It applies only to monetized creators and focuses exclusively on armed conflict footage, excluding general AI content or non-monetized accounts. First-time violators will be suspended from revenue sharing for 90 days, Bier stated. Repeat offenders will be permanently removed from the program. Violations will be flagged through Community Notes or by detecting metadata from generative AI tools. The platform already watermarks images and videos generated by its Grok chatbot but has not previously required user disclosure of AI-generated content. Bier cited the need for authentic information "during times of war." He noted the current U.S.-Israel-Iran conflict has not been formally or legally declared a war, and the U.S. has not formally declared war since 1942. X is testing a broader AI labeling toggle to let users mark any post as containing synthetic content. Social Media Today first reported on the feature, though X has not shared a timeline for its release.
[10]
Elon Musk's X has a new creator policy to curb AI misinformation on Iran war. Check details
X, formerly Twitter, is cracking down on undisclosed AI-generated war videos. Creators failing to label synthetic footage depicting armed conflicts risk a 90-day suspension from the platform's revenue-sharing program for a first offense, with repeat offenders facing permanent removal. This move aims to combat misinformation during sensitive times. In a bid to curb misinformation, Elon Musk's social media platform X (formerly Twitter) has announced changes to its creator policy. Aimed at content generated using artificial intelligence, the update focuses on creators who upload AI-generated videos depicting armed conflicts without clearly stating that the footage is artificially created. Under the revised rules, creators who fail to disclose that war-related videos are AI-generated risk losing their ability to earn through the platform. In some cases, they could also face a permanent ban from X's Creator Revenue Sharing programme. Effective immediately, users who post AI-generated videos of armed conflicts without labeling them as AI-made will be suspended from the platform's revenue-sharing programme for 90 days for their first violation. If the same creator repeats the offense, they could be permanently removed from the monetisation programme. The policy update was announced by Nikita Bier, X's head of product, in a post on the platform. Bier said the move is intended to protect the authenticity of information shared online, especially during wartime when misleading content can spread rapidly. According to the platform, undisclosed AI-generated content will be identified using multiple detection methods:
- Community Notes: X's crowdsourced fact-checking feature will help flag misleading or synthetic content.
- Metadata analysis: Technical information embedded within media files will be examined to identify AI-generated material.
- AI detection signals: Additional technical indicators commonly present in generative AI videos will also be used.
The update sends a clear message to the millions of creators who earn through X's monetisation system: any AI-generated footage related to armed conflicts must be clearly labeled. Failure to disclose that such content is AI-made could lead to suspension from the platform's revenue-sharing programme or even permanent removal.
[11]
Elon Musk's X Updates Creator Payout Policy, Targets AI Deepfakes in Wartime
Elon Musk's X (formerly Twitter) has updated its creator payout rules. The news comes as the Middle East conflict between the US, Iran, and Israel escalates. The new rules aim to curtail AI-generated war content as online traffic surges and concerns grow over misleading content. Under the update, creators who post AI-made videos of armed conflicts without proper disclosure will face a 90-day suspension from monetisation. X has revised its Creator Revenue Sharing programme to prevent misuse during wartime. If the same account repeatedly violates the rule, X will permanently remove it from the revenue-sharing programme. The company said such posts may be flagged through Community Notes or identified through metadata and other signals from AI tools. The goal is to stop creators from earning money through misinformation or unclear war footage.
X announced it will temporarily demonetize creators who post AI-generated videos of armed conflict without proper disclosure. The policy targets unlabeled AI-generated war footage that flooded the platform following recent U.S. and Israeli airstrikes in Iran, with fake videos racking up tens of millions of views and spreading misinformation during critical moments.
X announced a significant policy shift targeting creators who post AI-generated content depicting armed conflict without proper disclosure. Nikita Bier, X's head of product, revealed on March 3 that users who share unlabeled AI-generated war footage will face a 90-day suspension from the platform's Creator Revenue Sharing Program [1]. The decision comes as social media platforms, including those operated by Meta, have been flooded with fake battle scenes following the recent Iran conflict [5].

The policy applies specifically to creators enrolled in X's monetization program who post AI-generated videos of armed conflicts. Repeat offenders will face a permanent suspension from the revenue-sharing initiative [2]. Bier emphasized that "during times of war, it is critical that people have access to authentic information on the ground," noting that with today's video generation tools, "it is trivial to create content that can mislead people" [1].

To enforce AI content disclosure requirements, X will rely on a combination of detection methods. The platform plans to identify violations through Community Notes, its crowd-sourced fact-checking system, as well as by detecting metadata and other signals from generative AI tools [3]. This dual approach aims to catch both obvious AI-generated synthetic content and more sophisticated fakes that might slip past casual observation.

The enforcement mechanism will demonetize accounts rather than remove them entirely from the platform. According to the policy, X will suspend violators from earning advertising revenue but won't prevent them from continuing to post [4]. This approach reflects the platform's attempt to balance content authenticity concerns with its commitment to relatively open speech policies under Elon Musk's ownership.

The policy change responds to a massive wave of disinformation that erupted after the United States and Israel launched airstrikes in Iran. One AI-generated video purporting to show Iranian missiles pursuing and shooting down a U.S. jet was viewed 70 million times, according to BBC Verify [5]. Another viral video showed fake missiles slamming into the ground near the Dome of the Rock in Jerusalem, complete with a computer-generated voice saying "Oh my god, here they come" [4].
Fake war footage isn't new to social media, but AI has supercharged the problem [4]. Full Fact, a UK fact-checking organization, noted it is "increasingly seeing AI turbocharge the spread of misinformation on social media," pointing to fake images of aircraft carriers and the Burj Khalifa on fire, as well as fabricated images supposedly showing the body of Ayatollah Ali Khamenei [5].

The policy's narrow focus has drawn attention to significant gaps in X's approach. It applies only to AI-generated videos of armed conflicts, not AI content in general, and only affects creators enrolled in the Creator Revenue Sharing Program [2]. Non-monetized accounts can continue posting unlabeled AI-generated war footage without consequence.

Critics note the policy doesn't address other forms of misleading content. Videos from video games passed off as real combat footage, old war clips misrepresented as current events, and AI-generated political misinformation will remain outside the policy's reach [1][3]. Even supposedly respectable figures have been caught sharing fake content, with Fox News host Bret Baier and Texas Governor Greg Abbott both sharing misleading war videos before deleting their posts [3].

X's Creator Revenue Sharing Program allows users to earn hundreds of dollars monthly by building followings approaching 100,000 people, creating strong incentives for shocking viral posts [5]. Critics argue the program incentivizes sensationalized content, including clickbait and outrage-generating posts, while maintaining lax content controls [1].

The platform is separately testing a broader AI labeling toggle that would let users mark any post as containing synthetic content, though X hasn't shared a timeline for that feature [2]. X already watermarks images and videos generated by its Grok chatbot, but Grok itself has proven unreliable as a fact-checker, with users reporting it confirmed fake videos as authentic [3]. As AI-generated content becomes increasingly difficult to distinguish from authentic footage, the effectiveness of X's limited enforcement approach remains uncertain, particularly as the platform continues to serve as ground zero for misinformation whenever breaking news unfolds.