15 Sources
[1]
X is piloting a program that lets AI chatbots generate Community Notes | TechCrunch
The social platform X will pilot a feature that allows AI chatbots to generate Community Notes. Community Notes is a Twitter-era feature that Elon Musk has expanded under his ownership of the service, now called X. Users who are part of this fact-checking program can contribute comments that add context to certain posts, which are then checked by other users before they appear attached to a post. A Community Note may appear, for example, on a post of an AI-generated video that is not clear about its synthetic origins, or as an addendum to a misleading post from a politician. Notes become public when they achieve consensus between groups that have historically disagreed on past ratings. Community Notes have been successful enough on X to inspire Meta, TikTok, and YouTube to pursue similar initiatives -- Meta eliminated its third-party fact-checking programs altogether in exchange for this low-cost, community-sourced labor. But it remains to be seen whether the use of AI chatbots as fact-checkers will prove helpful or harmful. These AI notes can be generated using X's Grok or by using other AI tools and connecting them to X via an API. Any note that an AI submits will be treated the same as a note submitted by a person, which means that it will go through the same vetting process to encourage accuracy. The use of AI in fact-checking seems dubious, given how common it is for AIs to hallucinate, or make up context that is not based in reality. A paper published this week by researchers working on X Community Notes recommends that humans and LLMs work in tandem. Human feedback can enhance AI note generation through reinforcement learning, with human note raters remaining as a final check before notes are published. "The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better," the paper says. "LLMs and humans can work together in a virtuous loop." Even with human checks, there is still a risk in relying too heavily on AI, especially since users will be able to embed LLMs from third parties. OpenAI's ChatGPT, for example, recently experienced issues with a model being overly sycophantic. If an LLM prioritizes "helpfulness" over accurately completing a fact-check, then the AI-generated comments may end up being flat-out inaccurate. There's also concern that human raters will be overloaded by the volume of AI-generated comments, lowering their motivation to adequately complete this volunteer work. Users shouldn't expect to see AI-generated Community Notes yet -- X plans to test these AI contributions for a few weeks before rolling them out more broadly if they're successful.
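The "consensus between groups that have historically disagreed" criterion is implemented in X's open-source Community Notes scorer as a bridging-based matrix factorization: each rating is modeled as a note intercept plus a rater-viewpoint interaction, and a note surfaces only if its intercept stays high once viewpoint-driven agreement is factored out. The toy sketch below illustrates the idea only; the real scorer, its hyperparameters, and its helpfulness cutoff are far more elaborate, and the learning rate, regularization, and data here are illustrative assumptions.

```python
# Toy sketch of the "bridging" idea behind Community Notes ranking: model each
# rating as a note intercept plus a user-factor x note-factor interaction, so a
# note only keeps a high intercept if raters on *both* sides of the latent
# viewpoint axis mark it helpful. Not X's production code.
import numpy as np

def note_intercepts(ratings, n_users, n_notes, lr=0.05, reg=0.03, epochs=500):
    """ratings: iterable of (user, note, value) with value 1.0 (helpful) or 0.0."""
    rng = np.random.default_rng(0)
    mu = 0.0                                  # global intercept
    b_user = np.zeros(n_users)                # per-user leniency
    b_note = np.zeros(n_notes)                # viewpoint-independent helpfulness
    f_user = rng.normal(0.0, 0.1, n_users)    # latent viewpoint of each rater
    f_note = rng.normal(0.0, 0.1, n_notes)    # latent "side" a note appeals to
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + b_user[u] + b_note[n] + f_user[u] * f_note[n])
            mu        += lr * err
            b_user[u] += lr * (err - reg * b_user[u])
            # Note intercepts are regularized harder than the factors, so
            # agreement explained by shared viewpoint can't inflate "helpfulness".
            b_note[n] += lr * (err - 5 * reg * b_note[n])
            fu = f_user[u]
            f_user[u] += lr * (err * f_note[n] - reg * f_user[u])
            f_note[n] += lr * (err * fu - reg * f_note[n])
    return b_note

# Note 0 is praised by one camp only; note 1 is praised across camps.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 0.0), (3, 0, 0.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 1.0), (3, 1, 1.0)]
scores = note_intercepts(ratings, n_users=4, n_notes=2)
print(scores)  # the cross-camp note (index 1) earns the higher intercept
```

In this toy run the one-camp note's praise is absorbed by the factor term while the cross-camp note keeps a positive intercept, which is why AI-written notes will still need ratings from humans with differing perspectives before they show.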
[2]
X opens up to Community Notes written by AI bots
X is launching a way for developers to create AI bots that can write Community Notes that can potentially appear on posts. Like humans, the "AI Note Writers" will be able to submit a Community Note, but their notes will only actually be shown on a post "if found helpful by people from different perspectives," X says in a post on its Community Notes account. Notes written by AI will be "clearly marked for users" and, to start, "AIs can only write notes on posts where people have requested a note." AI Note Writers must also "earn the ability to write notes," and they can "gain and lose capabilities over time based on how helpful their notes are to people from different perspectives," according to a support page. The AI bots start writing notes in "test mode," and the company says it will "admit a first cohort" of them later this month so that their notes can appear on X. These bots "can help deliver a lot more notes faster with less work, but ultimately the decision on what's helpful enough to show still comes down to humans," X's Keith Coleman tells Bloomberg in an interview. "So we think that combination is incredibly powerful." Coleman says there are "hundreds" of notes published on X each day.
[3]
X Will Let AI Write Community Notes. What Could Go Wrong?
X's Community Notes feature lets people publicly flag inaccurate or misleading tweets, so incorporating AI-generated responses might seem counterintuitive, but that's exactly what the Elon Musk-owned platform is doing. X's new AI Note Writer API allows developers to create bots that can submit Community Notes. It's available in beta right now; X will begin accepting the first set of AI Note Writers later this month, after which AI-generated notes will start appearing on the platform. At first, these bots will only add notes to posts where users have requested a Community Note. The notes will show up only if enough human contributors find them helpful. To tell them apart, AI-written notes will be marked as such. AI Note Writers will be held to the same standards as human contributors. "Like all contributors, they must earn the ability to write notes and can gain and lose capabilities over time based on how helpful their notes are to people from different perspectives," X says. One big difference between AI note writers and human contributors is that the former can't rate notes. "The idea is that AI Note Writers can help humans by proposing notes on misleading content, while humans still decide what's helpful enough to show," X adds. Community Notes was introduced on Twitter in 2022 to fight misinformation. Given X's recent struggles with disinformation, some users might be surprised to see the Elon Musk-led platform consider bots for Community Notes. After all, AI models do have a tendency to hallucinate.
[4]
X will let AI write Community Notes
In what was probably an inevitable conclusion, X has announced that it will allow AI to author Community Notes. With a pilot program beginning today, the social network is releasing developer tools to create AI Note Writers. These AI Note Writers will be limited to penning notes in test mode and will need approval before their notes can be released into the wild. The first AI Note Writers will be accepted later this month, which is when the AI-composed notes will start appearing to users. "Not only does this have the potential to accelerate the speed and scale of Community Notes, rating feedback from the community can help develop AI agents that deliver increasingly accurate, less biased, and broadly helpful information -- a powerful feedback loop," the post announcing this feature said. Sounds great. The AI Note Writers will be vetted by "an open-source, automated note evaluator" that assesses whether the composition is on-topic and whether it would be seen as harassment or abuse. The evaluator's decisions are based "on historical input from Community Notes contributors." Despite the announcement's insistence on "humans still in charge," it seems the only human editorial eye comes from the ratings on notes. Once the AI-written notes are active, they will be labeled as such as a transparency measure. At the start, AI will only be allowed to offer notes on posts where users have requested a Community Note, but the company is positioning AI Note Writers as having a larger future role in this fact-checking system.
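The announcement, as summarized here, says only that the evaluator is open source, checks whether a note is on-topic and non-abusive, and is trained "on historical input from Community Notes contributors." As a loose, hypothetical illustration of such a gate (the classifier, features, and training data below are invented, not X's actual evaluator), one could score a writer's practice notes against past contributor ratings:

```python
# Hypothetical sketch of an automated note evaluator trained on historical
# contributor ratings: a text classifier that estimates whether contributors
# would find a draft note helpful, used as an admission gate for AI Note
# Writers. This is an assumption about the design, not X's actual code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "historical input": past notes and whether raters marked them helpful.
notes = [
    "This video is AI-generated; the original footage is at ...",
    "The author of this post is an idiot.",
    "Claim is misleading: the statistic covers 2019, not 2024. Source: ...",
    "Everyone knows this party always lies.",
]
rated_helpful = [1, 0, 1, 0]

evaluator = make_pipeline(TfidfVectorizer(), LogisticRegression())
evaluator.fit(notes, rated_helpful)

draft = "Context: the quoted figure refers to a different year. Source: ..."
# Admit the AI Note Writer only if its practice notes score well.
print(evaluator.predict_proba([draft])[0][1])
```

A writer would presumably need enough of its practice notes to clear a gate like this to "earn the ability to write notes," per the announcement.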
[5]
Elon Musk's X Is Turning Community Notes Over to AI
Artificial intelligence chatbots are known for regularly offering dubious information and hallucinated details, making them terrible prospects for the role of fact-checker. And yet, Elon Musk’s X (née Twitter) plans to deploy AI agents to help fill in the gaps on the notoriously slow-reacting Community Notes, with the AI-generated notes appearing as soon as this month. What could possibly go wrong? The new model will allow developers to submit AI agents to be reviewed by the company, according to a public announcement. The agents will be tested behind the scenes, made to write practice notes to see how they will perform. If they offer useful information, they'll get the green light to go live on the platform and write notes. Those notes will still have to get the approval of the human reviewers on Community Notes, and they still need to be found useful by people with a variety of viewpoints (how that metric is determined is a bit opaque). The agents developers submit can be powered by any AI model, according to Bloomberg, so they won't be locked into Grok despite the direct ties to Musk (perhaps because Musk simply cannot stop Grok from being woke, no matter how hard he tries). The expectation from the company is that the AI-generated notes will significantly increase the number of notes being published on the platform. They kinda need the AI for that, because human-generated notes have reportedly fallen off a cliff. An NBC News story published last month found that the number of notes published on the platform was cut in half, from 120,000 in January to just 60,000 in May of 2025. There are fewer people submitting notes, fewer people rating them, and fewer notes being displayed. Basically, the engagement with the fact-checking service has collapsed. There are likely a number of factors behind that. For one, the platform is kind of a shit show. A Bloomberg analysis found that it takes about 14 hours to get a note attached to a post with false or misleading information, basically after its primary viral cycle passes. Disagreements among Community Notes contributors have also led to fact-checks failing to get published, and about one in four get pulled after being published due to dissent among raters. That figure gets even higher for actively contentious issues like Russia's invasion of Ukraine, which saw more than 40% of published notes eventually taken down. And then there's the guy who owns the site who, despite actively promoting Community Notes as a big fix for misinformation, has spent more and more time shitting on it. Earlier this year, Musk claimed, without providing evidence, that Community Notes could be gamed by government actors and legacy media, instilling distrust in the entire process. You know what isn't going to make the system harder to game? Unleashing bots on the problem.
[6]
X is letting AI bots fact-check your tweets -- what could possibly go wrong?
Elon Musk's social media platform X is taking initiative when it comes to fighting misinformation: it's giving artificial intelligence the power to write Community Notes, the fact-checking blurbs that add context to viral posts. And while humans still get the final say, this shift could change how truth is policed online. Here's what's happening, and why it matters to anyone who scrolls X (formerly Twitter). X is currently piloting a program that lets AI bots draft Community Notes. Third-party developers can apply to build these bots, and if the AI passes a series of "practice note" tests, it may be allowed to submit real-time fact-checking content to public posts. Human review isn't going away. Before a note appears on a post, it still needs to be rated "helpful" by a diverse group of real users. That's how X's Community Notes system has worked from the start, and it remains in place even with bots in the mix (for now). The goal is speed and scale. Right now, hundreds of human-written notes are published daily. But AI could push that number much higher, especially during major news events when misleading posts spread faster than humans can keep up. Can we trust AI to handle accuracy? Yes, bots can flag misinformation fast, but generative AI is far from perfect. Language models can hallucinate, misinterpret tone, or misquote sources. That's why the human voting layer is so important. Still, if the volume of AI-drafted notes overwhelms reviewers, bad information could slip through. X isn't the only platform using community-based fact-checking. Reddit, Facebook and TikTok have also explored similar systems. But automating the writing of those notes is a first, opening up a bigger question about whether we are ready to hand over our trust to bots. Musk has publicly criticized the system when it clashes with his views. Letting AI into the process raises the stakes: it could supercharge the fight against misinformation, or become a new vector for bias and error. The AI Notes feature is still in testing mode, but X says it could roll out later this month. For this to work, transparency is key, with humans and bots working together. One of the strengths of Community Notes is that they don't feel condescending or corporate. AI could change that. Studies show that Community Notes reduce the spread of misinformation by as much as 60%. But speed has always been a challenge. This hybrid approach -- AI for scale, humans for oversight -- could strike a new balance. X is trying something no other major platform has attempted: scaling context with AI, without (fully) removing the human element. If it succeeds, it could become a new model for how truth is maintained online. If it fails, it could flood the platform with confusing or biased notes. Either way, this is a glimpse into the future of the information in your feed, and it raises the question of how much you can trust AI.
[7]
X is about to let AI fact-check your posts
Not sure whether that video you just saw on X (formerly Twitter) is real or AI-generated? Don't worry, AI will now let you know what's what. Elon Musk's X is running a new pilot program that allows AI chatbots to generate Community Notes on the platform. Community Notes are X/Twitter's version of fact checking, in which people (and now, AI bots) can add context to posts and highlight fake news (or at least dubious information) inside a post. Adweek, which first reported on the news, says that X is doing this to help Community Notes scale. "Our focus has always been on increasing the number of notes getting out there. And we think that AI could be a potentially good way to do this," Keith Coleman, X's VP of product and head of Community Notes, told the outlet. The AI-written notes will get the same treatment as human-written notes, the report claims. They will be rated by humans in order to validate their accuracy, and human users will have to flag them as "helpful" before they're displayed to all X users. The program isn't intended as a replacement for human-written Community Notes. Instead, Coleman says that both types of notes will be "very additive." The pilot officially kicked off on July 1, but notes will start appearing to everyday users in a few weeks. X's Community Notes have been hit and miss for the company so far. Elon Musk himself argued they needed a "fix," and there are indications that he removed some of the Community Notes that appeared under his own posts. The use of Community Notes has fallen off sharply in recent months. The addition of AI-written notes might increase the number of notes shown on X, though it remains to be seen what it'll do to their quality.
[8]
Fears AI factcheckers on X could increase promotion of conspiracy theories
Social media site will use AI chatbots to draft factcheck notes but experts worry they could amplify misinformation
A decision by Elon Musk's X social media platform to enlist artificial intelligence chatbots to draft factchecks risks increasing the promotion of "lies and conspiracy theories", a former UK technology minister has warned. Damian Collins accused Musk's firm of "leaving it to bots to edit the news" after X announced on Tuesday that it would allow large language models to write community notes to clarify or correct contentious posts, before they are approved for publication by users. The notes have previously been written by humans. X said using AI to write factchecking notes -- which sit beneath some X posts -- "advances the state of the art in improving information quality on the internet". Keith Coleman, the vice president of product at X, said humans would review AI-generated notes and the note would appear only if people with a variety of viewpoints found it useful. "We designed this pilot to be AI helping humans, with humans deciding," he said. "We believe this can deliver both high quality and high trust. Additionally we published a paper along with the launch of our pilot, co-authored with professors and researchers from MIT, University of Washington, Harvard and Stanford laying out why this combination of AI and humans is such a promising direction." But Collins said the system was already open to abuse and that AI agents working on community notes could allow "the industrial manipulation of what people see and decide to trust" on the platform, which has about 600 million users. It is the latest pushback against human factcheckers by US tech firms. Last month Google said user-created fact checks, including by professional factchecking organisations, would be deprioritised in its search results. It said such checks were "no longer providing significant additional value for users". In January, Meta announced it was scrapping human factcheckers in the US and would adopt its own community notes system on Instagram, Facebook and Threads. X's research paper outlining its new factchecking system criticised professional factchecking as often slow and limited in scale and said it "lacks trust by large sections of the public". AI-created community notes "have the potential to be faster to produce, less effort to generate, and of high quality", it said. Human and AI-written notes would be submitted into the same pool and X users would vote for which were most useful and should appear on the platform. AI would draft "a neutral well-evidenced summary", the research paper said. Trust in community notes "stems not from who drafts the notes, but from the people that evaluate them," it said. But Andy Dudfield, the head of AI at the UK factchecking organisation Full Fact, said: "These plans risk increasing the already significant burden on human reviewers to check even more draft Notes, opening the door to a worrying and plausible situation in which Notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides." Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, said: "AI can help factcheckers process the huge volumes of claims flowing daily through social media, but much will depend on the quality of safeguards X puts in place against the risk that these AI 'note writers' could hallucinate and amplify misinformation in their outputs.
AI chatbots often struggle with nuance and context, but are good at confidently providing answers that sound persuasive even when untrue. That could be a dangerous combination if not effectively addressed by the platform." Researchers have found that people perceived human-authored community notes as significantly more trustworthy than simple misinformation flags. An analysis of several hundred misleading posts on X in the run-up to last year's presidential election found that in three-quarters of cases, accurate community notes were not being displayed, indicating they were not being upvoted by users. These misleading posts, including claims that Democrats were importing illegal voters and the 2020 presidential election was stolen, amassed more than 2bn views, according to the Centre for Countering Digital Hate.
[9]
Bots writing community notes? X thinks it's a good idea
Elon Musk's X is gearing up to let AI chatbots take a stab at fact-checking, piloting a new feature that allows bots to generate Community Notes. Community Notes, a Twitter-era invention that Musk expanded after buying the platform, lets select users add context to posts -- whether it's clarifying a misleading political claim or pointing out that a viral AI-generated video is, in fact, not real life. Notes only appear after they achieve consensus among users who've historically disagreed on past ratings, a system designed to ensure balance and accuracy. Now, X wants AI in on the action. Keith Coleman, the product executive overseeing Community Notes, told Bloomberg that developers will soon be able to submit their AI agents for approval. The bots will write practice notes behind the scenes, and if they're deemed helpful (and presumably not hallucinating wildly), they'll be promoted to public fact-checker. "They can help deliver a lot more notes faster with less work, but ultimately the decision on what's helpful enough to show still comes down to humans," Coleman said. "So we think that combination is incredibly powerful." Translation: the bots will churn it out, but humans will still be left holding the final 'post' button. The AI agents can use X's own Grok chatbot or any other large language model connected via API. AI's track record for getting facts right is also -- spotty. Models often "hallucinate," confidently making up information that sounds accurate but isn't. Even with human checks, there are concerns that an influx of AI-generated notes will overwhelm volunteer reviewers, making them less effective at catching mistakes. There's also the risk that AI bots will prioritize sounding polite and helpful over actually correcting misinformation. Recent issues with ChatGPT being overly sycophantic illustrate that, yes, bots want to please you -- facts optional. For now, users won't see AI-generated Community Notes just yet. X plans to test the bots quietly behind the scenes for a few weeks before deciding whether to unleash them onto the timeline.
[10]
AI to Write Community Notes for Fact Checking on X
An AI will check for misleading content on the social media platform. X, the social media platform run by Elon Musk, has launched a pilot for an "AI Note Writer," a new API allowing developers to build AI systems that propose fact-checking notes on posts, with final judgment still in human hands. The initiative builds on X's existing Community Notes feature, where crowdsourced fact-checks are surfaced only if users from differing political perspectives rate them as helpful. AI Note Writers follow the same rules: they must earn credibility through helpful contributions and cannot rate others' notes. Their role is limited to proposing context, especially on posts flagged by users requesting notes. "This has the potential to accelerate the speed and scale of Community Notes," the company said on X, emphasising that humans will remain in control of what ultimately gets shown. To participate, developers need to sign up for both the X API and the AI Note Writer API. Each AI Note Writer must pass an admission threshold based on feedback from an open-source evaluator trained on historical contributor data. Only notes from admitted AI writers can be surfaced to the broader community. The company mentions that one can use GitHub Actions and Grok or other third-party LLMs to build the AI Note Writer. At launch, AI-written notes will be marked distinctly and held to the same transparency, quality, and fairness standards as human-written ones. The company also published a supporting research paper co-authored with academics from MIT and the University of Washington, outlining the approach's potential and risks. While the pilot begins with a small group, X says it plans to expand access gradually. The company hopes this experiment creates a feedback loop where AI models improve by learning from human judgment, without replacing it. If successful, this could mark a turning point in how generative AI collaborates with people to reduce online misinformation at scale.
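None of the sources above document the actual endpoints, so the sketch below only mirrors the described flow: sign up, watch for posts where users have requested a note, draft a neutral, well-evidenced note with Grok or another LLM, and submit it into the same pool as human notes. Every URL path, field name, and environment variable here is a placeholder assumption, not the real AI Note Writer API:

```python
# Rough sketch of an AI Note Writer loop, based only on the flow described
# above. All X API paths and field names below are HYPOTHETICAL placeholders;
# consult X's actual AI Note Writer API documentation before building one.
import os
import requests

API_BASE = "https://api.x.com/2"  # placeholder base URL (assumption)
HEADERS = {"Authorization": f"Bearer {os.environ['X_API_TOKEN']}"}

PROMPT = (
    "Write a neutral, well-evidenced Community Note adding context to this "
    "post. Cite sources. If no note is warranted, reply NO_NOTE.\n\nPost: {post}"
)

def draft_note(post_text: str) -> str:
    """Draft a note with an LLM (Grok shown; any third-party model works)."""
    resp = requests.post(
        "https://api.x.ai/v1/chat/completions",  # Grok-style endpoint
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={
            "model": "grok-3",  # assumption: substitute whichever model you use
            "messages": [{"role": "user", "content": PROMPT.format(post=post_text)}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

def main():
    # Hypothetical endpoint: posts where users have requested a Community Note.
    eligible = requests.get(f"{API_BASE}/notes/eligible_posts",
                            headers=HEADERS, timeout=30).json()["data"]
    for post in eligible:
        note = draft_note(post["text"])
        if note == "NO_NOTE":
            continue  # propose context only where there is something to add
        # Submitted notes enter the same pool as human notes; they surface
        # only if raters from differing perspectives mark them helpful.
        requests.post(f"{API_BASE}/notes", headers=HEADERS, timeout=30,
                      json={"post_id": post["id"], "text": note}).raise_for_status()

if __name__ == "__main__":
    main()
```

In the pilot as described, a writer admitted from test mode would run a loop like this on a schedule (X suggests GitHub Actions), with its notes shown only after human raters mark them helpful.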
[11]
Grok now writes Community Notes on X
According to ADWEEK, X is piloting a program allowing AI chatbots to generate Community Notes, a feature expanded under Elon Musk's ownership to add context to posts. This initiative, announced on Tuesday, involves treating AI-generated notes identically to human-submitted notes, requiring them to pass the same vetting process for accuracy. Community Notes, originating from the Twitter era, enables users in a specific fact-checking program to contribute contextual comments to posts. These contributions undergo a consensus process among groups with historically divergent past ratings before becoming publicly visible. For example, a note might clarify that an AI-generated video lacks explicit disclosure of its synthetic origin or provide additional context to a misleading political post. The success of Community Notes on X has influenced other platforms, including Meta, TikTok, and YouTube, to explore similar community-sourced content moderation strategies. Meta notably discontinued its third-party fact-checking programs in favor of this model. The AI notes can be generated using X's proprietary Grok AI or through other AI tools integrated with X via an API. Despite the potential for efficiency, concerns exist regarding the reliability of AI in fact-checking due to the propensity of artificial intelligence models to "hallucinate," or generate information not grounded in reality. A paper published by researchers working on X Community Notes recommends a collaborative approach between humans and large language models (LLMs). This research suggests that human feedback can refine AI note generation through reinforcement learning, with human note raters serving as a final verification step before notes are published. The paper states, "The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better." It further emphasizes, "LLMs and humans can work together in a virtuous loop." Even with human oversight, the reliance on AI carries risks, particularly since users will have the option to embed third-party LLMs. An instance involving OpenAI's ChatGPT demonstrated issues with a model exhibiting overly sycophantic behavior. If an LLM prioritizes "helpfulness" over factual accuracy during a fact-check, the resulting AI-generated comments could be incorrect. Additionally, there is concern that the volume of AI-generated comments could overwhelm human raters, potentially diminishing their motivation to adequately perform their voluntary work. X plans to test these AI contributions for several weeks before a broader rollout, contingent on their successful performance during the pilot phase. Users should not anticipate immediate widespread availability of AI-generated Community Notes.
[12]
X Might Soon Use AI Agents to Fact-Check Posts and Write Community Notes
Humans will reportedly continue to write and vote on notes alongside AI
X (formerly known as Twitter) is reportedly planning to use artificial intelligence (AI) to write Community Notes on the platform. As per the report, the company has asked developers to submit custom AI agents capable of verifying the authenticity of posts and providing neutral explanations about them. This will mark a big shift in the social media platform's fact-checking programme, which so far relied on human users to write and vote for Community Notes. X is reportedly opting for AI agents to increase the scale and speed of fact-checking. Keith Coleman, Vice President of Product at X, and the head of the Community Notes programme, told Bloomberg in an interview that the company was considering the usage of AI. For this, developers have reportedly been given the option to submit their own AI agents for review. The submitted AI agents are said to go through a run of writing practice notes, which will then be reviewed by the company. If the company finds the AI agents to be helpful, they will be deployed to write notes on public posts on X, the report added. Notably, Bloomberg highlights that only human users will be conducting the reviews of published notes, and a note will only appear if multiple people with different viewpoints find it to be helpful. Coleman reportedly stated that AI-written Community Notes could appear later this month. Coleman reportedly called the decision to onboard AI agents to write notes and let humans review them an "incredibly powerful" combination. He also hinted that once AI is involved, the number of Community Notes, which stands at hundreds per day, could significantly increase, the report said. Notably, in 2021, the platform started a fact-checking programme called Birdwatch, where some users were selected as contributors to fact-check public posts and verify their authenticity. Users would also vote on published notes as helpful or unhelpful. The crowdsourced programme was rebranded to Community Notes in November 2022 after Elon Musk took over the company. Since its inception, the service has been managed solely by human volunteers.
[13]
X tests AI bots for fact-checking Community Notes
Community Notes, a user-driven system for adding context to posts, was originally introduced when the platform was known as Twitter. It has been expanded under Elon Musk's ownership. Per X, rating feedback can help develop AI agents that can deliver more accurate, less biased, and broadly helpful information.
Social media platform X has rolled out a pilot programme that lets artificial intelligence (AI) bots contribute to its Community Notes feature, a user-driven system for adding context to posts. From X's Community Notes handle, the company said, "Starting today, the world can create AI Note Writers that can earn the ability to propose Community Notes. Their notes will show on X if found helpful by people from different perspectives -- just like all notes." It added, "Not only does this have the potential to accelerate the speed and scale of Community Notes, rating feedback from the community can help develop AI agents that deliver increasingly accurate, less biased, and broadly helpful information -- a powerful feedback loop." Community Notes, originally introduced when the platform was known as Twitter, has been expanded under Elon Musk's ownership. It allows users to fact-check posts by adding extra context. These notes are only published if other contributors from varied viewpoints rate them as helpful. For instance, a Community Note might clarify that a widely shared video was created using AI or flag misleading claims made by public figures. X's senior executive, Keith Coleman, told Bloomberg that although AI bots can write notes faster, people will still play a key role in the process. "They can help deliver a lot more notes faster with less work, but ultimately the decision on what's helpful enough to show still comes down to humans," he said. How the AI feature works:
* Sign up today and begin developing your AI Note Writer.
* Start writing notes in test mode.
* We'll admit a first cohort of AI Note Writers later this month, which is when AI-written notes can start appearing.
[14]
X is All Set to Allow AI Chatbots to Write Community Notes
This pilot program aims to increase the number of community notes and improve the quality of context added to posts. Community Notes has remained a staple feature on X, which helps verify a post with the help of human contributors. But now, X is planning to give AI chatbots a shot at writing Community Notes, as part of a pilot program that started on Tuesday, July 1st. The reason behind this change seems to be that AI chatbots can scan through a large volume of posts shared on X, even those that don't get enough traction to catch the eye of human contributors, and add relevant context where needed. "Our focus has always been on increasing the number of notes getting out there. And we think that AI could be a potentially good way to do this. Humans don't want to check every single post on X -- they tend to check the high-visibility stuff. But machines could potentially write notes on far more content," said Keith Coleman, X's VP of product and head of Community Notes, in a statement to ADWEEK. These fact-checking AI chatbots can be created using X's Grok AI or OpenAI's ChatGPT via an API. That said, given AI chatbots' tendency to hallucinate, these AI-generated community notes will be vetted by humans before broadly appearing on X. AI Community Notes will also be subject to X's scoring mechanism to reduce inaccuracies. Even Community Notes leaders agree on the use of AI, as mentioned in X's paper published on Monday. The paper discusses how AI can help scale Community Notes on the platform. With humans in the loop, it can retain the same trust among users as it currently holds. X's implementation of Community Notes is a popular method for fact-checking, to the extent that even Meta opted for this system over its third-party fact-checkers. But the inclusion of AI does raise a lot of eyebrows. Not just because it is prone to making up new realities, but because it can be manipulated as well. X's own Grok went into a frenzy, talking about white genocide in every interaction out of nowhere. And there is no saying that the same can't happen with Community Notes. In a world increasingly embracing AI slop, I am personally not a fan of this move. But who knows, maybe AI-written Community Notes will curb the spread of misinformation on X. The pilot program has already started, but it will be a while before you start seeing AI-written community notes on X. It also raises concerns about yet another area where humans could be replaced by their AI counterparts. But Coleman has insisted that this is not the case. Both humans and AI systems will be crucial for Community Notes.
[15]
Community Notes to fact-check: X becoming a playground for AI?
Misinformation meets automation: X hands over context-building to AI in a bot-saturated digital battleground. X, formerly known as Twitter, has quietly begun one of its most significant experiments yet, using AI-generated Community Notes to fact-check viral posts. Community Notes was originally launched as Birdwatch in 2021. It allowed users across ideological lines to collaboratively add helpful context to misleading or incomplete tweets. The system became a flagship feature under Musk's ownership, symbolizing his commitment to "free speech through context, not censorship." But now, that context may no longer come from humans. This move marks a turning point from a decentralized system powered by transparency and collaboration to one increasingly dependent on machines that few users truly understand or can scrutinize. What was once a proudly human, crowdsourced effort to combat misinformation is now being handed over, at least in part, to machines. Community Notes was introduced as a way to crowdsource truth. Users across ideologies could propose notes, and only when those notes were rated helpful by a politically diverse set of contributors would they be published. Now, X has rolled out a system where AI bots can automatically write suggested notes when users request context. These AI-written notes are reviewed and rated by human contributors before going live. However, the drafting process is no longer human-first. X claims this will allow it to scale fact-checking, generate context faster, and improve coverage on viral or misleading content. Bots never sleep, and in theory they can keep pace with the platform's nonstop flow of misinformation. Here's the problem: Community Notes was never meant to be fast. It was slow because it prioritized deliberation, cross-ideological input, and trust. People trusted it precisely because it wasn't algorithmic; it was written by real users with transparent motivations and visible consensus. By letting bots write the first draft, X is shifting from collaborative fact-checking to automated annotation, which raises serious concerns about nuance, accuracy, and bias. AI systems can repeat existing patterns, misinterpret context, or introduce subtle distortions. And with no public insight into how these AI bots are trained or evaluated, users are being asked to trust machines that they cannot see. Many users already rely on Grok to fact-check posts, summarize trending discussions, or add context that Community Notes hasn't caught up with. But Grok, like any large language model, is not immune to hallucinations, outdated info, or biased outputs, especially when it pulls from a firehose of unreliable user content. Unlike Community Notes, Grok's responses are not subject to peer review or transparency around source data. This means that X will have two different AI systems: Community Notes bots and Grok. One writes fact-checks while the other acts as a real-time arbiter of truth, and neither is fully accountable. It's important to understand what environment these AI fact-checkers are operating in. Since Musk's takeover in late 2022, bot activity and hate speech on the platform have surged dramatically. A 2023 analysis by CHEQ found that invalid traffic (including bots and fake users) jumped from 6% to 14% post-acquisition. The same report noted a 177% increase in bot-driven interactions on paid links.
Meanwhile, hate speech reports rose by over 300% in some categories, according to a 2023 Center for Countering Digital Hate study. That matters, because Community Notes was supposed to be the platform's solution to misinformation and manipulation, a user-powered defense. But now it's being overwhelmed from both sides: malicious bots flooding timelines and "helpful" bots trying to clean it up. The most important part of any fact-check is judgment, something AI still lacks. Context, tone, satire, cultural nuance, and evolving political narratives are things machines struggle to interpret. An AI-written note might get the surface-level fact correct while missing the larger misdirection entirely. Even if human review remains in place, the very act of outsourcing the writing of notes to bots signals a shift away from collective human responsibility. And if these AI systems fail, the blame will be as diffuse and faceless as the code behind them. X has become a platform increasingly defined by automation. After cutting thousands of staff, including much of its trust and safety team, the company has leaned heavily on algorithms -- for moderation, for recommendation, and now for truth itself. Community Notes was once a rare example of a social feature built on trust, slowness, and human credibility. By turning it into an AI tool, X is making the system faster, but possibly hollowing it out in the process. There's a reason fact-checking requires people. Facts are easy; context is hard. And when machines are allowed to define truth, especially in a digital ecosystem already flooded with bots and hate speech, the very idea of trustworthy context begins to erode. Faster is not always better. If truth is delegated to AI, and oversight is scaled down to match, what happens to public trust? X may soon find that in the rush to automate truth, it has automated away credibility.
X, formerly Twitter, is launching a pilot program allowing AI chatbots to generate Community Notes, raising questions about the future of digital fact-checking and the role of artificial intelligence in content moderation.
X, the social media platform formerly known as Twitter, is piloting a groundbreaking program that allows AI chatbots to generate Community Notes, a feature designed to provide context and fact-checking for posts on the platform [1]. This move marks a significant shift in the approach to content moderation and fact-checking on social media platforms.
The new system, dubbed "AI Note Writers," will enable developers to create AI bots that can submit Community Notes [2]. These AI-generated notes will be subject to the same vetting process as human-contributed notes, requiring approval from users with diverse perspectives before being published [1]. X plans to clearly label AI-written notes for transparency [2].
Initially, AI Note Writers will only be allowed to add notes to posts where users have specifically requested a Community Note [3]. The AI bots must "earn the ability to write notes" and can gain or lose capabilities based on the helpfulness of their contributions [2].
Proponents of the system argue that AI Note Writers could significantly increase the speed and scale of Community Notes [4]. Keith Coleman, an X executive, suggests that the combination of AI-generated notes and human oversight could be "incredibly powerful" [2].
However, the introduction of AI into fact-checking has raised concerns among experts and users:
AI Hallucinations: Critics point out that AI models are prone to hallucinations, potentially generating inaccurate or misleading information [1][3].
Overloading Human Moderators: There's worry that the influx of AI-generated notes could overwhelm human raters, potentially reducing the quality of moderation [1].
Bias and Accuracy: Questions remain about how to ensure AI-generated notes remain unbiased and accurate, especially when dealing with contentious topics [5].
X's move comes at a time when engagement with human-generated Community Notes has reportedly declined [5]. The platform hopes that introducing AI will reinvigorate the fact-checking system and address issues such as slow response times to misinformation [5].
This initiative also reflects a broader trend in social media moderation. Other platforms like Meta, TikTok, and YouTube have been inspired by X's Community Notes to pursue similar community-driven fact-checking initiatives [1].
As X prepares to accept its first cohort of AI Note Writers later this month [2], the tech community watches closely. The success or failure of this program could have far-reaching implications for the future of content moderation and the role of AI in shaping online discourse.
While the potential for faster, more scalable fact-checking is appealing, the challenge lies in striking the right balance between AI efficiency and human oversight. As one research paper suggests, the goal should be to create "an ecosystem that empowers humans to think more critically and understand the world better" [1], rather than relying solely on AI to determine truth and context in the complex world of social media.