2 Sources
[1]
Fix AI's data theft problem with onchain attribution
Opinion by: Ram Kumar, core contributor at OpenLedger

The public has contributed to the rise of artificial intelligence, often without realizing it. As AI models are projected to generate trillions of dollars in value, it's time to start treating data like labor and building onchain attribution systems to pay the people making it possible.

X posts by users helped train ChatGPT, and their blog posts and forum replies shaped models that are now monetized by some of the most powerful companies in the world. While those companies are reaping billions, the end users get nothing. Not a check, a credit or even a thank you.

This is what invisible labor looks like in the 21st century. Billions of people have become the unpaid workforce behind the AI revolution. The data they generate, from words and code to faces and movement, is scraped, cleaned and used to teach machines how to sound more human, sell more ads and close more trades. And yet, in the economic loop that powers AI, the humans who make it all possible have been cut out entirely.

This story is not new. The same model built empires on the backs of uncredited creative labor. Only now, the scale is planetary. This isn't just about fairness; it's about power, and whether we want a future where intelligence is owned by three corporations or shared by all of us.

The only way to redefine the economics of intelligence is through Payable AI. Instead of black-box models trained in secret, Payable AI proposes a future where AI is built openly, with every contributor traceable and every use compensated. Every post, video or image used to train a model should carry a tag or a digital receipt. Every time that model is used, a small payment should be sent to the data's original creator. That's attribution, baked into the system.

This has precedent. Musicians earn royalties when their tracks stream, and developers get credited when their open-source code is reused. AI should follow the same rules. Just because training data is digital doesn't mean it's free. If anything, it's the most valuable commodity we have left.

The problem is that we've been treating AI like traditional software: something you build once and sell a million times. That metaphor falls apart fast. AI isn't static. It learns, decays and improves with every interaction, weakening when data dries up. In this way, AI is more like a living ecosystem feeding on a continuous supply of human input, from language and behavior to creativity. Yet there's no system to account for that supply chain and no mechanism to reward those who nourish it.

Payable AI creates a circular economy of knowledge: an economic structure where participation equals ownership and every interaction has traceable value.

In a few years, autonomous AI agents will be everywhere: booking services, negotiating contracts and running businesses. These agents will be transacting, and they'll need wallets. They will also need access to fine-tuned models and must pay for data sets, APIs and human guidance. We are headed toward machine-to-machine commerce, and the infrastructure isn't ready. The world needs a system to track what an agent used, where that intelligence came from and who deserves to be paid. Without it, the entire AI ecosystem becomes a black market of stolen insights and untraceable decisions.
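The receipt-and-royalty mechanism described above can be made concrete with a small sketch: each training item is tagged with a content hash tied to its creator's wallet, and each model call splits a micro-fee across the receipts it drew on. This is a toy illustration under assumed names and prices, not OpenLedger's or any real protocol's design:

```python
"""Toy sketch of per-use attribution payouts: training data carries a
'digital receipt' (a content hash tied to a creator wallet), and each
model call routes a micro-fee back to the data's creators. All names
and rates here are hypothetical."""

from collections import defaultdict
from dataclasses import dataclass, field
from hashlib import sha256


@dataclass
class AttributionRegistry:
    # content hash -> creator wallet address (the "digital receipt")
    receipts: dict = field(default_factory=dict)
    balances: dict = field(default_factory=lambda: defaultdict(float))

    def register(self, content: bytes, creator_wallet: str) -> str:
        """Tag a piece of training data with its creator before ingestion."""
        tag = sha256(content).hexdigest()
        self.receipts[tag] = creator_wallet
        return tag

    def settle_inference(self, used_tags: list[str], fee: float) -> None:
        """Split a per-call fee evenly across the data that informed the call."""
        attributed = [t for t in used_tags if t in self.receipts]
        if not attributed:
            return
        share = fee / len(attributed)
        for tag in attributed:
            self.balances[self.receipts[tag]] += share


registry = AttributionRegistry()
tag = registry.register(b"a user's blog post", creator_wallet="0xCREATOR")
registry.settle_inference([tag], fee=0.001)  # one inference, one micro-payment
print(dict(registry.balances))  # {'0xCREATOR': 0.001}
```

In a production system the registry and balances would live onchain rather than in memory, and the hard problem (which the column leaves open) is deciding which receipts a given inference actually "used."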
Today's complicated problems with AI pale in comparison to autonomous agents acting on people's behalf with no way to audit where their "intelligence" came from.

The deeper issue, though, is control. Companies like OpenAI, Meta and Google are building models that will power everything from education to defense to economic forecasting. Increasingly, they own the terrain. And governments, whether in Washington, Brussels or Beijing, are rushing to catch up. xAI is being integrated into Telegram, and messaging, identity and crypto are increasingly merging.

We have a choice. We can continue down this path of consolidation, where intelligence is shaped and governed by a handful of platforms. Or we can build something more equitable: an open system where models are transparent, attribution is automatic and value flows back to the people who made it possible.

That will require more than new terms of service. It will demand new rights: the right to attribution, the right to compensation and the right to audit the systems built on our data. It will require new infrastructure, including wallets, identity layers and permission systems, that treats data not as exhaust but as labor. It will also demand a legal framework that recognizes what's happening: people are creating value, and that value deserves recognition.

Right now, the world is working for free. But not for long. Once people understand what they've given, they'll ask what they're owed. The question is: Will we have a system ready to pay them?

We're risking a future where the most powerful force on Earth, intelligence itself, is privatized, unaccountable and entirely beyond our reach. We can build something better. First, we have to admit the current system is broken.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
[2]
Can We Prevent AI From Becoming Another Web2-style Monopoly?
For all its promises, AI today faces a deepening crisis of trust. The tools we now use daily, from ChatGPT and Midjourney to AI-powered medical assistants and financial copilots, are trained on data collected largely without consent, built behind closed doors, and governed by entities with little transparency and no obligation to share rewards.

Trust in these systems is eroding rapidly, and rightly so. According to Edelman's 2024 Trust Barometer, global trust in AI companies has fallen to 53%, down from 61% just five years ago. In the U.S., it has plummeted to 35%. A June Reuters survey showed that over half of U.S. respondents don't trust AI-generated news. And yet the technology barrels forward, embedding itself into everything from legal advice and education to content moderation and healthcare. The problem isn't AI itself, but the extractive system behind it.

The real problem

At the heart of the AI trust problem is a missing economic layer that tracks who contributes to these models, who benefits, and how decisions are made. Traditional AI companies operate on opaque pipelines: public data is ingested silently, human labor is hidden, and model outcomes are treated as black-box results. Billions of dollars are made off the backs of contributors who never see a cent. We see the consequences in lawsuits against OpenAI and Google over unauthorized training data, the New York Times' legal battle over scraped content, growing concerns about misinformation and bias, and the unchecked centralization of power in a few AI labs.

Blockchain can do what AI alone can't

AI's rapid ascent has brought immense capabilities, but also glaring gaps in how value is attributed, how decisions are audited, and how contributors are rewarded. This is where blockchain can act as a true corrective force, through Payable AI, a new framework that embeds attribution, accountability, and rewards directly into the AI development lifecycle. Whether someone is labeling data or fine-tuning outputs, they can be recognized and compensated transparently through smart contracts and on-chain proofs.

A key innovation enabling this is Proof of Attribution, a method that verifiably traces each step in a model's evolution back to its source. Every dataset, every tweak, every contribution becomes part of a transparent, auditable ledger. Think of it as an open-source version of Scale AI, except that instead of fueling proprietary systems for Big Tech, it unlocks public data pipelines where value flows to the many, not the few.

Why now

As AI agents grow more autonomous, embed themselves in consumer apps like Telegram (via xAI and TON), and start generating revenue, we must ask: who gets paid? Currently, there's no equivalent of AWS for data, no SaaS model that enables individuals to upload, attribute, and monetize their contributions to AI. That's the missing layer. And in this moment of rising economic pressure and growing distrust in tech, the need for infrastructure that bakes in fairness is urgent.

The Web3 alternative to closed AI

Companies like Meta pour billions into centralized pipelines, controlling everything from raw data to model deployment. Meta's $15B isn't just a bet on labeling, but on controlling the entire AI value chain. The future demands systems where value flows back to those who create it.
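A hash-chained ledger is the simplest way to picture the Proof of Attribution idea: each dataset, tweak, or fine-tune is appended as an entry whose hash commits to everything before it, so a model version can be audited back to its sources. The sketch below is a bare-bones illustration of that structure with invented names; it is not OpenLedger's actual implementation:

```python
"""Illustrative hash-chained ledger for the 'Proof of Attribution' idea:
every contribution is appended as an entry whose hash commits to all
prior entries, so provenance can be audited end to end. A simplification
for illustration, not a real protocol."""

import json
from hashlib import sha256


class AttributionLedger:
    def __init__(self):
        self.chain = []  # entries: {contributor, action, payload_hash, prev, hash}

    def append(self, contributor: str, action: str, payload: bytes) -> str:
        prev = self.chain[-1]["hash"] if self.chain else "genesis"
        entry = {
            "contributor": contributor,
            "action": action,  # e.g. "dataset", "fine-tune"
            "payload_hash": sha256(payload).hexdigest(),
            "prev": prev,
        }
        entry["hash"] = sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        prev = "genesis"
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev:
                return False
            if sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


ledger = AttributionLedger()
ledger.append("alice", "dataset", b"labeled medical images")
ledger.append("bob", "fine-tune", b"adapter weights v2")
assert ledger.verify()  # the model's lineage is auditable back to its sources
```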
If we don't act now, AI will follow the same trajectory as Web2, with a handful of giants extracting disproportionate value while everyone else watches from the sidelines. AI is no longer experimental; it's powering the systems we rely on daily. And yet the foundational layers remain closed, biased, and opaque. Blockchain can fix that, introducing the verifiability, traceability and built-in economic fairness that AI systems inherently lack. The next phase of AI will be about credibility, and that starts by building trust into the infrastructure of intelligence itself.
As AI technology advances, concerns about data attribution, fairness, and monopolization grow. Blockchain-based solutions like Payable AI are proposed to create a more equitable and transparent AI ecosystem.
As artificial intelligence (AI) continues to advance at a rapid pace, concerns about data attribution, fairness, and potential monopolization are becoming increasingly prominent. Two recent opinion pieces highlight the need for a more equitable and transparent AI ecosystem, proposing blockchain-based solutions like Payable AI to address these issues [1][2].
AI models, such as ChatGPT, are being trained on vast amounts of data, often collected without explicit consent from users. This has led to a situation where major tech companies are reaping billions in profits while the individuals who unknowingly contributed to these models receive no compensation or recognition [1].
Ram Kumar, a core contributor at OpenLedger, argues that this scenario represents "invisible labor" on a global scale, with billions of people becoming an unpaid workforce behind the AI revolution [1]. The lack of attribution and compensation for data contributors is not only a matter of fairness but also raises questions about power dynamics in the AI industry.
The opaque nature of AI development has resulted in a significant erosion of trust. According to Edelman's 2024 Trust Barometer, global trust in AI companies has fallen to 53%, down from 61% five years ago. In the United States, this figure has plummeted to 35% [2]. This decline in trust comes at a time when AI is becoming increasingly integrated into critical sectors such as healthcare, education, and finance.
To address these concerns, both opinion pieces advocate for the implementation of Payable AI, a framework that would embed attribution, accountability, and rewards directly into the AI development lifecycle [1][2]. This system would utilize blockchain technology to create a transparent and auditable ledger of contributions to AI models.
Key features of the proposed Payable AI system include:

- A tag or digital receipt attached to every post, video, or image used to train a model [1]
- A small payment routed to the data's original creator each time the model is used [1]
- Proof of Attribution, which verifiably traces each step in a model's evolution back to its source [2]
- Smart contracts and on-chain proofs that transparently compensate contributors such as data labelers and fine-tuners [2]
Blockchain technology is presented as a crucial component in creating a more equitable AI ecosystem. It can provide the missing economic layer that tracks who contributes to AI models, who benefits, and how decisions are made [2]. This transparency could help prevent the centralization of power in a few AI labs and ensure that value flows back to those who create it.
As AI agents become more autonomous and start generating revenue, the need for fair attribution and compensation becomes increasingly urgent. Without a system like Payable AI, there is a risk that AI development will follow the same trajectory as Web2, with a handful of giants extracting disproportionate value [2].
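To see what the machine-to-machine commerce both pieces describe could look like in practice, here is a hypothetical sketch of an autonomous agent paying per call for access to a fine-tuned model, with the fee then available for downstream attribution. The names, prices, and wallet interface are invented for illustration:

```python
"""Hypothetical sketch of machine-to-machine commerce: an autonomous
agent holds a wallet and pays a per-call fee for model access, producing
a receipt the provider could split among data contributors. All names
and prices are invented."""

from dataclasses import dataclass


@dataclass
class AgentWallet:
    address: str
    balance: float

    def pay(self, to: str, amount: float) -> dict:
        """Debit the wallet and return a payment receipt."""
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        # a real agent would sign and broadcast this transaction onchain
        return {"from": self.address, "to": to, "amount": amount}


PRICE_PER_QUERY = 0.002  # hypothetical per-call price for a fine-tuned model


def call_model(wallet: AgentWallet, provider: str, prompt: str) -> str:
    receipt = wallet.pay(provider, PRICE_PER_QUERY)
    # the provider could route shares of receipt["amount"] to data
    # contributors via an attribution registry like the earlier sketch
    return f"response to {prompt!r} (paid {receipt['amount']})"


agent = AgentWallet(address="0xAGENT", balance=1.0)
print(call_model(agent, provider="0xMODEL", prompt="negotiate this contract"))
```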
The implementation of such a system would require new rights, infrastructure, and legal frameworks, including the right to attribution, the right to compensation, and the right to audit systems built on personal data [1].
As the AI industry continues to evolve, the call for more transparent and equitable systems grows louder. The proposed Payable AI framework, leveraging blockchain technology, offers a potential solution to address the current shortcomings in AI development and ensure a fairer distribution of benefits in the AI-driven future.
Summarized by Navi