6 Sources
[1]
Government Wants AI Companies to Pay Rightsholders for Copyrighted Content
DPIIT's draft is open for public and stakeholder consultation for 30 days.

The Department for Promotion of Industry and Internal Trade (DPIIT), under the Ministry of Commerce & Industry, has proposed new recommendations to tackle the copyright issues involving artificial intelligence (AI) models. The government committee suggests implementing a blanket licence for all AI companies that are developing in-house commercial models, such as Google, OpenAI, Anthropic, and others. This would require the companies to make a flat payment to rightsholders whenever their AI models are trained on copyrighted content. DPIIT highlights that this measure will both protect content creators and eliminate the legal ambiguities around fair use.

DPIIT Proposes Blanket Licence for AI Companies

A 125-page working paper, titled "One Nation One Licence One Payment: Balancing AI Innovation and Copyright," was published by DPIIT on Tuesday. The government body highlighted that this is just Part 1 of the paper, which examines "the intersection of generative artificial intelligence (AI) and copyright law." The core recommendation in this paper is a mandatory blanket licence, meaning AI firms would not need individual agreements with every content owner, and a centralised payment mechanism to compensate rightsholders when their work is used. The eight-member committee convened by DPIIT rejects unfettered access to copyrighted data for AI training without compensation. It argues that allowing free, unrestricted use of copyrighted content would erode incentives for human creators, including authors, artists, journalists and other rightsholders, which could damage the creative ecosystem over time. Instead, the working paper recommends a hybrid licensing model in which AI developers get a blanket licence to use any lawfully accessed copyrighted works for training AI models, without negotiating individually with copyright owners.
The paper also recommends creating a system for collection and disbursement of the royalties owed to rightsholders. The royalties would be paid only when the AI models trained on the data are commercialised, rather than on every use of content. Additionally, a new centralised body, tentatively named the Copyright Royalties Collective for AI Training (CRCAT), would collect royalties from AI companies and distribute them to creators, including those not currently part of formal collective-management organisations (CMOs), DPIIT suggests. The paper also mentions that royalty rates would be fixed by a government-appointed committee. Moreover, the paper proposes that royalties should be retroactive, meaning firms that have already used Indian copyrighted works to train models for commercial deployment would also be liable for payment under the new regime. The draft also rejects alternative approaches such as a "zero-price licence" or an "opt-out" text-and-data-mining exception (under which content owners would need to individually opt out to prevent their work from being used). Committee members said both these approaches create unfair burdens on creators, especially those from smaller organisations or without the resources to police AI datasets. DPIIT has opened the draft for public consultation, inviting feedback from stakeholders over the next 30 days.
[2]
DPIIT panel may release 2nd paper on copyrightability of AI generated content in two months
New Delhi: The DPIIT committee on the intersection of artificial intelligence and copyright is expected to release its second working paper, on the copyrightability of AI-generated content, in about two months, a senior government official said on Thursday. The committee's first paper was released on December 8, in which it proposed giving a mandatory blanket licence to artificial intelligence developers for using all legally accessed copyright-protected works to train AI systems. However, the licence should be accompanied by a statutory remuneration right for copyright holders, according to the committee's recommendation. The Department for Promotion of Industry and Internal Trade (DPIIT) has sought stakeholders' views on this paper. Recognising the growing need for deliberations on emerging issues pertaining to AI systems and copyright, the DPIIT formed a committee on April 28, 2025. The eight-member panel was headed by Himani Pande, additional secretary in the department, and also includes legal experts and representatives from industry and academia. It was tasked with identifying the issues raised by AI systems, examining the existing regulatory framework, evaluating whether that framework adequately addresses the issues raised by this new technology or whether amendments to the law are required, and preparing a working paper outlining its analysis and recommendations for consultation with stakeholders. The second paper will be on the "copyrightability of AI generated content, and its authorship. How transformative AI work is," Pande told reporters. According to the first paper, generative AI has immense potential to transform the world for the better, underscoring the need for a regulatory environment that supports its development.
However, the processes by which AI systems are trained, often using copyrighted materials without authorisation from copyright holders, and the nature of the outputs that they generate have sparked an important debate around copyright law. The paper said that the central challenge lies in how to protect the copyright in the underlying human-created works without stifling technological advancement.
[3]
Govt committee proposes mandatory blanket licence for AI training
Under the proposed model, while creators lose the ability to opt out of AI training altogether, they receive a statutory right to remuneration through a new central royalty-collection body. A committee formed under the Department for Promotion of Industry and Internal Trade (DPIIT) has proposed a mandatory blanket licence that would allow artificial intelligence (AI) developers to train their models on any copyrighted work they can access legally, without seeking individual permission from creators.
[4]
India Proposes CRCAT for AI Training Royalties
India's new copyright working paper proposes setting up a central organisation called the Copyright Royalties Collective for AI Training (CRCAT). This body would run licensing and royalty governance for AI developers. If implemented, CRCAT becomes the single gatekeeper that enforces rates, ensures compliance, and distributes payments to creators whose work trains generative AI models. For context, CRCAT sits at the centre of the Department for Promotion of Industry and Internal Trade (DPIIT) committee's proposed mandatory licensing model. Developers would gain automatic rights to use any lawfully accessed copyrighted works for training, while creators receive revenue-linked royalties. Notably, rights holders cannot opt out. The system aims to simplify licensing by preventing developers from negotiating with thousands of creators. However, the proposal concentrates operational power in a single organisation that does not yet exist and depends heavily on copyright societies and collective management organisations (CMOs) that vary widely across sectors. Additionally, the framework assumes new CMOs will emerge over time to cover categories that currently have no collective representation. Here's a closer look at how CRCAT is structured, who participates, and how it would work in practice. CRCAT would function as a nonprofit designated by the central government under the Copyright Act. Only one organisation can represent each class of copyrighted works. That representative must be either a registered copyright society under Section 33 of the Copyright Act or a newly formed nonprofit collective management organisation. Sectors without either structure would send government-nominated representatives to the CRCAT board until they form their own CMO. The governing board includes one representative from each member organisation and temporary representatives from unorganised sectors. 
The design aims to ensure that every creative category eventually holds a seat in royalty calculations. CRCAT handles four core functions: collecting royalties from AI developers, distributing funds to CMOs and societies, enforcing compliance, and operating the Works Database that determines payout eligibility. The framework positions CRCAT as the administrative funnel through which all money, data, and compliance flow. Developers never pay creators directly, and creators never negotiate with developers. A government-appointed Rate Setting Committee, not CRCAT, decides royalty rates. The committee includes senior officials, legal and financial experts, technical experts, one representative from CRCAT, and one from AI developers. It reviews rates every three years. The committee rejects granular valuation methods, such as per-use accounting, because AI developers cannot trace how individual works influence outputs. Instead, it proposes a flat percentage of global revenue earned from commercialising the AI system. Developers owe nothing during training. They begin paying only once the model starts making money. Furthermore, the obligation applies retroactively, which means any company that has already trained on copyrighted content owes royalties once this system takes effect. Developers must file a Training Data Disclosure Form summarising broad categories of content used during training. They must disclose the class and subcategory under Section 14, source, and general nature of content. They do not need to list specific works, datasets, or URLs. The committee argues that AI systems cannot reliably capture granular attribution and that detailed disclosures would expose proprietary processes. The disclosure therefore serves two narrow functions: verifying lawful access and helping CMOs divide royalty pools. This approach keeps compliance light but limits creators' visibility into how their work appears in the training pipeline. 
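The revenue-linked trigger described here (no payment during training, a flat percentage of commercialisation revenue afterwards, and a pool split among member CMOs) can be illustrated with a small sketch. All rates, revenue figures, CMO names, and shares below are hypothetical illustrations, not values from the working paper:

```python
# Hypothetical sketch of the revenue-linked royalty model the paper describes.
# The rate, revenue figures, and CMO shares are illustrative assumptions only.

def annual_royalty(global_revenue: float, rate: float) -> float:
    """Royalty owed: a flat percentage of revenue from commercialising the model.

    Nothing is owed while the model earns no revenue (i.e. during training).
    """
    if global_revenue <= 0:
        return 0.0
    return global_revenue * rate

def distribute(pool: float, cmo_shares: dict[str, float]) -> dict[str, float]:
    """Split a collected royalty pool across member CMOs by their shares."""
    total = sum(cmo_shares.values())
    return {name: pool * share / total for name, share in cmo_shares.items()}

# A developer still in training owes nothing; a commercial model pays the rate.
assert annual_royalty(0, 0.02) == 0.0
pool = annual_royalty(1_000_000, 0.02)  # illustrative 2% of global revenue
payouts = distribute(pool, {"text_cmo": 0.5, "music_cmo": 0.25, "image_cmo": 0.25})
```

How each CMO then divides its share among registered creators (equally, or weighted by licensing history, citations, audience metrics, or awards) is left to the CMO's own distribution policy, per the paper.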
Creators must register their works in a sector-specific Works Database. Only registered works receive payment, even though unregistered works can still be used in training. Each CMO controls its own distribution policy. It may distribute royalties equally across all registrants or use a value-based method relying on licensing history, citations, audience metrics, or awards. This mirrors how societies handle unlogged royalties. The paper anticipates fraud, duplicate works, and incomplete metadata. It expects CMOs to adopt verification tools such as fingerprinting or watermarking. Additionally, CRCAT will hold royalties for unorganised sectors for three years. If no CMO forms within that period, those funds move into a welfare pool. The framework does not create a new tribunal. Instead, it strengthens mechanisms already present in India's copyright ecosystem and adds AI-specific triggers. The first layer sits within CRCAT. Each CMO or society must run a grievance cell to handle disputes about distribution, categorisation, ownership, or non-payment. Clear internal rules are meant to resolve most disputes early. The second layer activates when disagreements involve rate-setting or methodology. Courts can review royalty rates through judicial review. The third layer governs infringement and false-declaration disputes. If a creator challenges a developer's claim that training relied only on proprietary or licensed data, the burden shifts to the developer. Courts decide whether the claim stands. This logic mirrors standard essential patent litigation, where a party's willingness to license influences judicial outcomes. On penalties, the report keeps injunctions available. The committee notes that injunctions may become rare once a licensing mechanism exists but concludes that restricting remedies now would be premature. Courts will weigh injunctions case by case, including whether developers comply with registration, disclosure, and royalty rules. 
In practice, if a developer engages with the licensing system, courts may lean toward compensation rather than stopping a deployed model. CRCAT assumes a level of institutional readiness that many creative sectors currently lack. Journalism, regional publishing, stock images, memes, gaming, and social media creators do not have registered copyright societies under Section 33 or functioning collective management organisations. Under the proposed model, these sectors would have to create CMOs from scratch before they can become members of CRCAT, register works for royalty eligibility, or participate in governance. The revenue-based model also raises structural questions. Furthermore, the proposal creates a deeper policy tension: it aims to compensate creators yet removes their ability to refuse inclusion. The framework therefore treats access as the default and redefines consent as compensation, a shift that places AI development goals ahead of individual control.
[5]
DPIIT Rejects Fair Dealing Fix in Copyright Law for AI Training
Amid the ongoing debate on whether the "fair dealing" exception on copyright infringement can be applied to the process of training AI models, a government committee has said that amending existing law will not help adequately protect the rights of content creators, nor will it reduce the legal exposure of AI developers. "Amending the fair dealing provision under Section 52(1)(a) of the Copyright Act, 1957, will neither help strike a balance, nor will it effectively address the legal exposure of AI developers under copyright law," the committee under the Department for Promotion of Industry and Internal Trade (DPIIT) said. This comes at a time when the intersection of AI and copyright law has become a focal point of legal discourse. In the age of AI, one of the most pressing issues is whether AI-generated works can be protected under copyright law. Explaining the rationale behind its argument, the committee said that the fair dealing provision is primarily an "exception" to copyright infringement, in the nature of a defence and not an enabling provision such as the exclusive rights provision of copyright owners under Section 14. "From a jurisprudential point of view, intellectual property rights work as a two-way sword. On the one hand, there is a growing awareness that such protection is a sine qua non of the motivational factor underlying the creation of an intellectual work; however, on the other hand, granting an absolute protection to the intellectual work can be detrimental to the further progress of humanity," it argued. Furthermore, protection under the fair dealing provision is contingent upon meeting specific criteria. First, the burden of proof lies with the person or entity seeking protection under the copyright law.
Secondly, assuming that the first requirement is met, the AI developer must also show that it only used the copyrighted material for specific purposes mentioned in the aforementioned clause, such as research, criticism or review, the reporting of current events and current affairs, among others. "As a consequence, any legislative amendment to the fair dealing provision to facilitate AI training will not reduce the legal uncertainty faced by AI developers," the committee said. A committee under the Department for Promotion of Industry and Internal Trade (DPIIT) released a part of a working paper on the interaction of AI and copyright on Monday. In its assessment, the committee found that content creators globally are demanding that the use of copyrighted material for training AI systems should be subject to "consent and compensation". Further, they argue that allowing AI training on copyrighted works "without permission or fair remuneration poses an existential threat" to the creative industries. On the other hand, the tech industry claims that training AI systems on copyrighted materials is "fair use" and should be exempted from copyright infringement, the committee noted. In light of the arguments from both sides, the committee has proposed a mandatory blanket licence that would allow AI developers to train their models on any copyrighted work that can be accessed legally. Under the proposed model, AI developers do not need to seek individual permission from content creators to train their models. However, content creators will receive a statutory right to remuneration through a new central royalty-collection body. This comes at a time when big tech firms such as Meta, Microsoft, Google, OpenAI and Anthropic are facing a growing number of lawsuits globally over the usage of copyrighted material to train their AI models without permission. 
While copyright owners have demanded compensation, companies have claimed fair use, sparking a high-stakes dispute over intellectual property rights. Earlier this month, OpenAI lost a court battle to keep chat logs secret in a copyright case after a US federal judge asked the company to hand over the records, ruling that it wouldn't violate users' privacy. In India, the ChatGPT maker is also facing a copyright lawsuit from a consortium of media outlets led by ANI. Prior to that, both Meta and Anthropic won their copyright lawsuits against authors, who alleged these companies infringed upon their rights by using their books without permission to train their AI systems. Legal experts that MediaNama spoke to earlier seemed divided over whether Anthropic's use of the licensed books would fall within the scope of fair dealing in India. Only time will tell whether India's newly proposed model will end the disagreement between tech companies and copyright owners. For now, one thing is clear: authors and creators will get paid when their work helps power these new AI systems.
[6]
DPIIT Committee Proposes Hybrid AI Licensing In Working Paper
India has entered the global copyright and artificial intelligence (AI) debate with the release of a paper titled Working Paper on Generative AI and Copyright Part 1: One Nation One License One Payment, published by a committee formed by the Department for Promotion of Industry and Internal Trade (DPIIT). The report proposes a hybrid licensing framework that gives developers automatic access to all lawfully accessed copyrighted content for training. In return, creators receive statutory royalties routed through a single national royalty mechanism, framed in the paper as a 'One Nation One License One Payment' approach to AI training rights. For context, the proposal marks India's first formal attempt to resolve a fast-growing legal conflict between AI innovation and copyright protection. And it arrives at a moment when lawsuits internationally are challenging developers who train AI systems on copyrighted material without permission, with India also facing similar questions through ANI's lawsuit against OpenAI in the Delhi High Court (HC). Notably, the recommendation signals a deliberate policy shift. The government wants to prevent licensing barriers from slowing AI development, while also ensuring that creators gain financial benefit from commercial AI systems trained on their work. Moreover, by replacing fragmented licensing with a centralised payment and royalty distribution system, the framework attempts to standardise compensation across sectors. By choosing this path, India positions itself between the more permissive frameworks in Japan and Singapore, and the stricter compliance-focused rules emerging in the European Union (EU). Elsewhere, the Indian government is also building domestic AI capability as part of the IndiaAI Mission. At the centre of the proposal is a mandatory blanket licence. Once developers obtain lawful access to copyrighted material, they can use it for AI training without negotiating individual permissions or licensing contracts. 
In effect, creators cannot block the use of their works for model training. Instead, the system compensates them through statutory royalties. To implement this structure, the committee proposes establishing a centralised entity called the Copyright Royalties Collective for AI Training (CRCAT). This entity would collect payments from developers and distribute royalties to creators. Importantly, developers would contribute based on pre-defined royalty formulas and revenue thresholds. Furthermore, creators must register their works to receive payouts. Unregistered works can still be used for training, but creators would not be eligible to receive compensation. Notably, registration does not influence dataset access. The lawful access requirement forms the compliance boundary. Developers must purchase, license, subscribe to, or otherwise legally access content before using it. Notably, the committee explicitly separates access rights from copyright permission. Once lawful access exists, no additional approvals are required. Importantly, this proposal applies in a forward-looking manner. Therefore, ongoing scraping disputes and past unauthorised uses remain subject to current law and active litigation. The committee reviewed several regulatory approaches before proposing the hybrid structure. First, it rejected a blanket text and data mining (TDM) exception, as that model allows developers to use copyrighted content without compensating creators. According to the committee, such a system weakens creative incentives and increases the risk of AI outputs competing directly with original works. Next, it examined the European-style TDM exception with opt-out rights. However, the committee concluded that this model was not workable as rights holders cannot meaningfully enforce opt-outs unless developers disclose detailed training datasets. 
The committee also warned that mandatory dataset transparency could expose proprietary data and impose heavy compliance burdens, especially on smaller companies. The committee also dismissed voluntary licensing. In its view, negotiating with millions of rights holders is unworkable at scale. Similarly, it rejected extended collective licensing as India does not yet have a mature or unified licensing ecosystem. Many informal and community creators would remain outside such a system, creating structural inequity. Additionally, the committee evaluated traditional statutory licensing, which already exists in India's broadcasting framework. However, it determined that identifying and compensating millions of creators would be impractical without a centralised mechanism, and would likely recreate the same transaction-cost barriers that the policy aims to remove. By process of elimination, the committee has concluded that the hybrid model offers the most predictable structure for access, compensation, and long-term implementation. The Ministry of Electronics and Information Technology (MeitY) supports the hybrid model. According to the ministry, developers need broad and representative datasets to improve model performance and reduce bias. It also argues that creators should receive compensation as AI-generated outputs increasingly replicate identifiable artistic styles and creative signatures. To balance innovation with fairness, MeitY recommends safeguards such as triggering royalties only after a model or developer crosses a defined revenue threshold. Additionally, it expects CRCAT to maintain transparent reporting, predictable royalty formulas, and a structured process for dispute resolution. However, industry groups take a different position. For context, Nasscom opposes the hybrid model and argues that it adds administrative load that may slow innovation and disproportionately affect smaller companies. 
In its view, developers would need new systems to document lawful access, calculate royalties, and manage compliance, which could mean unnecessary cost and operational friction. Instead, Nasscom proposes a legal text and data mining exception for both commercial and non-commercial use. Under this approach, rights holders would use machine-readable signals to opt out of public datasets, while contracts would govern training on private content. Nasscom argues that this model protects rights without creating a new licensing bureaucracy or compliance burden. This proposal marks the beginning of a broader reform of India's copyright framework in the context of AI. To explain, the next phase will address unresolved questions around authorship of AI-generated content, ownership, moral rights, and liability when outputs infringe copyright or cause harm. The government also plans to open the paper for public consultation before drafting amendments to the Copyright Act. Moreover, the hybrid model gives developers predictable rights to use training data while aiming to ensure that creators receive compensation. However, it also creates a new licensing authority and introduces compliance obligations that may increase operational complexity. Smaller companies, research institutions, and open-source communities may feel this impact first. Additionally, the formal dissent from industry signals that implementation may not be straightforward and could face strong resistance during consultation. If adopted, it may also shape how Indian developers access datasets and how financial value is distributed across the AI ecosystem. Finally, it will test whether India leans toward innovation speed, creator protection, or regulatory control as AI development scales. There are still open questions on AI and copyright that need answering.
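The machine-readable opt-out Nasscom describes resembles existing text-and-data-mining reservation conventions, such as the W3C TDM Reservation Protocol (TDMRep), in which a `tdm-reservation` response header set to "1" signals that mining rights are reserved. A minimal, hypothetical crawler-side check under that convention might look like this (the function and its behaviour are illustrative assumptions, not part of any proposal in the paper):

```python
# Hypothetical sketch of a machine-readable TDM opt-out check, loosely modelled
# on the W3C TDM Reservation Protocol (TDMRep): a "tdm-reservation" header with
# value "1" means text-and-data-mining rights are reserved by the rights holder.

def tdm_allowed(headers: dict[str, str]) -> bool:
    """Return True when no TDM reservation is signalled (mining permitted)."""
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("tdm-reservation") != "1"

# No signal present: under this convention, the content may be mined.
assert tdm_allowed({"Content-Type": "text/html"})
# Explicit reservation: a compliant crawler should skip this content.
assert not tdm_allowed({"TDM-Reservation": "1"})
```

The committee's objection, as reported above, is that such opt-out schemes shift the policing burden onto creators, since a reservation only helps rights holders who know to set it.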
India's Department for Promotion of Industry and Internal Trade has released a 125-page working paper proposing a mandatory blanket licence for AI companies training models on copyrighted content. The framework would require firms like Google, OpenAI, and Anthropic to pay royalties through a new central body called CRCAT, eliminating individual negotiations with creators while ensuring compensation for rightsholders.
India's Department for Promotion of Industry and Internal Trade has proposed a mandatory blanket licence system that would fundamentally reshape how AI developers access and pay for copyrighted content used in AI model training. The 125-page working paper, titled "One Nation One Licence One Payment: Balancing AI Innovation and Copyright," was released on December 8 and is now open for stakeholder consultation for 30 days [1]. The eight-member committee convened by DPIIT rejects the notion of unfettered access to copyrighted data without compensation, arguing that allowing free use would erode incentives for content creators including authors, artists, and journalists [1].
Source: MediaNama
Under the proposed licensing model, AI developers would gain automatic rights to use any lawfully accessed copyrighted works for training their systems without negotiating individually with copyright owners [3]. This applies to major players like Google, OpenAI, and Anthropic, as well as any company developing commercial AI models. While content creators lose the ability to opt out of AI training altogether, they receive a statutory right to remuneration through a centralized payment mechanism [3]. The committee emphasizes that royalty payments would only be triggered when AI models using the data are commercialized, rather than on every use of content [1].
Source: MediaNama
The working paper recommends creating a new centralized body called the Copyright Royalties Collective for AI Training (CRCAT) to collect royalties from AI companies and distribute them to creators [1]. CRCAT would function as a nonprofit designated by the central government under the Copyright Act, handling four core functions: collecting royalties from AI developers, distributing funds to Collective Management Organizations and copyright societies, enforcing compliance, and operating a Works Database that determines payout eligibility [4]. The framework positions CRCAT as the single gatekeeper through which all money, data, and compliance flow, with developers never paying creators directly [4]. A government-appointed Rate Setting Committee would decide royalty rates, reviewing them every three years, with the committee proposing a flat percentage of global revenue earned from commercializing the AI system [4].
Source: ET
The paper proposes that royalties should be retroactive, meaning firms that have already used Indian copyrighted works to train models for commercial deployment would also be liable for payment under the new regime [1]. DPIIT explicitly rejects amending the fair dealing provision under Section 52(1)(a) of the Copyright Act, 1957, stating it "will neither help strike a balance, nor will it effectively address the legal exposure of AI developers under copyright law" [5]. The committee argues that fair use or fair dealing exceptions are primarily defenses against copyright infringement rather than enabling provisions, and any legislative amendment would not reduce the legal uncertainty faced by AI developers [5]. The draft also rejects alternative approaches such as a "zero-price licence" or an "opt-out" text-and-data-mining exception, with committee members saying both create unfair burdens on creators, especially those from smaller organizations [1].

The DPIIT committee is expected to release its second working paper, on the copyrightability of AI-generated content and its authorship, in about two months, according to additional secretary Himani Pande, who heads the eight-member panel [2]. The second paper will address the "copyrightability of AI generated content, and its authorship. How transformative AI work is," Pande told reporters [2]. This framework comes as big tech firms such as Meta, Microsoft, Google, OpenAI and Anthropic face a growing number of lawsuits globally over the usage of copyrighted material to train their AI models without permission [5]. The committee's approach aims both to protect content creators and to eliminate legal ambiguities while supporting technological advancement, with stakeholder consultation remaining open for the next 30 days [1].

Summarized by Navi