3 Sources
[1]
CCI study finds 67% of Indian AI startups focused on building AI applications
New Delhi [India], October 8 (ANI): A recent survey by the Competition Commission of India (CCI) has found that nearly 67% of Indian AI (Artificial Intelligence) startups are primarily engaged in developing AI-based applications, with the majority relying on open-source technologies. The study highlighted how artificial intelligence is rapidly reshaping India's business landscape, driving efficiency and innovation across sectors.

According to the findings, 76% of startups build their AI solutions on open-source platforms because of their accessibility and cost advantages. The report stated: "the core area of work for almost 67% of respondents in AI startups in India lies in the layer of building AI applications. The adoption of AI technologies in India is rapidly accelerating across various user sectors. The integration of AI across industries is reshaping market structures and business strategies."

Around 88% of respondents use machine learning (ML) as the foundation of their AI solutions, while 66% employ generative AI models such as large language models (LLMs). The use of natural language processing (NLP) stands at 78%, and 27% of startups are working in computer vision (CV). The CCI noted that global players such as Google, Microsoft, Meta, Amazon and OpenAI are major contributors to the open-source technologies and algorithms used for model building.

AI adoption is also spreading quickly across industries, transforming how companies operate. The study found that AI is widely used in banking and financial services, healthcare, retail, e-commerce, logistics, and marketing for purposes such as dynamic pricing, personalised recommendations, demand forecasting, and automated decision-making. About 90% of the businesses surveyed said they use AI to monitor customer behaviour, while 27% use it to track supply chain efficiency. Nearly 69% use AI for demand forecasting, 24% for pricing trend analysis, and 21% for predicting inventory needs.

The report also highlighted that AI helps businesses improve productivity, reduce costs, and gain a competitive edge by offering better and more personalised services. However, it warned that companies failing to adopt AI risk losing competitiveness in an increasingly AI-driven market.

Reacting to the findings, Shivanghi Sukumar, Partner at Axiom 5 Law Chambers, said, "The CCI's study adopts a thoughtful and forward-looking approach that aligns with the objectives of the IndiaAI Mission. The study highlights areas that could shape competition dynamics in the future, including emerging risks of algorithmic collusion, opacity in AI decision-making, and unequal access to data and compute. By flagging these issues early, the CCI acknowledges that AI could gradually reshape market structures."

To ensure fair competition and safeguard consumer interests, the CCI said it will continue to focus on promoting a culture of competition compliance and preventing AI-driven anti-competitive practices. The commission aims to support the development of a healthy and innovative AI ecosystem in India.
[2]
How India Should Regulate Competition in the AI Era
India's competition watchdog, the CCI, has officially entered the AI debate with its Market Study on Artificial Intelligence and Competition (2025). As reported by Medianama, the study provides a "detailed assessment of how AI may reshape competition in India's digital markets". It situates India's response within a global landscape where jurisdictions are diverging sharply in how they regulate AI's impact on competition. Some countries deal with harm after it has occurred (ex-post), some take a more proactive approach (ex-ante), and others adopt a mix of the two (hybrid). An analysis of these enforcement approaches reveals a trade-off between fostering innovation and preventing concentrated power.

The ex-post approach uses existing competition and antitrust statutes to address anti-competitive harms after they occur. It stems from a common law tradition that prioritises market-led innovation and intervenes only after a clear violation. The United States exemplifies this approach, relying on antitrust laws such as the Sherman Act, the Clayton Act, and the Federal Trade Commission (FTC) Act. The Department of Justice (DoJ) and the FTC initiate investigations and litigation only in response to allegations of harm to competition or consumers. The landmark case against Meta (formerly Facebook) over its housing ad delivery system shows how this approach works. Following investigations first reported by ProPublica as early as 2016, the U.S. DoJ filed a lawsuit in 2022 alleging that Meta violated the Fair Housing Act (FHA) of 1968 through its use of the "Special Ad Audience" tool. Years after journalists exposed bias in ad delivery, regulators used old statutes like the FHA to secure compliance and penalties. It worked, but only after systemic discrimination had already distorted the market.

The ex-ante approach uses legislation designed specifically to address AI-related challenges. Such legislation not only outlines how to deal with harm but also proactively sets rules for tech companies to follow. Given the speed and scale of AI adoption, this approach spells out the 'rules of the game' in advance. The European Union best exemplifies it with the EU AI Act, which establishes a risk-based hierarchy for AI systems: companies must meet compliance and transparency obligations for high-risk applications before deploying them. The Digital Markets Act (DMA) complements this by designating certain tech firms as "gatekeepers", which then face stricter prohibitions and obligations. This is evident in the DMA's prohibition on self-preferencing, which in turn constrains Google's search ranking AI.

Several other major economies are also adopting proactive, ex-ante strategies to govern AI and digital competition. China exemplifies a state-driven approach guided by its "Next Generation Artificial Intelligence Development Plan (2017)". Japan has enacted specific legislation, the "Act on Promotion of Competition for Specified Smartphone Software", to ensure fairness in mobile ecosystems and curb Google's and Apple's dominance. Canada was considering the "Artificial Intelligence and Data Act (AIDA)" to set rules for responsible AI development, but the bill was halted in January 2025, with provinces like Ontario advancing their own AI regulations, such as Bill 194.

The hybrid approach combines reactive enforcement with proactive regulation to deal with AI-related harms. It involves enforcing existing regulations to ensure competition while also building more specific legislation. The United Kingdom (UK) exemplifies this approach. Beyond its existing laws, the UK has recently introduced a new competition regime for digital markets: the Digital Markets, Competition and Consumers (DMCC) Act, which regulates firms with 'Strategic Market Status' (SMS). This status is designated based on factors such as the "firm's market power in the UK, revenue, strategic significance", and so on. Google has recently been designated as an SMS firm, which means the UK could now "change how Google runs its search services in the country". Thus, the UK can act before anti-competitive harm materialises while retaining powers to investigate and penalise if violations occur. Similarly, Australia's regulatory approach is moving beyond ex-post frameworks such as the Competition and Consumer Act (CCA) to initiatives like the Ethical AI Framework (2019) and the News Media and Digital Platforms Mandatory Bargaining Code (2021). The Bargaining Code compels digital platforms such as Google and Facebook to negotiate deals with news publishers and requires them to give advance notice of changes to their algorithmic ranking and presentation of news. If negotiations fail, the process moves to binding "compulsory arbitration".

This global divergence matters for India as it shapes its own approach to anti-competitive conduct by big tech firms. The CCI study makes it clear that India cannot rely solely on ex-post enforcement. The structural risks of AI markets, such as tilting towards concentration and exclusive access to data and compute, call for ex-ante rules, and algorithmic collusion further complicates enforcement. India's regulatory direction appears to be leaning toward a hybrid framework, given the deliberations on the Digital Competition Bill (2024), MeitY's AI governance guidelines (2025) and the discussions surrounding this report itself. But laws alone might not be enough to fix the asymmetry in the market. India's smartest move may be a hybrid approach in which ex-ante rules apply to dominant players while ex-post action remains available, keeping markets open while innovation thrives.
[3]
How Startups, Platforms Want the CCI to Regulate AI Economy
The Competition Commission of India (CCI) has gathered perspectives from startups, user industries, AI platforms, and legal experts in its Market Study on Artificial Intelligence and Competition (2025). Most stakeholders urged the regulator to create a framework that encourages innovation while addressing market concentration, algorithmic bias, and data control by large technology firms. The study shows that India's AI sector is expanding faster than its regulatory capacity. Startups want easier access to data and less dependence on dominant players. Platform companies want proportionate, risk-based regulation, while legal experts want enforceable rules and incentives for responsible innovation. Together, their views reflect an AI economy that is struggling to balance competition, innovation and consumer protection.

What Startups Told CCI

The Need for Data Access and Fair Competition

AI startups told the CCI that India needs "a balanced regulatory approach" that supports innovation while ensuring fair competition. They raised concerns that large technology firms control access to essential inputs such as data, compute infrastructure, and skilled talent. According to the report, "data accessibility emerged as a critical issue", and respondents suggested that the government "make anonymised data publicly available and introduce an AI governance policy in India, learning from global best practices". Moreover, several startups said government-created data pools could democratise access and reduce reliance on global technology providers. They argued that shared, anonymised datasets would allow smaller firms to train AI models responsibly while protecting user privacy.

The report also notes that many startups depend on upstream vendors such as AWS and Microsoft Azure for cloud and compute services. These vendors classify partners as "Gold Vendors" or "Platinum Vendors" based on credit and sales volume. Respondents said such partnerships "may raise important considerations for competition policy, particularly regarding barriers to entry, long-term exclusivity, and concentration of innovation capabilities among a few dominant players".

Regulatory and Compliance Challenges

Startups said these arrangements often lock smaller companies into a particular ecosystem and make switching providers costly. They added that long-term credit-linked contracts can make startups financially dependent on large vendors, limiting their ability to negotiate fair terms. Compliance requirements for serving global clients were another major concern. Startups said they must often adopt the legal frameworks of their clients' jurisdictions or set up subsidiaries abroad to meet regional governance rules, which raises operational costs and adds complexity.

To address these issues, startups proposed the creation of an Indian AI regulator that defines "performance benchmarks, third-party validation protocols, and regulatory approvals for AI-based products". The report notes that such a body would "ensure legal credibility and protect against liability risks from AI misperformance". Furthermore, startups said India should align its data protection framework with global standards and promote privacy-preserving AI techniques. They also cited ISO 42001 certification for AI management systems as an emerging best practice that could guide domestic governance models.
What AI Platforms Said

AI platform companies told the CCI that India should adopt "a risk-based and proportionate regulatory framework" that focuses on specific AI applications instead of imposing broad, one-size-fits-all restrictions. They said regulation must prevent concentration of power without stifling startups and smaller innovators. The report records that most cloud and model-access agreements in India are non-exclusive, allowing customers to switch providers. Respondents said such flexibility demonstrates "the competitive options available in the market". However, they acknowledged that "exclusive cloud supply or distribution agreements that create dependence on a particular platform" can lead to concentration of market power.

AI platforms also discussed the role of mergers and minority investments. They said small, non-controlling stakes generally support innovation by bringing capital and expertise, while large financial or restrictive stakes could suppress competition. The study notes that "minority investments with investor protection rights are generally pro-competitive" when structured to preserve independence in input procurement, technology development, and commercialisation. At the same time, the CCI warns that "investments that appear to be minority stakes but confer substantial influence" may still require scrutiny under merger control provisions. The report highlights that "exclusive rights to an AI developer's intellectual property" or long-term cloud commitments could act as indicators of concentration.

Moreover, platform firms said India's existing competition framework is "well equipped to balance the need for competition with the need to encourage innovation" through careful case-by-case analysis. They added that the key challenge now is for regulators to build technical expertise to understand how algorithms behave, interact, and influence market outcomes.

What User Industries Told CCI

User industries such as digital marketing, e-commerce, logistics, and retail said AI adoption has increased efficiency, improved product quality, and enabled better customer targeting. However, they cautioned that these gains come with higher dependence on large vendors controlling infrastructure and analytics systems. Industry participants told the CCI that "collaborations between large organisations should be monitored to prevent anti-competitive practices". They noted that vendor agreements can be horizontal, vertical, or complementary, with credit contracts often following standard terms while customer agreements are more customised. Such arrangements, they said, can create long-term dependencies that restrict smaller firms' ability to compete.

To improve access for startups, respondents suggested that the government "create sector-representative data pools" that reflect industry-specific needs. These data pools would allow new entrants to train AI systems effectively without relying on dominant platforms. Additionally, sustainability emerged as a recurring concern. Participants called for responsible AI adoption that reduces environmental impact while improving consumer outcomes. They urged the government to promote AI applications that deliver "better product quality, consumer benefits, and reduced carbon footprints".
What Legal Experts Told CCI

Calls for a Strong Legal Framework

Legal experts told the CCI that India urgently needs "a comprehensive and enforceable legal framework" for AI that ensures responsible development and safeguards both competition and consumer welfare. They said the country's policy direction must provide incentives for startups while maintaining strict oversight of dominant platforms. Experts cited international examples as potential models. The report says "the introduction of regulatory sandboxes by the European Union (EU) and the UK was seen as a step forward", referring to initiatives like the Lawtech UK Sandbox, which allows startups to collaborate directly with regulators, reduce compliance costs, and test products before enforcement. Respondents also mentioned that France and Germany "provided regulatory exemptions to homegrown AI developers, such as Mistral AI, to level the playing field".

Data, Synthetic Data, and Startup Support

Legal practitioners also raised several risks that require immediate attention. They said AI can enable "companies to access personal information and engage in discriminatory practices, limiting consumer choices". They warned that "concentration of data within a few large firms limits the access of smaller developers to valuable resources". To counter this, they suggested mandating the disclosure of public and anonymised data to "democratise access and level the playing field". On synthetic data, experts said the government should regulate its generation "to avoid fake data and content", while encouraging big technology firms to "generate more synthetic data in the public domain so that startups have a fair chance to compete". They emphasised that India must balance data protection laws with mechanisms for "accessible and legally compliant data sharing while maintaining confidentiality and accountability".

Additionally, experts called for an AI startup policy with targeted financial support and innovation incentives. They said such a policy could create a more equitable ecosystem and prevent excessive dependence on foreign capital or technology. The report concludes that respondents consistently supported "balanced regulations that foster growth and innovation in the AI ecosystem while addressing ethical, competitive and consumer protection concerns".

Why This Matters

The CCI's findings come at a time when India's digital regulation remains uncertain. The government has paused the ex-ante provisions of the Digital Competition Bill, saying it wants to protect startups and small enterprises. However, several Indian companies, such as Bharat Matrimony, TrulyMadly, and MagicBricks, have warned that the delay helps dominant global platforms and leaves smaller firms without safeguards. Their accounts show the same risks the CCI identifies in its AI study. High commissions, restrictive billing systems, and arbitrary app removals demonstrate how concentrated platform power can limit competition and slow innovation. These examples mirror the regulator's warning that opaque algorithms and control over key digital infrastructure can distort markets even without direct coordination.

Both the Digital Competition Bill debate and the AI market study reveal how India's regulatory approach still depends on reacting after harm occurs. The challenge is that technologies shaping markets evolve far more quickly than enforcement does. Without timely action, dominance can deepen and accountability can weaken.
In the end, the CCI's study is not just about algorithms or pricing systems. It raises a broader question about whether India can regulate fast enough to preserve fair competition before concentrated power becomes impossible to undo.
A comprehensive look at India's AI ecosystem, focusing on startup trends, regulatory challenges, and competition dynamics. The story explores the CCI's recent study on AI and its implications for the country's digital future.
A recent study by the Competition Commission of India (CCI) has shed light on the burgeoning AI startup ecosystem in India. The survey reveals that 67% of Indian AI startups are primarily focused on developing AI-based applications, with a significant 76% leveraging open-source technologies due to their accessibility and cost advantages [1].
The study highlights the diverse technological landscape within these startups:

- 88% of respondents use machine learning (ML) as the foundation of their AI solutions
- 78% use natural language processing (NLP)
- 66% employ generative AI models such as large language models (LLMs)
- 27% are working in computer vision (CV)
This vibrant ecosystem is rapidly reshaping India's business landscape, driving efficiency and innovation across various sectors.
The CCI study underscores the widespread adoption of AI across industries, transforming business operations and strategies. Key findings include:

- About 90% of surveyed businesses use AI to monitor customer behaviour
- Nearly 69% use AI for demand forecasting
- 27% use it to track supply chain efficiency
- 24% use it for pricing trend analysis and 21% for predicting inventory needs
Sectors such as banking, healthcare, retail, e-commerce, and logistics are at the forefront of AI integration, using it for dynamic pricing, personalized recommendations, and automated decision-making [1].
As India's AI sector expands faster than its regulatory capacity, the CCI is grappling with how to effectively regulate the AI economy. The commission is considering various global approaches to strike a balance between fostering innovation and preventing market concentration [2]. Three main regulatory approaches are being considered:

- Ex-post: using existing competition and antitrust laws to address harm after it occurs, as in the United States
- Ex-ante: setting AI-specific rules in advance, as the EU does with the AI Act and Digital Markets Act
- Hybrid: combining proactive rules for dominant players with reactive enforcement, as in the UK and Australia
India appears to be leaning towards a hybrid approach, similar to the UK and Australia, which allows for both preventive measures and the ability to penalize violations when they occur.
The CCI has gathered insights from various stakeholders, including startups, user industries, AI platforms, and legal experts [3]. Key recommendations include:

- Government-created, anonymised data pools to democratise access to training data
- A dedicated Indian AI regulator defining performance benchmarks, validation protocols, and approvals for AI-based products
- Risk-based, proportionate regulation targeting specific AI applications rather than one-size-fits-all rules
- Regulatory sandboxes and an AI startup policy with financial support and innovation incentives

As India navigates the complex landscape of AI regulation, the CCI aims to support the development of a healthy and innovative AI ecosystem while ensuring fair competition and safeguarding consumer interests. The commission's focus on promoting competition compliance and preventing AI-driven anti-competitive practices will be crucial in shaping the future of India's digital economy.