2 Sources
[1]
Enough with mythologies: India needs new state-sponsored AI entities to create essential strategic capability - The Economic Times
A new AI model called Mythos can find software flaws. Its restricted release raises concerns about misuse by hackers or foreign governments. India, with its large pool of engineers and young people, needs to develop its own advanced AI. This requires state-led initiatives to foster homegrown AI capabilities and ensure strategic autonomy.

Anthropic, the US AI giant that seeks to differentiate itself from other AI firms as the champion of ethics and responsibility, has restricted initial access to its new powerful AI model-cum-agent, Mythos, to 40 organisations. All of them, except the British AI Security Institute, are American. The reason for the restricted access? Mythos is so powerful that it can detect hidden vulnerabilities in software systems. If hackers get their hands on it, they could wreak havoc.

The optimistic reading of Anthropic's restricted release is that those who got their hands on Mythos - Google, Microsoft, Amazon Web Services, Apple, some chip- and network equipment-makers, cybersecurity companies and finance companies - would deploy it to spot bugs and fix them, including for the benefit of those without direct access. The dystopian reading has two strands. One, malign actors would get their hands on this powerful tool and use it to rob, blackmail and victimise others. Two, the US government could use its current status - equivalent to the one it held from the atomic bombings of Hiroshima and Nagasaki in August 1945 until the Soviet Union's first nuclear test in August 1949 - to lord it over all other nations as the sole owner of a WMD.

Tomorrow, or the week after, a Chinese AI company may come up with another super-powerful AI model-cum-agent that works the same miracle Mythos now does. How should India respond to the arrival of such powerful tools/weapons in the hands of foreign private companies, and potentially in the hands of foreign governments?
It can seek regulatory solutions: equitable access for all nations and peoples to the fruits of advances in S&T. If India were a Pacific or Caribbean island nation, this would be fine, and it could concentrate on developing economic applications for AI models built by others. But India is different. It is an aspiring large power with strategic autonomy.

India produces about 1.5 mn engineers a year, second only to China's 2 mn, including about half a million postgraduates and PhDs. At 245 mn, India has the world's largest cohort of people aged 20-30; the comparable count for China is 175 mn, and for the US, 45 mn. India's answer to Claude Mythos is one or more homegrown AI models as powerful as, if not more powerful than, Mythos. The threat of mutually assured destruction (MAD) kept nuclear powers frantically trying to increase their power differential over rivals, yet trapped in the non-offensive logic of deterrence - leading to detente, the capping of missiles and arms-reduction deals.

Is this view paranoid? After all, Anthropic is a private company, not an arm of the US government, and one that forswears the use of its products for harmful activities like mass surveillance and automated selection of targets in combat. But Nvidia is a private company too, and the US government decides whether Chinese entities can buy its most advanced chips. In China, private company assets would become national assets with fewer quibbles.

If India wants to retain its strategic autonomy, it must develop its own frontier AI models. Private companies here lack the vision or ambition, so GoI will have to take the lead - as it did when Nehru set up the atomic and space programmes, and Rajiv Gandhi set up C-DOT and C-DAC for advances in telecom and computing. C-DAC delivered on its mandate to produce a supercomputer within three years and on a budget of ₹30 cr, producing PARAM 8000, a parallel processing behemoth that ranked second among the world's supercomputers in 1991.
A lot of India's youthful energy is frittered away in the pursuit of ritual and cant, sectarian hatred and the politics of schism, to advance an ideology of regression into a golden ideal of the past - as if glory is to be found in myths and not in a Mythos. The bulk of top talent emigrates, or strives in startups that make a quick buck rather than create technological breakthroughs. Some work at so-called 'global capability centres' (GCCs), developing IP for foreign companies. Give these young people a chance to create cutting-edge tech for their country, under visionary leaders like Homi Bhabha, Vikram Sarabhai, Sam Pitroda, Vijay Bhatkar or Verghese Kurien, and they will deliver.

What about the advanced computing hardware needed for AI? DeepSeek, the Chinese AI company that thumbed its nose at pricey rivals from the US, has just announced its latest offering, developed on chips designed and manufactured by Huawei. Should India wait for counterpart private Indian companies to do their job? Sure, if our goal is to let students appreciate the finer nuances of the theatre of the absurd. Not if we want to acquire computing power that can't be sanctioned by a whimsical leader who decides, one fine summer morning, that he wishes to end a civilisation.

India needs new state-sponsored AI entities to create essential strategic capability. GoI could fund startups, set up public enterprises, offer a grand prize, coordinate research at universities and IITs, and offer special incentives to state governments to harness their domestic talent. These are not mutually exclusive options. State action, at speed, is of the essence. That way, India won't be held hostage by Mythos, or others of its ilk.
[2]
India Seeks Access to Claude Mythos Amid Cybersecurity Fears
The Centre is in talks with the US administration and Anthropic to secure access to Claude Mythos for Indian companies under Project Glasswing, with no Indian firm among the roughly 40 organisations currently in the programme, according to a report by Inc42. Ministry of Electronics and Information Technology (MeitY) Secretary S. Krishnan confirmed on April 28 that the government is working out the "logistics" with US authorities to include Indian entities.

What Nasscom said: Nasscom wrote to Anthropic separately, arguing that Indian firms maintain critical code used by organisations worldwide. "As AI systems evolve to autonomously identify and chain vulnerabilities across platforms, the potential for cascading, cross-border risks becomes significantly higher. Given this reality, it is imperative that Indian technology firms are included in the global industry consortium by Anthropic," Nasscom said in a letter to Anthropic, according to ET.

What the government said: A senior government official told ET that rival companies could release similar models without warning. "Currently, Anthropic has held off the wider release, but tomorrow more companies can launch such models. They may release them without advance notice. The government needs to build its capacity as of yesterday." OpenAI has since launched GPT-5.4-Cyber with tiered access under its Trusted Access for Cyber programme, demonstrating exactly that risk.

The Centre is also weighing a broader policy framework for future high-capability model launches, and is reluctant to enable access for some companies while excluding others. The restricted launch has not been without incident: Bloomberg reported that a small group of unauthorised users gained access to Mythos through a third-party vendor environment on the day of launch, fuelling concern among governments worldwide that critical systems may already be at risk.
What the government has done: Finance Minister Nirmala Sitharaman and IT Minister Ashwini Vaishnaw chaired a meeting on April 23 with senior officials from the Reserve Bank of India (RBI), National Payments Corporation of India (NPCI), MeitY, the Department of Financial Services, and the Indian Banks' Association (IBA). Sitharaman called the threat "unprecedented", and the meeting issued directions to the agencies present. The RBI has separately consulted the US Federal Reserve and the Bank of England on the risks.

The access conflict: Even if India secures Glasswing access, NPCI faces a direct compliance problem: India's 2018 data localisation rules require payment system providers to store all transaction data on servers within India, while Mythos runs on strictly controlled US-based servers. NPCI has not publicly resolved this. As MediaNama reported, Airtel and Vodafone Idea qualify as data fiduciaries under the Digital Personal Data Protection Act, 2023 (DPDPA), meaning a successful AI-driven breach of their vendor systems would require mandatory user notification and a detailed report to the Data Protection Board within 72 hours, with fines of up to Rs 200 crore for non-compliance.

What Glasswing is: Anthropic launched Project Glasswing on April 7 alongside Mythos, restricting access given the model's ability to autonomously find and exploit software vulnerabilities across major operating systems and browsers, including flaws hidden for decades. Partners include AWS, Apple, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, and Nvidia. Anthropic has committed $100 million in usage credits under the programme. Anthropic has told US government officials that Mythos capabilities are part of why "the US and its allies must maintain a decisive lead in AI technology," framing access as a geopolitical question, not just a cybersecurity one.

MediaNama founder Nikhil Pahwa writes in Reasoned: "Strategic technologies do not distribute their benefits evenly, even when their risks are universal. The strategic benefit flows first to the US and its allies. Mythos is built by a US company, access is gated by that company, and the capability is explicitly framed as part of maintaining technological lead. Meanwhile everyone is vulnerable."
India is negotiating with the US administration and Anthropic to secure access to Claude Mythos, a powerful AI model that can detect software vulnerabilities, after being excluded from the initial 40-organization rollout. The restricted release has sparked urgent government meetings and renewed calls for state-sponsored AI entities to build India's AI capability and ensure strategic autonomy in an era of high-capability AI models.
Anthropic's restricted launch of Claude Mythos has placed India in an uncomfortable position. The powerful AI model, capable of autonomously identifying software vulnerabilities across major operating systems and browsers, was initially granted to roughly 40 organizations under Project Glasswing - none of them Indian [2]. Ministry of Electronics and Information Technology (MeitY) Secretary S. Krishnan confirmed on April 28 that the government is working out the logistics with the US administration to include Indian entities in the program [2].
Source: MediaNama
The exclusion carries significant cybersecurity concerns for a nation managing critical digital infrastructure. Nasscom wrote separately to Anthropic, arguing that Indian firms maintain critical code used by organizations worldwide. "As AI systems evolve to autonomously identify and chain vulnerabilities across platforms, the potential for cascading, cross-border risks becomes significantly higher," Nasscom stated in its letter [2]. The restricted AI model release has exposed India's vulnerability in an era where access to Claude Mythos and similar tools may determine who can defend their systems and who remains exposed.

Finance Minister Nirmala Sitharaman and IT Minister Ashwini Vaishnaw convened a high-level meeting on April 23 with senior officials from the Reserve Bank of India, National Payments Corporation of India, MeitY, the Department of Financial Services, and the Indian Banks' Association. Sitharaman characterized the threat as "unprecedented" [2]. The urgency reflects the reality that malign actors could exploit software flaws at scale if they obtain similar capabilities.

A senior government official told ET that rival companies could release similar models without warning. "Currently, Anthropic has held off the wider release, but tomorrow more companies can launch such models. They may release them without advance notice. The government needs to build its capacity as of yesterday," the official said [2]. That concern materialized quickly when OpenAI launched GPT-5.4-Cyber with tiered access under its Trusted Access for Cyber programme, demonstrating exactly the risk officials feared [2].

The situation worsened when Bloomberg reported that unauthorized users gained access to Mythos through a third-party vendor environment on launch day, fueling concern among governments worldwide that critical systems may already be at risk [2].

Even if India secures access to Claude Mythos through Project Glasswing, a fundamental compliance problem remains. India's 2018 data localization rules require payment system providers to store all transaction data on servers within India, while Mythos runs on strictly controlled US-based servers [2]. The National Payments Corporation of India faces this direct conflict but has not publicly resolved how to reconcile domestic regulations with the need to detect software vulnerabilities in critical payment infrastructure.

Under the Digital Personal Data Protection Act, 2023, companies like Airtel and Vodafone Idea qualify as data fiduciaries, meaning a successful AI-driven breach of their systems would require mandatory user notification and a detailed report to the Data Protection Board within 72 hours, with fines of up to Rs 200 crore for non-compliance [2]. The stakes extend beyond immediate access to questions of sovereignty and regulatory coherence.

The Claude Mythos episode has intensified arguments for building India's AI capability through government-led initiatives. India produces approximately 1.5 million engineers annually, second only to China's 2 million, and has the world's largest cohort of people aged 20-30, at 245 million [1]. Yet this talent pool remains underutilized in developing frontier AI models that could ensure strategic autonomy.

Historical precedent exists for state-led technological breakthroughs. C-DAC delivered on its mandate to produce a supercomputer within three years and on a budget of Rs 30 crore, producing PARAM 8000, a parallel processing system that ranked second among the world's supercomputers in 1991 [1]. Similar state-sponsored AI entities could mobilize India's engineering talent to create strategic AI capability rather than relying on access negotiations with foreign companies.
Anthropic has told US government officials that Mythos capabilities are part of why "the US and its allies must maintain a decisive lead in AI technology," explicitly framing access as a geopolitical question rather than merely a cybersecurity one [2]. Project Glasswing partners include AWS, Apple, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, and Nvidia, with Anthropic committing $100 million in usage credits [2].

The restricted release of high-capability AI models creates a technology divide where strategic benefits flow primarily to the US and its allies, while risks remain universal. MediaNama founder Nikhil Pahwa observed: "Strategic technologies do not distribute their benefits evenly, even when their risks are universal. The strategic benefit flows first to the US and its allies" [2]. For India, an aspiring large power, this asymmetry threatens strategic autonomy and highlights the urgency of developing homegrown alternatives.

Concerns about computing power for developing frontier AI models may be overstated. DeepSeek, the Chinese AI company, recently announced its latest offering, developed on chips designed and manufactured by Huawei, demonstrating that alternatives to US-dominated chip supply chains exist [1]. This suggests India need not wait for private companies to bridge the technology gap but can pursue state-led initiatives to build the necessary infrastructure and capabilities.

The Centre is weighing a broader policy framework for future launches of high-capability AI models and is reluctant to enable access for some companies while excluding others [2]. Whether India secures access to Claude Mythos or not, the episode has clarified that relying on foreign companies and the US administration for critical cybersecurity tools leaves the nation vulnerable to decisions made elsewhere.

Summarized by Navi