7 Sources
[1]
India's AI Guidelines Draw Praise, and Caution | AIM
"If executed well, this could become the most exportable governance model of the decade." The unveiling of the India AI Governance Guidelines has prompted a wave of reactions from policy experts, legal commentators and AI governance specialists. While the framework's intent received praise, experts also sought clarity on how its principles will translate into operational safeguards. The Ministry of Electronics and Information Technology (MeitY), under the IndiaAI Mission, announced the guidelines on November 5 as a national framework to ensure the safe, inclusive and responsible adoption of artificial intelligence across sectors. Launched by principal scientific adviser Ajay Kumar Sood in the presence of top MeitY leadership, the document outlines India's most coordinated effort yet to guide AI development at scale ahead of the India AI Impact Summit 2026.
[2]
MeitY Reveals India's Big Plan to Govern Artificial Intelligence
Deeper penetration of AI in tier-2 and tier-3 cities is also a priority.

The Ministry of Electronics and Information Technology (MeitY) unveiled the artificial intelligence (AI) Governance Framework on Wednesday, providing detailed guidelines and key recommendations for policymakers. The guidelines were formally unveiled by Professor Ajay Kumar Sood, Principal Scientific Adviser to the Government of India, in the presence of other officials. It recommends the creation of new regulatory bodies, expanding existing laws, and amending articles to ensure a comprehensive approach towards the new technology.

The AI Governance Framework Is Here

A 68-page report was shared by MeitY, highlighting the grounding principles of how the country's policymakers should develop India's AI policies. At the core of the guidelines are principles such as respect for human rights, non-discrimination, safety, transparency, and fairness. The government highlights that AI systems must be trustworthy and inclusive, benefiting all communities, particularly those currently underserved. Rather than applying blanket restrictions, the framework takes a risk-based approach, meaning the level of oversight will depend on the potential harm and impact associated with the AI system.

To operationalise these principles, the guidelines propose a phased implementation model. In the short term, organisations deploying AI in India are encouraged to adopt internal safety processes. These include conducting risk assessments, documenting data sources, and putting in place bias-checking and safety testing before models are released. The document calls for clear communication of AI system intent and capabilities wherever possible, along with establishing grievance redress mechanisms and reporting channels for incidents linked to AI systems. Over the next few years, the guidelines envision a coordinated oversight structure involving multiple ministries, regulators, and public institutions. 
A central governance mechanism is expected to steer and align regulation across sectors. For sectors considered high-risk, such as healthcare, financial services, and law enforcement, the document proposes tailored regulatory rules and compliance frameworks. In the long run, the guidelines anticipate a shift from voluntary industry self-governance to mandatory regulations for systems with high or critical risk implications. Continuous monitoring of real-world AI behaviour is expected to become standard, backed by a national AI incident database designed to improve oversight and public accountability. The plan also includes research and innovation sandboxes, and collaboration with international bodies on global norms for responsible AI. The guidelines propose the formation of new institutional mechanisms to coordinate AI oversight across the government. A key body among these is the AI Governance Group (AIGG), which is expected to act as the central anchor for policy alignment, risk-based governance, and cross-ministerial coordination. The AIGG would work with sectoral regulators, such as the Technology & Policy Expert Committee (TPEC) and the AI Safety Institute (AISI), to ensure that rules for high-risk applications are consistent yet tailored to specific domains such as healthcare, finance, or law enforcement. Finally, the guidelines place a strong focus on strengthening India's AI capacity through improved infrastructure and resources. This includes expanding access to high-performance computing, supporting the creation of high-quality, representative datasets, and enabling the development of locally relevant AI models.
[3]
Centre unveils AI governance rules for safe adoption
The Information Technology (IT) Act may have to be amended to classify artificial intelligence systems, and there may be a need to create an India-specific risk assessment framework for AI, a national database for incidents and new institutions to oversee these, as per the national AI governance guidelines. While non-binding, the guidelines released on Wednesday will serve as the template for India's national vision for AI leading up to the government's flagship IndiaAI Impact Summit in February next year, principal scientific advisor Ajay Sood said. The IT Act needs to be updated to clarify how AI systems are classified and how liability is apportioned across the AI value chain, says the guideline document, developed under the Ministry of Electronics and Information Technology (MeitY). Arguing that the Act's current definition of 'intermediary' -- which includes telecom service providers, search engines and even cyber cafes -- is broad, it calls for clarity on how to classify modern AI systems that can even generate data autonomously, and clear obligations and liability for AI developers and deployers. While suggesting that existing regulations can be applied to address current AI risks, it recommends amendments to update copyright laws to enable large-scale training of AI models and rules on how digital platforms are classified. "We have attempted at various points in time to examine how we can sort of tighten something. Government has also made it very clear we will act when there is a need to," MeitY secretary S Krishnan said. In a bid to curb rapidly growing deepfakes, MeitY last month floated draft amendments to the IT rules, mandating a declaration from all social media users when posting AI-generated or modified content and asking social media platforms to deploy technical measures for verification. 
Global signal The guidelines have been prepared after studying similar policies in the US, European Union and China, and the government hopes this will set the tone for AI governance in the Global South, especially in countries without necessary resources, Sood said. As a result, the government will showcase it at the February Summit, officials said. The government had set up a high-level advisory group in 2023 under the chairmanship of the principal scientific advisor to examine various issues relating to AI. A sub-committee formed in July by MeitY under the advisory group and chaired by IIT-Madras professor Balaraman Ravindran, crafted the guidelines. It received more than 2,500 submissions from government bodies, academic institutions, think tanks, industry associations and the private sector, Ravindran said on Wednesday. AI risk assessment The guidelines have called for developing an India-specific risk assessment framework that reflects real-world evidence of harm. "Encourage compliance through voluntary measures supported by techno-legal solutions as appropriate. Additional obligations for risk mitigation may apply in specific contexts, for e.g., in relation to sensitive applications or to protect vulnerable groups," the document reads. The guidelines have called for setting up a national database of 'AI incidents', which is expected to give policymakers insights into the real-world risks and harms posed by AI systems and also become an important national security tool. "The database should be a national-level centralised system with the ability to query and collect data from smaller, local databases in a federated manner. Local databases may be set up and maintained by authorised entities or sectoral regulators," it says. A host of new institutions have also been recommended to carry out major recommendations including the issues of content authentication. 
These include a new body called the 'AI Governance Group' to coordinate policy on AI governance across ministries. Envisaged to be a small, permanent and effective inter-agency body responsible for overall policy development and coordination, it should be supported by a technology & policy expert committee on strategy and implementation, the guidelines say. Meanwhile, the recently established AI Safety Institute should act as the main body responsible for guiding safe and trusted development and use of AI in India. The document also pushed for vigorously adopting the regulatory sandboxes model to enable the development of cutting-edge technologies in constrained environments affording reasonable legal immunities. Organisations deploying AI systems should also establish accessible grievance redressal mechanisms under the Grievance Appellate Committee process under the IT Rules, it says.
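The federated design described above, a national-level centralised system that can query and collect data from smaller, local databases maintained by authorised entities or sectoral regulators, can be sketched in a few lines. This is a minimal illustration only: all class names, fields, and severity tiers below are hypothetical, since the guidelines do not prescribe a schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIIncident:
    # Fields are illustrative; the guidelines do not define an incident schema.
    incident_id: str
    sector: str          # e.g. "healthcare", "finance"
    severity: str        # "low" | "medium" | "high" | "critical"
    description: str

@dataclass
class LocalIncidentDB:
    """A sectoral regulator's local database (hypothetical)."""
    maintainer: str
    incidents: List[AIIncident] = field(default_factory=list)

    def query(self, min_severity: str) -> List[AIIncident]:
        order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
        return [i for i in self.incidents
                if order[i.severity] >= order[min_severity]]

class NationalIncidentDB:
    """Central node that federates a query across local databases."""
    def __init__(self, local_dbs: List[LocalIncidentDB]):
        self.local_dbs = local_dbs

    def federated_query(self, min_severity: str) -> List[AIIncident]:
        # Fan the query out to each local database and merge the results,
        # rather than copying all records into the central store.
        results: List[AIIncident] = []
        for db in self.local_dbs:
            results.extend(db.query(min_severity))
        return results
```

The design choice the guidelines point at is visible even in this toy: the central system holds no data of its own, only the ability to query sectoral stores, which keeps custody of incident records with the regulators closest to each domain.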
[4]
MeitY unveils India AI Governance Guidelines for safe and responsible AI
The Ministry of Electronics and Information Technology (MeitY), through the IndiaAI Mission, has published the India AI Governance Guidelines, providing a structured approach for the secure, inclusive, and accountable deployment of AI across industries. The guidelines were formally introduced by Prof. Ajay Kumar Sood, Principal Scientific Adviser to the Government of India, alongside Shri S. Krishnan, Secretary, MeitY; Shri Abhishek Singh, Additional Secretary, MeitY, CEO IndiaAI Mission, DG NIC; Ms. Kavita Bhatia, Scientist 'G' & GC, MeitY & COO IndiaAI Mission; and Prof. B. Ravindran, IIT Madras. The event also included Dr. Preeti Banzal and Dr. Parvinder Maini from the Office of the Principal Scientific Adviser. The launch comes ahead of the India AI Impact Summit 2026, underscoring India's commitment to advancing responsible AI governance. The guidelines outline a clear framework to encourage innovation while protecting individuals and society from potential risks. The framework centers on the principle of "Do No Harm", aiming to create controlled innovation spaces and risk mitigation strategies within an adaptable system. The guidelines emphasize making AI accessible, inclusive, and trustworthy, supporting the development of a robust ecosystem that promotes safe and responsible innovation across India and inspires collaboration internationally. The document was drafted by a high-level committee chaired by Prof. Balaraman Ravindran, IIT Madras. Members include Shri Abhishek Singh, Ms. Debjani Ghosh (NITI Aayog), Dr. Kalika Bali (Microsoft Research India), Mr. Rahul Matthan (Trilegal), Mr. Amlan Mohanty (NITI Aayog), Mr. Sharad Sharma (iSPIRT Foundation), Ms. Kavita Bhatia, Mr. Abhishek Aggarwal (MeitY), Mr. Avinash Agarwal (DoT), and Ms. Shreeppriya Gopalakrishnan (IndiaAI). 
These guidelines are intended as a reference for policymakers, researchers, and industry professionals to promote safe, responsible, and inclusive AI adoption both nationally and internationally. The launch also recognized winners of the IndiaAI Hackathon for Mineral Targeting, organized with the Geological Survey of India (GSI) and Ministry of Mines. The India AI Impact Summit 2026, set for 19-20 February in New Delhi, will convene global leaders, policymakers, industry experts, and researchers to discuss AI's potential in advancing People, Planet, and Progress. The event concluded with remarks from Shri S. Krishnan, Secretary, MeitY.
[5]
MeitY's AI Governance Guidelines on Liability & Innovation
Recently, the Ministry of Electronics and Information Technology (MeitY) published the final version of the "AI Governance Guidelines," which aims to provide a framework that "balances AI innovation with accountability, and progress with safety." As India currently lacks AI-specific laws, the newly published Guidelines can serve as a defining policy document that outlines the current regime's intent on how it wants to regulate AI, what legislative changes it envisages, and how it wants to approach AI policy strategically, especially as India remains exposed to global policy pressures, from US tariffs to China's swift AI advancements. For some context, the Government of India established an advisory group in 2023 under the Principal Scientific Advisor to examine AI-related issues and regulation in India. A subcommittee on AI governance, led by IIT-Madras professor Balaraman Ravindran, was formed in July 2025 to develop these Guidelines. After initially releasing a draft report in January that received over 2,500 public submissions, this subcommittee drafted the final Guidelines with a set of key recommendations to the government as well as other regulators. The following article outlines the key recommendations briefly and then expands on the Committee's reasoning behind them. Based on this feedback, MeitY formed a drafting committee to finalise India's AI governance framework. Its mandate includes recommending a governance model that balances innovation and risk, building trust for sustainable AI growth, and providing practical guidelines and principles for industry, regulators and sectoral agencies. 
While affirming that many existing Indian laws can cover the risks that could emerge from AI, such as malicious impersonation of individuals or personal data breaches, the Guidelines acknowledge the urgent need to conduct a comprehensive review of the relevant laws to identify regulatory gaps in the AI ecosystem. Citing one such instance, the report noted the need for reviewing India's Pre-Conception and Pre-Natal Diagnostic Techniques (PC-PNDT) Act, which aims to prevent sex-selective abortion practices in India. "The Act should be reviewed from the perspective of AI models being used to analyse radiology images, which could be misused to determine the sex of a foetus and enable unlawful sex selection," reads the Guidelines. The policy recommendation guidelines state that inter-ministerial consultations are underway to examine these regulatory issues. Referring to the two-decade-old IT Act, the guidelines say the government must clearly define the roles of actors in the AI value chain, such as developers, deployers, and users, and specify how current definitions, including "intermediary," "publisher," and "computer system," will govern them. The committee acknowledged that all AI systems are inherently probabilistic in nature and can produce unexpected outcomes that cause harm despite reasonable precautions. Thus, it cited the RBI's FREE-AI Committee, which recommended a "tolerant" approach in the financial sector toward first-time or one-off errors. The committee said sectoral regulators should choose enforcement strategies suited to their domains, but emphasised that the rule of law must remain paramount and enforcement should focus on preventing harm while allowing responsible innovation. 
The Guidelines question the potential immunity that could be granted to AI systems in the name of a vague and broad interpretation of "intermediaries" under Section 79 of the IT Act, which provides safe harbour protections to intermediaries for harmful content hosted on their platforms and applications. It argued that under the current broad frameworks of laws, including those governing telecom companies, search engines, and even cyber cafes, these entities could also be classified as "intermediaries," which are defined as entities that receive, host, or transmit content on behalf of the sender/user. Furthering this argument, it stated that such legal immunity wouldn't be applicable to the so-called "intermediary" AI systems and suggested that the existing legal framework needs to be re-examined from the perspective of liability for AI developers and deployers if they fail to comply with the required due diligence. "Therefore, the Committee is of the view that the IT Act should be suitably amended to ensure that India's legal framework is clear on how AI systems are classified, what their obligations are, and how liability may be imposed," reads the policy recommendation guidelines. The much-awaited finalisation of the DPDP Rules, which enforce the DPDP Act, 2023, excludes publicly available data from the law's purview, thereby enabling the massive scraping of publicly available information, including personal data, which the DPDP Act aims to protect. This means that many AI-based companies are set to benefit from this provision. 
The Guidelines identify key aspects that need to be addressed, through a new legal framework or amendments if required. To prevent the generation and distribution of harmful content, such as child sexual abuse material (CSAM) and non-consensual intimate images ('revenge porn'), the Guidelines recommend proactive measures: not only meta-labelling and visual labelling for AI-generated content, but also traceability of the content's origins. For more context, the recent draft amendment to the IT Rules, 2023, mandates such labelling, including possible use of existing industry-standard labelling protocols such as the Coalition for Content Provenance & Authenticity (C2PA). Referring to these amendments, S. Krishnan, Secretary at MeitY, welcomed the government's move and stated that users have the right to know whether the content being produced and distributed is a deepfake or not. Regarding traceability, the Guidelines suggest that current watermarking tools can identify the origins of AI-generated content and the underlying databases or large language models (LLMs) that produced it. "Such attribution tools have potential utility for both content authentication and provenance," affirms the Guidelines. It also cautioned that bypassing mechanisms need to be examined, as they could put citizens at risk. The guidelines recommend that the proposed AI Governance Group, supported by the Technology and Policy Expert Committee, review India's regulatory framework on content authentication and advise agencies such as MeitY on appropriate techno-legal solutions and additional legal measures to address AI-generated deepfakes. The Guidelines avoided addressing matters related to copyright, as they are still awaiting recommendations from the Department for Promotion of Industry and Internal Trade (DPIIT), which established a committee in April 2025 to study the use of copyrighted material for AI training. 
At the core, the DPIIT committee is set to examine, under the Indian Copyright Act, whether AI training qualifies for the fair use doctrine. Compared with the US's AI Action Plan and China's Global AI Governance Action Plan, the report asserts that India's balanced approach benefits countries in the Global South. "India should continue its participation in multilateral AI governance forums, such as the G20, UN, OECD, and deliver tangible outcomes as host of the 'AI Impact Summit' in February 2026," recommend the Guidelines, while emphasising the importance of research, policy planning and simulation exercises that can address future issues. The guidelines also identify several key risks that could affect end users. To address risks amplified and produced with the help of AI systems, the guidelines define a national-level centralised public incident reporting database that records events where AI systems have malfunctioned, including harms related to "health, disruption of critical infrastructure, human rights violations, or damage to property, communities, or the environment." The Indian Computer Emergency Response Team (CERT-In) is one example of an existing incident reporting mechanism. "Over time, a structured feedback loop should be created: reports feed into threat analysis, which helps policymakers identify emerging risks, understand patterns of harm, and strengthen oversight. This process will also build a culture of accountability," the report states. 
To encourage large-scale adoption of voluntary measures, the committee recommends a set of financial, reputational, technical, and regulatory incentives. Referring to NITI Aayog's indigenously developed Data Empowerment and Protection Architecture (DEPA), the Guidelines suggest that its "techno-legal system for permission-based data sharing through consent tokens" can be expanded for effective AI governance. The Guidelines recommend that the government provide "targeted incentives and financing support, including tax rebates on certified solutions, AI-linked loans," to support AI development and adoption in the MSME sector. The policy also suggests offering subsidised access to GPUs. "This will help lower the cost of adoption, if supported by sector-specific AI toolkits and pre-built starter packs tailored to industries like textiles, retail, logistics, and food processing," reads the Guidelines. The policy recommends that the Small Industries Development Bank of India (SIDBI) and the Micro Units Development & Refinance Agency Ltd (MUDRA) disburse these loans. For more context on how the Indian government is financing AI development, consider the budget allocation of the IndiaAI Mission, which is Rs. 10,372 crore. However, under this Mission, the Safety and Trust category is severely underfunded, accounting for only 0.2% of the total budget allocated to the Mission. The guidelines also describe the status, as of August 2025, of the infrastructure initiatives undertaken by the IndiaAI Mission.
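The DEPA-style "consent token" idea mentioned above, permission-based data sharing where access is granted only against a signed, purpose-bound token, can be illustrated with a minimal sketch. This is not the actual DEPA protocol: the key handling, payload fields, and function names below are assumptions for illustration only (a real consent manager would use PKI rather than a shared secret).

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-key"  # placeholder; a real deployment would not hard-code a key

def issue_consent_token(user_id: str, purpose: str, ttl_seconds: int) -> str:
    """Issue a signed token binding a user's consent to one purpose,
    valid for a limited time (illustrative, not the DEPA spec)."""
    payload = json.dumps({"user": user_id, "purpose": purpose,
                          "expires": time.time() + ttl_seconds})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_consent(token: str, purpose: str) -> bool:
    """Accept the token only if the signature is intact, the purpose
    matches what was consented to, and the token has not expired."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    data = json.loads(payload)
    return data["purpose"] == purpose and data["expires"] > time.time()
```

The "techno-legal" character the Guidelines describe is the point of the sketch: the legal rule (consent is purpose-specific and time-bound) is enforced technically, since a token issued for one purpose simply fails verification when presented for another.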
[6]
India Adopts Hands-Off AI Governance to Boost Innovation
India's New AI Governance Guidelines Signal Innovation-First, Regulation-Lite Approach

The Ministry of Electronics and Information Technology, or MeitY, has announced the India AI Guidelines, marking a deliberately hands-off approach to the regulation of artificial intelligence. The new framework aims to strike a balance between innovation and safeguards, replacing the earlier draft released for consultation in January 2025. Drafted by a committee led by Prof Balaraman Ravindran of IIT Madras, the guidelines reframe India's AI governance strategy from control to enablement. "We are calling this the AI Governance Guidelines, not AI regulation, because we don't want it to be seen as something that throttles AI adoption," said Ravindran.
[7]
India Releases AI Governance Guidelines, Avoids Regulation
The Ministry of Electronics and Information Technology (MeitY) has released the India AI Governance Guidelines under the India AI Mission, aimed at building a "safe AI innovation ecosystem" in the country. This final version follows the draft report released in January 2025, which outlined the government's plans for a new regulatory framework on AI governance while promoting self-regulation within the AI industry. During the keynote address, S. Krishnan, Secretary at MeitY, said the government had made a deliberate decision not to regulate AI. He stated, "There's been a conscious and deliberate approach of not leading with regulation and of making sure that every opportunity is provided to be made." These AI governance guidelines come ahead of the Indian government's flagship AI event, the India AI Impact Summit, scheduled for February 2026 in New Delhi. "As you prepare for the Impact Summit, this release of the AI governance guidelines is timely," said Abhishek Singh, CEO of the India AI Mission. Singh noted that the government received over 650 inputs on the January draft guidelines. A subcommittee formed to assess the feedback met nearly 20 times. "These guidelines will be a cornerstone as we move forward in developing AI for India, and they can actually become a role model for AI governance globally," Singh added in his opening address. Professor Balaraman Ravindran, Head of the Department of Data Science and AI at IIT Madras and Chair of the committee that drafted the AI Governance Guidelines, which reportedly began work in 2020, said that this initiative is not a step toward regulating AI in India. He affirmed that the government aims to foster AI-led technological innovation. "We are not calling it the AI Regulation of India or anything like that because we don't want it to be viewed as something that is coming to throttle AI development and adoption in India," Ravindran said. 
"The Governance Guidelines is as much about enabling AI adoption and making it truly impactful for the nation as they are about avoiding the risks and misuse of AI." He added that the guidelines aim to create a more adaptive ecosystem and regulatory environment that allows innovation while ensuring responsible AI. To enable broader AI-led participation, Ravindran suggested integrating AI with India's existing digital public infrastructure for "better governance and to incentivise MSME (Micro, Small and Medium Enterprises) adoption" of AI technology. This way, he said, "AI doesn't remain in the prison of large companies." Ravindran said the committee adopted the seven principles from the Reserve Bank of India's three AI reports, calling them a "sound basis for building future regulation." He added that the AI governance framework rests on six key pillars. Ravindran referred to the RBI's FREE-AI Committee Report, where FREE-AI stands for Framework for Responsible and Ethical Enablement of Artificial Intelligence; the report outlines seven core "sutras", or principles, for AI governance in the financial sector. Ravindran recommended creating an AI Governance Group with multi-sector participation to coordinate policy and develop "technology-led tools for compliance by design." He also highlighted the role of the AI Safety Institute under the India AI Mission in risk assessment, standard evaluation, and global engagement. He said sector-level regulators must participate, as they would address domain-specific governance, risks, and inputs, and he outlined recommendations for future policy steps. Ajay Kumar Sood, Principal Scientific Adviser to the Government of India, said India must prepare for a future where artificial general intelligence (AGI), once developed to match human rationality, could emerge much faster than anticipated. 
"I thought it was five years away, but now they are saying it's two years away," he said, noting that major investments in companies like OpenAI could accelerate progress even further. While it remains unclear when this rapid growth will reach the saturation point of the S-curve, he said its impact will be significant. Sood added that India must secure the necessary resources and develop "the disruptive algorithms which will really make it happen." He emphasised that the future looks promising if the country balances technological advancement with the "sensitivities of our culture, our heritage, and all sections of society," while strengthening its knowledge base. Although not directly part of the current AI governance guidelines, S. Krishnan also commented on the recent draft amendment to the IT Rules, 2023, which aims to mandate the labelling of AI-generated content. Krishnan said the government swiftly addressed the need for such labelling while finalising the AI governance report. He clarified that the amendment aims to establish a basic transparency requirement, not a restrictive regulation. "All that has been asked for is labelling," he said, noting that the step is technologically feasible and rooted in the public's right to know. "People who are viewing, hearing, or reading content should have the right to know that it has been synthetically generated," Krishnan added.
India's Ministry of Electronics and Information Technology releases national AI governance framework emphasizing risk-based approach, institutional reforms, and amendments to existing laws. The guidelines aim to establish India as a leader in responsible AI governance ahead of the 2026 AI Impact Summit.
The Ministry of Electronics and Information Technology (MeitY) unveiled India's comprehensive AI Governance Guidelines on November 5, marking a significant milestone in the country's approach to artificial intelligence regulation.[1]
The 68-page framework was formally launched by Professor Ajay Kumar Sood, Principal Scientific Adviser to the Government of India, alongside senior MeitY officials including Secretary S. Krishnan and Additional Secretary Abhishek Singh.[2]

The guidelines represent India's most coordinated effort to guide AI development at scale, developed under the IndiaAI Mission ahead of the India AI Impact Summit scheduled for February 2026.[3]
The framework emerged from extensive consultation, receiving over 2,500 submissions from government bodies, academic institutions, think tanks, industry associations, and private sector entities.[4]
At the heart of the guidelines lies the "Do No Harm" principle, emphasizing respect for human rights, non-discrimination, safety, transparency, and fairness.[2]
Rather than implementing blanket restrictions, the framework adopts a sophisticated risk-based approach where oversight levels depend on the potential harm and impact associated with specific AI systems.
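The risk-based approach, where oversight scales with the potential harm and reach of a system rather than applying uniformly, can be illustrated with a toy scoring function. The categories, scores, and thresholds below are entirely hypothetical: the guidelines call for an India-specific risk framework but publish no scoring rubric.

```python
def oversight_level(potential_harm: str, reach: str) -> str:
    """Map a system's potential harm and reach to an oversight tier.

    Illustrative only: the harm/reach categories and the cutoffs
    are invented for this sketch, not taken from the guidelines.
    """
    harm_score = {"minimal": 0, "moderate": 1, "severe": 2}[potential_harm]
    reach_score = {"narrow": 0, "sectoral": 1, "population-wide": 2}[reach]
    total = harm_score + reach_score
    if total >= 3:
        return "mandatory regulation"    # high/critical-risk systems
    if total >= 1:
        return "sector-tailored rules"   # e.g. healthcare, finance
    return "voluntary self-governance"   # low-risk deployments
```

The shape of the function mirrors the framework's phased logic: most systems stay under voluntary self-governance, and binding obligations attach only as assessed harm and impact grow.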
The guidelines mandate that AI systems must be trustworthy and inclusive, particularly benefiting underserved communities.[2]
This approach recognizes that all AI systems are inherently probabilistic and can produce unexpected outcomes despite reasonable precautions, necessitating a nuanced regulatory response.[5]
The framework proposes a three-phase implementation model. In the short term, organizations deploying AI in India are encouraged to adopt internal safety processes, including conducting risk assessments, documenting data sources, and implementing bias-checking mechanisms before model release.[2]
The medium-term vision involves establishing a coordinated oversight structure across multiple ministries, regulators, and public institutions. For high-risk sectors such as healthcare, financial services, and law enforcement, the document proposes tailored regulatory rules and compliance frameworks.[2]
Long-term plans anticipate a shift from voluntary industry self-governance to mandatory regulations for systems with high or critical risk implications, supported by continuous monitoring and a national AI incident database.[2]
The guidelines propose creating several new institutional mechanisms to coordinate AI oversight across government. The centerpiece is the AI Governance Group (AIGG), designed to act as the central anchor for policy alignment, risk-based governance, and cross-ministerial coordination.[2]
Supporting the AIGG, the framework envisions a Technology & Policy Expert Committee (TPEC) and leverages the recently established AI Safety Institute (AISI) to ensure consistent yet tailored rules for specific domains.[3]
These bodies will work together to develop an India-specific risk assessment framework reflecting real-world evidence of harm.
The guidelines identify significant gaps in India's current legal framework, particularly regarding the two-decade-old Information Technology Act.[5]
The document calls for amendments to clarify how AI systems are classified and how liability is distributed across the AI value chain, addressing concerns about the current broad definition of "intermediary" that includes telecom providers, search engines, and cyber cafes. Specific legal reviews are recommended, including updates to copyright laws to enable large-scale AI model training and examination of sector-specific regulations like the Pre-Conception and Pre-Natal Diagnostic Techniques Act from an AI perspective.[5]
The guidelines have been developed after studying similar policies in the United States, European Union, and China, with the government positioning this framework as potentially "the most exportable governance model of the decade."[1]
India aims to set the tone for AI governance in the Global South, particularly for countries lacking necessary resources for developing their own comprehensive frameworks.[3]

Summarized by Navi