Curated by THEOUTPOST
On Tue, 8 Apr, 4:04 PM UTC
2 Sources
[1]
Government weighing local storage of AI models to mitigate risk associated with them
The government is weighing local storage of AI models to mitigate any risk associated with them and prevent the flow of data outside the country, a senior government official said on Monday.
Speaking on the sidelines of the Digital Threat Report 2024 for the BFSI sector by SISA and CERT-In, Ministry of Electronics and IT Secretary S Krishnan also told reporters that the rules of the Digital Personal Data Protection (DPDP) Act may be in place within 6-8 weeks and will play a key role in preventing personal data leaks.
When asked about government decisions on Chinese AI platforms and the risks associated with them, Krishnan said that the government is watching LLMs (large language models) in terms of their usage and impact.
"The true problem lies when data gets shared on a portal or on a mobile app because then the data may go out of the country and may sort of feed the way that a particular model is trained and a lot of the data may go out. On the private side, if the model itself is hosted in India, then the risks of data going out are far more mitigated considerably. All of these factors will be considered before any final decision is taken," Krishnan said.
He said that personal data leaks will be checked once the DPDP Act is brought into force. "We have received the comments, detailed comments on the rules. We are examining them rule by rule. We are examining what the comments are and if any changes are required. We may have to complete another process of internal consultation after that is completed. I would estimate it will take about six to eight weeks," Krishnan said.
Talking about cybersecurity developments, Krishnan said that there is greater awareness among people about reporting incidents. "Traditionally, in India, people are warned not to report many of these breaches. So, it is a good thing that reporting is happening so that we understand what is the extent of the problem and where the problems are coming up," Krishnan said.
He said the increase in reporting also implies that the country's overall surveillance is improving and that there is more capacity to detect incidents across the entire cybersecurity space.
[2]
Centre Considering Local Storage Of AI Models To Prevent Data Flow
The MeitY secretary also highlighted the rise of cybersecurity incidents in India, signaling improved public awareness and enhanced surveillance capabilities.
The Centre is reportedly considering local storage of AI models to minimise the risks associated with them and prevent the flow of sensitive data outside India. This aligns with the government's broader efforts to strengthen cybersecurity infrastructure and safeguard citizens' data.
With the Centre reportedly planning to finalise the Digital Personal Data Protection (DPDP) Act rules by April, Ministry of Electronics and Information Technology (MeitY) secretary S Krishnan said that, once implemented, the rules will play an effective role in preventing personal data leaks. Notably, the Act empowers the government to restrict cross-border data transfers and mandates robust security measures for handling personal data.
As per news agency PTI, Krishnan also pointed out that the government is closely monitoring Chinese LLMs due to potential risks related to data misuse.
"The true problem lies when data gets shared on a portal or on a mobile app because then the data may go out of the country and may sort of feed the way that a particular model is trained and a lot of the data may go out. On the private side, if the model itself is hosted in India, then the risks of data going out are far more mitigated considerably," he said.
The MeitY secretary also highlighted the rise of cybersecurity incidents in India, signaling improved public awareness and enhanced surveillance capabilities. This comes a few days after Krishnan reiterated the need to develop more foundation models in India, focused on issues relevant to the country and the languages spoken here.
India's focus on localising AI models and enforcing stringent data protection laws is intended to establish the country as a global leader in AI while safeguarding national security. In line with this, India has also introduced initiatives like the IndiaAI Mission, which aims to foster AI development through public-private partnerships, GPU procurement and support for startups.
Moreover, data localisation under the DPDP Act aligns with global trends, with nations imposing stricter controls over cross-border data flows. Driven by this requirement, global AI companies like OpenAI, Microsoft, Google and Amazon are either looking to set up or expand their local data storage in India.
India's homegrown AI industry has made significant progress in the past few years, supported by both the government and investors. As a result, more than 200 GenAI startups have raised more than $1.2 Bn since 2020. While the likes of SarvamAI and Krutrim are building Indic LLMs, others like ObserveAI are leveraging AI to offer customised solutions to businesses. AI is also being used across sectors to streamline user experience and operations, with the homegrown GenAI market projected to cross the $17 Bn mark by 2030.
The Indian government is weighing the option of local storage for AI models to mitigate risks and prevent data flow outside the country. This move aligns with broader efforts to strengthen cybersecurity and safeguard citizens' data.
The Indian government is contemplating a significant policy shift towards local storage of artificial intelligence (AI) models. This initiative aims to mitigate risks associated with these models and prevent the outflow of sensitive data from the country. S Krishnan, Secretary of the Ministry of Electronics and Information Technology (MeitY), revealed this development while speaking on the sidelines of the Digital Threat Report 2024 for the BFSI sector [1].
Krishnan emphasized that the government is closely watching Large Language Models (LLMs) in terms of their usage and impact. The primary concern lies in data sharing through portals or mobile apps, which could lead to data leaving the country and potentially being used to train AI models abroad. Hosting AI models within India would considerably mitigate the risks of data outflow [1].
The government is also in the process of finalizing the rules for the Digital Personal Data Protection (DPDP) Act. Krishnan estimates that these rules may be in place within 6-8 weeks and will play a crucial role in preventing personal data leaks. The Act is expected to empower the government to restrict cross-border data transfers and mandate robust security measures for handling personal data [2].
Krishnan noted an increase in cybersecurity incident reporting, indicating greater public awareness. This trend is viewed positively because it helps clarify the extent and nature of cybersecurity problems. The rise in reporting also suggests improvements in the country's overall surveillance capabilities and its capacity to detect incidents across the cybersecurity landscape [1].
The focus on localizing AI models aligns with India's broader ambition to establish itself as a global leader in AI while prioritizing national security. The government has introduced initiatives like the IndiaAI Mission to foster AI development through public-private partnerships, GPU procurement, and support for startups [2].
The data localization requirements under the DPDP Act are prompting global AI companies like OpenAI, Microsoft, Google, and Amazon to consider setting up or expanding their local data storage in India. Meanwhile, India's homegrown AI industry has seen significant progress, with over 200 GenAI startups raising more than $1.2 billion since 2020. Companies like SarvamAI and Krutrim are developing Indic LLMs, while others like ObserveAI are leveraging AI for customized business solutions [2].