Curated by THEOUTPOST
On Fri, 20 Sept, 8:04 AM UTC
2 Sources
[1]
India's AI Strides may face Privacy Law Headwinds
A host of companies including information technology firms, banks and cloud storage providers are seeking legal advice amid apprehensions that their use of generative artificial intelligence (GenAI) could run afoul of the provisions of the data law, said industry executives. Many companies are building proprietary GenAI models without enough transparency about the personal data being processed for training purposes, experts said, adding that this could go against the principles of lawful consent, fairness and transparency prescribed in the Digital Personal Data Protection (DPDP) Act.

"Ideally, using publicly available data for GenAI training without appropriate consent stands in conflict with DPDP or copyright laws," said Joebin Devassy, senior partner at Desai & Diwanji.

The DPDP Act provides for the protection of personal data while allowing the processing of such data for lawful purposes. The legislation was passed by Parliament in August last year. With privacy being a fundamental right, companies are worried about legal liabilities that could arise from non-compliance, experts said.

GenAI models generate new output, learn and reason by themselves and adapt to new information, Devassy said, adding, "In such a flow, establishing breach of consent becomes challenging. AI is a complex animal in the court of law."

Companies are consulting lawyers on issues such as how to define the scope of their privacy policies to seek appropriate user consent, the contractual obligations needed for data processors offering AI-as-a-service, and the global laws and regulations that apply to multinational data exchange.
"The DPDP Act also mandates the principles of purpose limitation and data minimisation, whereas models trained on the same data are being used for multiple applications and there is uncertainty on whether the personal data being processed is limited to what is necessary. Further, under the Act, data fiduciaries cannot bundle all processing activities under a blanket consent," said Akshayy S Nanda, partner (competition law and data privacy practice) at Delhi-based law firm Saraf & Partners.

He further said, "Can the model delete select parts of its memory? Or does it need retraining? Are companies ready to bear that cost? These are some of the pressing questions we hear."

While scores of lawsuits over copyright infringement are pending in global courts without any strong precedents on GenAI's violation of citizens' rights, Indian companies want to future-proof themselves from legal shocks, said industry executives.

Tata Consultancy Services (TCS), the world's second-most valued IT services brand, said it is continuously seeking to understand and navigate the evolving legal landscape through proactive risk management and adherence to regulatory standards. "This includes ensuring compliance with data privacy laws, like the EU's GDPR (General Data Protection Regulation) or India's DPDP Act," said Siva Ganesan, global head of AI.Cloud business unit, TCS. "By building robust governance frameworks and mechanisms for consent management and data retention, organisations can future-proof their business practices and continuously monitor global regulatory trends on IP, transparency and fairness."

It is not just personal or publicly available data that could be misused, according to experts. Inferences about an individual are also considered personal, they said.
"Inaccuracy and bias are the most critical concerns for companies who are experimenting with GenAI applications in marketing, hiring, digital lending, insurance claims, etc.," said Aadya Misra, counsel at Bengaluru-based Spice Route Legal. "Who is responsible if the model hallucinates or collapses? Is it the data fiduciary, or developer companies such as OpenAI?"

However, experts said, doors must not be shut on large language models (LLMs) for fear of future legal setbacks. AI companies such as OpenAI and Google have started to indemnify their customers for any kind of lawsuits they may encounter because of their LLM use, said Paramdeep Singh, co-founder of Shorthills AI, which provides model training solutions. However, this may involve legal complexities, he said, adding, "AI applications are currently treated as experimental, and we as data processors, do not hold responsibility for inaccuracies and hallucinations. And so, our customers (data fiduciaries) do not force this as a contractual obligation."

"Organisations do understand that AI deployments will always have some element of risk," said Vijay Navaluri, co-founder, Supervity.ai, which builds AI agents for clients including Daikin, Mondelez and Ultratech. "To address this, Supervity follows the PACT (privacy, accuracy, cost and time) framework, which helps companies to structurally think through what weight needs to be assigned to which areas."

For instance, for highly sensitive data such as that in banking, financial services and insurance, and healthcare, companies prefer private LLMs, which entail strict access controls and use of techniques like data masking and data anonymisation. In the case of applications in finance and accounting, which require the highest level of accuracy, all transactions are approved after human review, said Navaluri.
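The data masking and anonymisation techniques mentioned above can be illustrated with a minimal Python sketch. This is an assumption-laden toy example, not any company's actual pipeline: the field names and the salted-hash pseudonymisation approach are hypothetical choices for illustration.

```python
import hashlib

def mask_account(account: str) -> str:
    """Mask an identifier, keeping only the last 4 characters visible."""
    return "*" * (len(account) - 4) + account[-4:]

def pseudonymise(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash, so records
    can still be linked for analytics without exposing the raw value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

# Hypothetical record before it is passed to a model or shared downstream.
record = {"name": "A. Kumar", "account": "1234567890123456"}
masked = {
    "name": pseudonymise(record["name"], salt="s3cret"),
    "account": mask_account(record["account"]),
}
print(masked["account"])  # ************3456
```

Real deployments would layer this with access controls, key management for the salt, and techniques such as tokenisation or differential privacy, depending on the sensitivity of the data.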
[2]
India's AI strides run into privacy law headwinds
Firms across sectors including IT, banking and cloud storage are seeking legal guidance over concerns that their use of generative artificial intelligence (GenAI) may not comply with data protection laws, said industry executives.

Many companies are building proprietary GenAI models without enough transparency about the personal data being processed for training purposes, experts said, adding that this could go against the principles of lawful consent, fairness and transparency prescribed in the Digital Personal Data Protection (DPDP) Act. The legislation, passed by Parliament in August last year, provides for the protection of the personal data of individuals while allowing the processing of such data for lawful purposes. With privacy being a fundamental right, companies are worried about legal liabilities that could arise from non-compliance, experts said.

"Ideally, using publicly available data for GenAI training without appropriate consent stands in conflict with DPDP or copyright laws," said Joebin Devassy, senior partner at Desai & Diwanji. GenAI models generate new output, learn and reason by themselves and adapt to new information, he said, adding, "In such a flow, establishing breach of consent becomes challenging. AI is a complex animal in the court of law."

Companies are consulting lawyers on issues such as how to define the scope of their privacy policies to seek appropriate user consent, the contractual obligations needed for data processors offering AI-as-a-service, and the global laws and regulations that apply to multinational data exchange.
"The DPDP Act also mandates the principles of purpose limitation and data minimisation, whereas models trained on the same data are being used for multiple applications and there is uncertainty whether the personal data being processed is limited to what is necessary. Further, under the Act, data fiduciaries cannot bundle all processing activities under a blanket consent," said Akshayy S Nanda, partner (competition law and data privacy practice) at Delhi-based law firm Saraf & Partners.

He further said, "Can the model delete select parts of its memory? Or does it need retraining? Are companies ready to bear that cost? These are some of the pressing questions we hear."

While scores of lawsuits over copyright infringement are pending in global courts without any strong precedents on GenAI's violation of citizens' rights, Indian companies want to future-proof themselves from legal shocks, said industry executives.

Tata Consultancy Services (TCS), the world's second-most valued IT services company, said it is continuously seeking to understand and navigate the evolving legal landscape through proactive risk management and adherence to regulatory standards. "This includes ensuring compliance with data privacy laws, like the EU's GDPR (General Data Protection Regulation) or India's DPDP Act," said Siva Ganesan, global head of AI.Cloud business unit, TCS. "By building robust governance frameworks and mechanisms for consent management and data retention, organisations can future-proof their business practices and continuously monitor global regulatory trends on IP, transparency and fairness."

It is not just personal or publicly available data that could be misused, according to experts. Inferences about an individual are also considered personal, they said.
"Inaccuracy and bias are the most critical concerns for companies who are experimenting with GenAI applications in marketing, hiring, digital lending, insurance claims, etc.," said Aadya Misra, counsel at Bengaluru-based Spice Route Legal. "Who is responsible if the model hallucinates or collapses? Is it the data fiduciary? Or developer companies such as OpenAI?"

Legal Guardrails

However, experts said, doors must not be shut on large language models (LLMs) for fear of future legal setbacks. AI companies such as OpenAI and Google have started to indemnify their customers for any kind of lawsuits they may encounter because of their LLM use, said Paramdeep Singh, co-founder of Shorthills AI, which provides model training solutions. However, this may involve legal complexities, he said, adding, "AI applications are currently treated as experimental, and we as data processors, do not hold responsibility for inaccuracies and hallucinations. And so, our customers (data fiduciaries) do not force this as a contractual obligation."

"Organisations do understand that AI deployments will always have some element of risk," said Vijay Navaluri, co-founder, Supervity.ai, which builds AI agents for clients including Daikin, Mondelez and Ultratech. "To address this, Supervity follows the PACT (privacy, accuracy, cost and time) framework, which helps companies to structurally think through what weightage needs to be assigned to which areas."

For instance, for highly sensitive data such as that in banking, financial services and insurance, and healthcare, companies prefer private LLMs, which entail strict access controls and use of techniques such as data masking and data anonymisation. In the case of applications in finance and accounting, which require the highest level of accuracy, all transactions are approved after human review, said Navaluri.

These technologies are evolving at a breakneck speed, which laws cannot catch up with, said Spice Route's Misra.
"More than AI law or regulation, we need AI ethics and principles. Self-regulation by companies or technology industry bodies is the need of the hour," said Misra.
India's rapid progress in artificial intelligence development is encountering potential obstacles due to stringent privacy regulations. The country's AI sector growth may be hindered by data protection laws, raising concerns about the balance between innovation and privacy.
India has been making significant strides in the field of artificial intelligence (AI), positioning itself as a global contender in this rapidly evolving technology. The country's AI sector has seen remarkable growth, with numerous startups and established tech companies investing heavily in AI research and development [1].
However, the burgeoning AI industry in India is now facing potential hurdles in the form of stringent privacy laws. The implementation of new data protection regulations aims to safeguard individual privacy but may inadvertently impede the progress of AI development [2].
The crux of the issue lies in finding a delicate balance between fostering innovation in the AI sector and ensuring robust protection of personal data. Industry experts argue that overly restrictive privacy laws could hamper the collection and utilization of large datasets, which are crucial for training advanced AI models [1].
Small and medium-sized AI startups are particularly vulnerable to these regulatory challenges. These companies often rely on access to diverse datasets to develop and refine their AI algorithms. The new privacy laws may limit their ability to collect and process data, potentially stifling innovation and competitiveness in the sector [2].
The Indian government faces the task of reconciling its ambitious AI goals with the need for robust data protection. Policymakers are under pressure to create a regulatory framework that encourages AI innovation while simultaneously addressing privacy concerns. This balancing act is crucial for maintaining India's position in the global AI race [1].
Leading figures in India's tech industry are calling for greater clarity and flexibility in the implementation of privacy laws. They emphasize the need for regulations that protect individual privacy without unduly restricting the AI sector's growth potential. Collaborative efforts between the government, industry stakeholders, and privacy advocates are being proposed to address these concerns [2].
India's situation reflects a global trend where countries are grappling with the dual challenges of promoting AI development and ensuring data privacy. As nations worldwide implement stricter data protection measures, the AI industry is adapting to a new regulatory landscape. India's approach to this issue could set a precedent for other emerging AI hubs [1].
Reference
[1] India's AI Strides may face Privacy Law Headwinds
[2] India's AI strides run into privacy law headwinds