India's New DPDP Rules Set to Transform AI Data Governance and Compliance Standards


India's Digital Personal Data Protection Rules introduce stringent consent-based frameworks that will significantly reshape how AI companies collect, process, and retain personal data, raising compliance standards across the industry.

New Regulatory Framework Reshapes AI Data Practices

India's newly notified Digital Personal Data Protection (DPDP) Rules are fundamentally transforming how artificial intelligence companies handle personal data, establishing stringent compliance requirements that will reshape the entire AI ecosystem. The rules, which work in conjunction with the DPDP Act 2023, introduce a consent-centric framework that applies to all digital personal data processed within India's jurisdiction [1].

Source: Economic Times

Legal and AI industry experts broadly agree that these regulations will spur more rigorous governance across AI pipelines while creating new opportunities for responsible innovation. The framework requires companies to obtain free, specific, and informed consent for each specified purpose, fundamentally changing how AI training datasets are assembled and maintained [2].

Impact on AI Model Development and Training

IndiaAI Mission chief executive Abhishek Singh emphasized that developers using personal data for training must now implement anonymization and privacy-preserving processes in line with the Act's requirements. "If anyone is having any data for training AI models, if there are personal data attributes there, then they have to do anonymisation, they have to do privacy preservation and then only use it for AI training," Singh explained [1].
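A minimal sketch of the kind of preprocessing step Singh describes, with hypothetical field names of our own: direct identifiers are stripped from each record and replaced by salted-hash tokens (strictly speaking pseudonymisation rather than full anonymisation) before the record enters a training corpus.

```python
import hashlib

# Hypothetical schema: fields treated as direct identifiers (an assumption
# for illustration, not a list from the DPDP Rules).
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def anonymize_record(record: dict, salt: str = "per-dataset-salt") -> dict:
    """Remove direct identifiers, replacing each with a salted-hash token
    so records stay linkable for deduplication without the raw value."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[f"{key}_token"] = digest[:16]  # pseudonymous token
        else:
            out[key] = value  # non-identifying attribute kept as-is
    return out

record = {"name": "A. Sharma", "email": "a@example.com", "age": 34}
clean = anonymize_record(record)
assert "name" not in clean and "email" not in clean
assert clean["age"] == 34
```

Real deployments would go further (generalising quasi-identifiers such as age or location), since hashing alone does not make re-identification impossible.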

Source: MediaNama

Supratim Chakraborty, partner at law firm Khaitan & Co, described the rules as marking a major shift for companies integrating AI into core products and workflows. "With AI now embedded in core systems, firms must rigorously audit how personal data is sourced, labelled, and used across model training and inference. Models that cannot evidence compliant data handling will not be viable in India's regulatory environment," he stated [1].

Technical Challenges and Implementation Requirements

The new framework introduces significant technical challenges, particularly around data erasure and consent management. Companies must now design systems capable of selectively removing data from training pipelines when users withdraw consent, a requirement that could force fundamental changes in how AI models are developed and maintained [2].
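One way such selective removal can be approached, shown here as a minimal sketch with hypothetical class and field names, is to gate every training batch through a consent registry so that records from subjects who have withdrawn consent never reach the next training run.

```python
# Hypothetical sketch: a consent registry consulted at pipeline time, so a
# withdrawal recorded before the next run excludes that subject's records.
class ConsentRegistry:
    def __init__(self):
        self._withdrawn: set[str] = set()

    def withdraw(self, subject_id: str) -> None:
        """Record that a data subject has withdrawn consent."""
        self._withdrawn.add(subject_id)

    def has_consent(self, subject_id: str) -> bool:
        return subject_id not in self._withdrawn

def filter_training_batch(batch: list[dict], registry: ConsentRegistry) -> list[dict]:
    """Keep only records whose subject still consents."""
    return [r for r in batch if registry.has_consent(r["subject_id"])]

registry = ConsentRegistry()
batch = [{"subject_id": "u1", "x": 1}, {"subject_id": "u2", "x": 2}]
registry.withdraw("u2")
assert filter_training_batch(batch, registry) == [{"subject_id": "u1", "x": 1}]
```

This only keeps withdrawn data out of future training runs; removing its influence from models that were already trained (machine unlearning or retraining) is the harder problem the article alludes to.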

Nikhil Jhanji, Senior Product Manager at IDfy, emphasized that "traceability and explainability is now non-negotiable," recommending that teams "embed logging directly into the data pipeline so every ingestion, preprocessing, or training event leaves a verifiable trail" [2].
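The "verifiable trail" Jhanji describes can be sketched (class and field names here are our own assumptions, not from IDfy) as a hash-chained event log: each ingestion, preprocessing, or training event commits to the hash of the previous entry, so any later tampering with an earlier event is detectable.

```python
import hashlib
import json
import time

class PipelineAuditLog:
    """Append-only log where each entry chains the previous entry's hash."""

    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, stage: str, detail: dict) -> None:
        """Append one pipeline event (e.g. ingestion, training)."""
        entry = {"stage": stage, "detail": detail,
                 "prev": self._last_hash, "ts": time.time()}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("stage", "detail", "prev", "ts")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = PipelineAuditLog()
log.record("ingestion", {"source": "dataset-v1"})
log.record("training", {"model": "clf-v1"})
assert log.verify()
log.entries[0]["detail"]["source"] = "tampered"
assert not log.verify()
```

A production trail would also need durable storage and independent anchoring of the head hash, but the chaining idea is the core of making the log verifiable rather than merely present.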

Vaibhav Velhankar, cofounder and CTO at Segumento, highlighted the cultural shift required within the AI ecosystem. "Every dataset used in training must now have demonstrable consent, clear purpose limitation, proper labelling, traceability and secure handling. Models can no longer be trained on ambiguous datasets or undocumented workflows," he explained [1].
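As an illustration only (the names and fields below are assumptions, not a scheme from any cited framework), a per-dataset manifest can carry the consent reference, permitted purposes, and labelling provenance Velhankar lists, with a simple purpose-limitation gate that refuses training runs whose purpose the consent does not cover.

```python
from dataclasses import dataclass

# Hypothetical provenance metadata attached to every training dataset.
@dataclass(frozen=True)
class DatasetManifest:
    dataset_id: str
    consent_reference: str   # pointer to stored consent artefacts
    purposes: frozenset      # purposes the data subjects consented to
    labelling_process: str   # how labels were produced and by whom

def check_purpose(manifest: DatasetManifest, intended_purpose: str) -> bool:
    """Purpose-limitation gate: proceed only if this run's purpose
    is one the recorded consent covers."""
    return intended_purpose in manifest.purposes

manifest = DatasetManifest(
    dataset_id="ds-001",
    consent_reference="consent-store/ds-001",
    purposes=frozenset({"fraud-detection"}),
    labelling_process="in-house annotation, documented",
)
assert check_purpose(manifest, "fraud-detection")
assert not check_purpose(manifest, "ad-targeting")
```

Making the manifest immutable (`frozen=True`) reflects the "demonstrable" requirement: the record of what was consented to should not be silently editable after the fact.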

TheOutpost.ai


© 2025 Triveous Technologies Private Limited