India AI Impact Summit exposes critical gaps in AI policy for child safety and wellbeing

At the India AI Impact Summit in New Delhi, experts from IEEE, UNESCO, OECD, and Childlight revealed that India's AI policy framework lacks child-specific safeguards despite a reported 1,325% year-on-year surge in AI-generated sexual abuse material. The summit brought together global institutions to address urgent gaps in AI governance, proposing measures such as a Youth Safety Advisory Council and mandatory child rights impact assessments for high-interaction AI systems.

India AI Impact Summit Highlights Urgent Need for Child-Specific Safeguards

The India AI Impact Summit held in New Delhi exposed a critical weakness in India's emerging AI policy framework: the absence of child-specific safeguards despite mounting evidence of AI-enabled harm. Zoe Lambourne, chief operating officer at Childlight, delivered stark findings during a session titled "Safeguarding Children in India's AI Future: Towards Child-Centric AI Policy and Governance." She revealed that in 2024, over 300 million children worldwide became victims of technology-facilitated abuse or exploitation, with a staggering 1,325% increase in AI-generated sexual abuse material in just one year [1]. Lambourne warned that AI safety cannot stop at product design but must extend through monitoring, rapid response systems, child helplines, and survivor compensation.

Source: Digit

AI Governance Gaps Leave Young Users Vulnerable

Experts at the summit repeatedly emphasized that while India has horizontal digital regulations like the Information Technology Rules and the Digital Personal Data Protection (DPDP) Act, these frameworks fail to address AI-mediated harms specifically targeting children [1]. Drawing from a Childlight poll of 410 young people across India, Lambourne noted that children see AI as powerful and beneficial, but not safe by default. Only one in four young Indians describe their online life as safe, with young women notably more likely to describe online spaces as unsafe and stressful. Nearly 48% of respondents place primary responsibility for online safety on technology companies, followed by parents and national governments [1].

Source: MediaNama

Global Experts Warn of Two-Speed Problem in AI Safety

Amir Banifatemi from AI Commons identified a fundamental structural gap: "We're facing a two-speed problem. On one side you have institutions and regulators coming up with frameworks based on principles, but at the same time we see an increasing growth of AI deployment at a very rapid scale" [2]. This chasm between policy development and deployment means regulatory measures lag behind harms rather than prevent them. Banifatemi pointed to three escalating issues: people misusing systems, models making mistakes due to poor data or a lack of transparency, and a third wave in which autonomous agents may change their own objectives, creating deeper epistemic crises around trust and information integrity [2].

Lack of Digital Literacy Compounds Risk for Adolescents

Karine Perset from the OECD brought a personal perspective to AI safety for youth, observing how quickly generative AI has outpaced digital literacy among younger users. "I see what my teenagers do with their social media and chatbots, and many things that I don't see. The things I do see are pretty scary because they show that they're not prepared, they're not equipped to deal with so much information," she stated [2]. Younger users lack the contextual filters adults possess, making them particularly vulnerable to misinformation, deepfakes, and algorithmic manipulation. Perset emphasized that no single actor can address these challenges alone; meeting them requires collaborative, cross-disciplinary efforts that combine policy innovation with technical solutions.

UNESCO Calls for Youth Participation in Shaping Trustworthy AI Systems

Mariagrazia Squicciarini from UNESCO captured the disorientation of synthetic media environments: "The inherent challenge of the AI era is the difficulty of distinguishing what is real from what is not. It's like walking in a dark room without being able to see anything—yet we do this every day online" [2]. When this difficulty is coupled with hyper-personalization and information overload, risks multiply rapidly for young users. Squicciarini argued that young people must not only be protected but also actively included in shaping AI governance solutions, noting that education, cognitive skills, and emotional development are intertwined with AI literacy. "There is nothing more dangerous than taking trustworthiness of information for granted," she emphasized [2].

Shift from Child Safety to Child Wellbeing Framework

Gaurav Aggarwal of the iSPIRT Foundation, speaking as a volunteer chairing an Expert Engagement Group constituted by the Ministry of Electronics and Information Technology (MeitY), explained a deliberate reframing from "child safety" to "child wellbeing." Aggarwal argued that safety alone is too narrow and paternalistic, whereas wellbeing better reflects both the risks and the benefits AI creates for children [1]. He pointed to children in rural areas, where AI tools expand access to education and learning opportunities otherwise unavailable, arguing that AI governance frameworks must account for positive use cases alongside harms.

Proposed Institutional Measures and Child Rights Impact Assessments

Chitra Iyer, co-founder of Space2Grow, outlined comprehensive recommendations focused on building institutional and policy infrastructure around child wellbeing. Proposed measures include establishing a Child Safety Solutions Observatory to aggregate innovations and research, a Global South Working Group on child wellbeing to shape policy narratives rooted in contexts like India's, and an innovation sandbox to test interventions against AI-enabled harms [1]. Critically, the recommendations call for a Youth Safety Advisory Council to ensure meaningful participation of children in policy design, strengthened legal frameworks that explicitly address AI-generated child sexual abuse material, and mandatory child rights impact assessments for high-interaction AI systems used by children. Greater investment in digital resilience and AI literacy also features prominently in the proposed accountability measures [1].
