2 Sources
[1]
India AI Policy Lacks Child-Specific Safeguards: Experts
With Additional Inputs from Chaitanya Kohli & Prabhanu Kumar Das

"Safety cannot just stop at product design and safety by design. It must continue through monitoring, rapid response, child helplines, and compensation for survivors, because something will go wrong," said Zoe Lambourne, chief operating officer (COO) at Childlight, warning that India's current AI governance approach fails to address how children are harmed in practice.

Lambourne presented Childlight's research on child safety at the India AI Impact Summit in New Delhi during a session that examined how India's emerging AI policy framework is responding to risks faced by children using generative and high-interaction AI systems. Notably, the session, titled Safeguarding Children in India's AI Future: Towards Child-Centric AI Policy and Governance, brought together civil society organisations, platform representatives, academics, and legal experts.

Speakers repeatedly pointed out that while India has horizontal digital regulations, including the Information Technology (IT) Rules and the Digital Personal Data Protection (DPDP) Act, it still lacks a child-specific legal framework to govern AI-mediated harms.

Against this backdrop, Lambourne underscored how children themselves view AI. "So young people in India see AI as powerful and beneficial, but not safe by default," she said. Additionally, she noted, "While many young people describe online life as enjoyable and helpful, only one in four say it feels safe."

Why experts say child-specific AI governance is needed

Presenting Childlight's research, Zoe Lambourne outlined multiple, intersecting reasons why child-specific AI governance is being proposed:

* Scale of harm: "In 2024, we calculated over 300 million children around the world were victims of some form of technology-facilitated abuse or exploitation," Lambourne said.
* Sharp rise in AI-enabled abuse: "In fact, even in the last year, we've seen a 1,325% increase in AI-generated sexual abuse material," she added.
* Evolving nature of exploitation: Lambourne said artificial intelligence is increasingly being used to create both real and synthetic child sexual abuse material, including nudification and deepfakes, while also enabling new forms of exploitation.
* Children see AI as useful, but not safe: Drawing on a Childlight poll of 410 young people across India, Lambourne said children recognise both the benefits and risks of AI.
* Gendered safety gap: "Young women, in particular, are notably more likely than young men to describe online spaces as unsafe, stressful, and mixed, and less likely to say they feel safe online at all," she said.
* Where responsibility lies: "Nearly half of our respondents, 48%, place their primary responsibility for online safety on technology companies, followed by parents and carers and national governments," Lambourne said.

What exactly is being recommended

Why shift from "child safety" to "child wellbeing"

Turning to policy responses, Gaurav Aggarwal of the iSPIRT Foundation explained that an Expert Engagement Group was constituted by the Ministry of Electronics and Information Technology (MeitY) to examine risks to children from AI systems and recommend governance measures. Aggarwal said he was speaking as a volunteer chairing the group on MeitY's behalf.

Aggarwal said the group deliberately chose to reframe the issue from child safety to child wellbeing, arguing that "safety" alone is too narrow a lens. "We should probably change the name from child safety to child wellbeing," Aggarwal said.
He added that safety can become a limiting and paternalistic concept, whereas wellbeing better reflects both the risks and the benefits AI creates for children. Importantly, Aggarwal pointed to children in rural areas, where AI tools can expand access to education and learning opportunities that are otherwise unavailable. Governance frameworks, he argued, must therefore account for positive use cases alongside harms.

Institutional and policy measures proposed

Chitra Iyer, co-founder of Space2Grow, said the recommendations focus on building institutional and policy infrastructure around child wellbeing in the context of AI. The proposed measures include:

* A Child Safety Solutions Observatory to aggregate innovations, research, and best practices on AI-enabled child safety and wellbeing.
* A Global South Working Group on child wellbeing, aimed at shaping policy narratives and solutions rooted in contexts such as India.
* A child safety innovation sandbox to test interventions and safeguards against digital and AI-enabled harms.
* A Youth Safety Advisory Council to ensure meaningful participation of children in policy design and governance.
* Strengthening the legal framework to explicitly address AI-generated child sexual abuse material.
* Mandating child rights and safety impact assessments for high-interaction AI systems used by children.
* Greater investment in digital resilience and AI literacy for children, parents, and educators as preventive infrastructure.

A slide at the India AI Impact Summit listed the Expert Engagement Group's key recommendations on child wellbeing and AI governance.

Explaining why youth participation is critical, Iyer pointed to research showing how high-interaction AI systems are increasingly filling emotional and social gaps for children. "One of the girls in Bangalore said, 'I would rather speak to an AI chatbot and not even to my peers and my parents, because either I'll be trolled or judged,'" she said.

Can platform design substitute legal accountability?

Platform representatives highlighted design-level safeguards but acknowledged their limits. "At Snap, our fundamental start point on product is that the design of the product or the architecture of the product has a far more powerful effect on the experience of the user than anything that we can do afterwards," said Uthara Ganesh, APAC head of public policy at Snap Inc.

Ganesh said Snapchat's design as a primarily one-to-one messaging platform reduces certain risks by default, alongside features such as age-aware responses, location turned off by default, and parental controls. She described these measures as iterative, noting that risks evolve faster than product safeguards. Later in the discussion, Ganesh said Snap's conversational AI, My AI, is designed to be age-aware, pause interactions if misuse is detected, and allow parents to disable the feature for their children through the platform's Family Centre.

At LEGO Education, Atish Gonsalves said the company avoids generative AI entirely in child-facing products. "If we don't feel the tools are safe enough, they shouldn't be in the hands of kids," Gonsalves said. He added, "Nothing leaves the child's computer device. Everything is done locally. Nothing ever leaves. There's no login information. Nothing goes to the cloud, to third parties, or to us."

What happens when harm occurs?

Several speakers cautioned against treating child safety as a transactional compliance problem.
Responding to comparisons between AI safety and financial infrastructure such as payments systems, Ganesh said, "Children's online harms are not a transaction between one account and another account. It is inherently about behavioural, relational harms occurring in the real world," adding that this makes children's digital safety "an order of complexity" that is difficult to address using existing compliance models.

Others said harms facilitated by AI often spill into offline life and persist beyond the platform where they originate, limiting the effectiveness of general-purpose regulation under the IT Act and data protection law.

N.S. Nappinai, senior advocate at the Supreme Court of India, said child safety frameworks must also account for harm caused by children themselves. "It's important to keep children safe, but the second part is keeping children or others safe from children too," she said. Nappinai said many instances of harassment or deepfake abuse in schools are dismissed as pranks despite constituting criminal offences. She stressed that minors are not outside the scope of the law and that juvenile justice mechanisms apply.

On remedies, she advised schools and parents to pursue rapid takedowns through direct engagement with the police. "If you want speedy takedowns, go to a police station, sit there, and make the system work for you," Nappinai said. "Take my word for it. I've done it. It works."

Taken together, the discussion exposed a core tension in India's AI governance: platforms continue to iterate on design safeguards while victims rely on ad hoc remedies, even as regulation remains largely reactive. Speakers argued that without child-specific obligations, impact assessments, and accountability for AI systems, harms to children will continue to be addressed only after they occur.
[2]
AI safety for youth: IEEE, UN experts ideate at India AI Impact Summit
India pushes trusted, safe AI frameworks for next generation

The conversation around artificial intelligence safety could not be more urgent for young kids and adolescents with impressionable and developing minds. At the India AI Impact Summit 2026, a coalition spanning IEEE, UNESCO, OECD and India's own public technology institutions made one thing abundantly clear: protecting young users from AI's unintended consequences is no longer a theoretical exercise. It is an urgent design and governance challenge unfolding in real time.

Amir Banifatemi of AI Commons set the tone with a stark assessment of the structural gap shaping AI safety today. "The problem is that we're facing a two-speed problem. On one side you have institutions and regulators coming up with frameworks based on principles... but at the same time we see an increasing growth of AI deployment at a very rapid scale. This creates a chasm because we don't sync them together, and policy may fall behind instead of being preventive about it."

For Banifatemi, the solution lies in alignment: "So if we really want a framework of trust, we need to sync innovation and policy and build systems that are trustworthy, accountable and transparent."

That trust deficit becomes sharper when young users enter the frame. Banifatemi pointed to a rapidly evolving threat landscape where misinformation and algorithmic manipulation are no longer edge cases. "When it comes to misinformation, manipulation or deepfakes, there are really three issues. People can misuse systems, models can make mistakes because of poor data or transparency, and we're entering a third wave where autonomous agents may change objectives." The result, he warned, is a deeper epistemic crisis. "So what is left for us humans -- how can we trust information anymore? The answer is to build frameworks of trust and accountability that keep pace with innovation."

For policymakers and parents alike, the risks are already visible. Karine Perset of the OECD brought a personal lens to the debate, underscoring how quickly generative AI has outpaced digital literacy among younger users. "I see what my teenagers do with their social media and chatbots, and many things that I don't see. The things I do see are pretty scary because they show that they're not prepared, they're not equipped to deal with so much information." She added that younger users lack the contextual filters adults take for granted. "They have no way to navigate this world, and they're younger and more vulnerable. So they need environments where the trustworthiness of information is ensured," Perset highlighted.

Yet the scale of the challenge makes isolated solutions ineffective. "The challenges are unprecedented in scale and complexity, and no single actor can address them alone," Perset said. "We need collaborative cross-disciplinary efforts that combine policy innovation with technical ingenuity. Policies and technical solutions must move hand in hand."

At UNESCO, the focus is on the deeper cognitive and societal implications of AI-driven information systems. Mariagrazia Squicciarini captured the disorientation of a synthetic media environment. "The inherent challenge of the AI era is the difficulty of distinguishing what is real from what is not. It's like walking in a dark room without being able to see anything -- yet we do this every day online."
For younger users navigating hyper-personalised feeds, that confusion compounds quickly. "When this is coupled with the quantity of information and hyper-personalization, the risks multiply. Youth is a priority group for UNESCO because they are already at the center of this ecosystem."

Crucially, she argued that young users must be participants in shaping AI governance, not merely its subjects. "Young people were born digital and trust technologies because they see them as part of their lives. That is why they must not only be protected but included in shaping solutions." The stakes extend beyond literacy alone. "Education, cognitive skills and emotional skills are intertwined with AI literacy. There is nothing more dangerous than taking trustworthiness of information for granted," Squicciarini summed up.

India's approach offers a parallel track focused on infrastructure-level trust. Mohammed Misbahuddin of C-DAC India framed AI safety as an extension of existing digital trust frameworks. "Information integrity today is about knowing whether what we see is real or fake. With deepfakes and synthetic media, it is becoming harder to distinguish authenticity."

Misbahuddin pointed to India's experience with population-scale digital infrastructure as a template. "India has built trust layers like digital identity, digital signatures and UPI at population scale, showing that trust can be engineered into infrastructure. Exactly the same trust-based framework is required for AI."

That framework is now being extended directly into education. "Safe and trusted AI is now a pillar under the IndiaAI Mission. We are introducing AI education for students from class 8 to 12 along with age-appropriate and trusted design principles," Misbahuddin said. "Youth AI must be built with trust and accountability from the start. That is critical for the next generation."

But the long-term developmental effects of constant AI exposure remain uncertain. Yuko Harayama of RIKEN warned that the world is effectively running a real-time experiment on children. "We don't yet know the long-term impact of using AI every day on children's development. Even very young children are already using these tools, and it changes how they interact and form values." Waiting for perfect data is not an option. "We need scientific evidence and collaboration across countries and cultures to understand these impacts. We don't have time to wait because they are growing up now."

For standards bodies like IEEE, the gap between technological capability and governance maturity is now the central risk. Alpesh Shah of the IEEE Standards Association was blunt: "The problem hasn't been the technology -- it's everything else. Technology has outpaced how we think about governance and safety."

That mismatch demands inclusive frameworks. "That's why inclusion is critical, especially including youth who understand these systems better than anyone. Age-appropriate design and global standards must work together to protect them," Shah highlighted.

The solution, he argued, lies in collective action rather than institutional silos. "No one can do this alone because no single institution has the context for every problem. Multiple governance models and standards are required to address misinformation and protect young users at scale. Partnerships are the only way forward," Shah summed up.
If there was a unifying message from the summit, it was that safeguarding the next generation in an AI-saturated world will require nothing less than synchronized global cooperation, in which standards bodies, governments, educators and industry come together to ensure that trust does not become the price of technological progress.
At the India AI Impact Summit in New Delhi, experts from IEEE, UNESCO, OECD, and Childlight revealed that India's AI policy framework lacks child-specific safeguards despite a 1,325% surge in AI-generated sexual abuse material. The summit brought together global institutions to address urgent gaps in AI governance, proposing measures like a Youth Safety Advisory Council and mandatory child rights impact assessments for high-interaction AI systems.
The India AI Impact Summit held in New Delhi exposed a critical weakness in India's emerging AI policy framework: the absence of child-specific safeguards despite mounting evidence of AI-enabled harm. Zoe Lambourne, chief operating officer at Childlight, delivered stark findings during a session titled "Safeguarding Children in India's AI Future: Towards Child-Centric AI Policy and Governance." She revealed that in 2024, over 300 million children worldwide became victims of technology-facilitated abuse or exploitation, with a staggering 1,325% increase in AI-generated sexual abuse material in just one year [1]. Lambourne warned that AI safety cannot stop at product design but must extend through monitoring, rapid response systems, child helplines, and survivor compensation.
Experts at the summit repeatedly emphasized that while India has horizontal digital regulations like the Information Technology Rules and the Digital Personal Data Protection (DPDP) Act, these frameworks fail to address AI-mediated harms specifically targeting children [1]. Drawing from a Childlight poll of 410 young people across India, Lambourne noted that children see AI as powerful and beneficial, but not safe by default. Only one in four young Indians describe their online life as safe, with young women notably more likely to describe online spaces as unsafe and stressful. Nearly 48% of respondents place primary responsibility for online safety on technology companies, followed by parents and national governments [1].
Amir Banifatemi from AI Commons identified a fundamental structural gap: "We're facing a two-speed problem. On one side you have institutions and regulators coming up with frameworks based on principles, but at the same time we see an increasing growth of AI deployment at a very rapid scale" [2]. This chasm between policy development and innovation deployment means regulatory measures fall behind instead of being preventive. Banifatemi pointed to three escalating issues: people misusing systems, models making mistakes due to poor data or lack of transparency, and a third wave where autonomous agents may change objectives, creating deeper epistemic crises around trust and information integrity [2].

Karine Perset from the OECD brought a personal perspective to AI safety for youth, observing how quickly generative AI has outpaced digital literacy among younger users. "I see what my teenagers do with their social media and chatbots, and many things that I don't see. The things I do see are pretty scary because they show that they're not prepared, they're not equipped to deal with so much information," she stated [2]. Younger users lack the contextual filters adults possess, making them particularly vulnerable to misinformation, deepfakes, and algorithmic manipulation. Perset emphasized that no single actor can address these challenges alone, requiring collaborative cross-disciplinary efforts combining policy innovation with technical solutions.

Mariagrazia Squicciarini from UNESCO captured the disorientation of synthetic media environments: "The inherent challenge of the AI era is the difficulty of distinguishing what is real from what is not. It's like walking in a dark room without being able to see anything -- yet we do this every day online" [2]. When coupled with hyper-personalization and information overload, risks multiply rapidly for young users. Squicciarini argued that young people must not only be protected but actively included in shaping AI governance solutions, noting that education, cognitive skills, and emotional development are intertwined with AI literacy. "There is nothing more dangerous than taking trustworthiness of information for granted," she emphasized [2].
Gaurav Aggarwal of the iSPIRT Foundation, speaking as a volunteer chairing an Expert Engagement Group constituted by the Ministry of Electronics and Information Technology (MeitY), explained a deliberate reframing from "child safety" to "child wellbeing." Aggarwal argued that safety alone is too narrow and paternalistic, whereas wellbeing better reflects both the risks and benefits AI creates for children [1]. He pointed to children in rural areas where AI tools expand access to education and learning opportunities otherwise unavailable, arguing that AI governance frameworks must account for positive use cases alongside harms.

Chitra Iyer, co-founder of Space2Grow, outlined comprehensive recommendations focusing on building institutional and policy infrastructure around child wellbeing. Proposed measures include establishing a Child Safety Solutions Observatory to aggregate innovations and research, a Global South Working Group on child wellbeing to shape policy narratives rooted in contexts like India, and an innovation sandbox to test interventions against AI-enabled harms [1]. Critically, the recommendations call for a Youth Safety Advisory Council to ensure meaningful participation of children in policy design, strengthening legal frameworks to explicitly address AI-generated child sexual abuse material, and mandating child rights impact assessments for high-interaction AI systems used by children. Greater investment in digital resilience and AI literacy also features prominently in the proposed accountability measures [1].

Summarized by Navi