3 Sources
[1]
Recommendations for studying the impact of AI on young people's mental health
A new peer-reviewed paper from experts at the Oxford Internet Institute, University of Oxford, highlights the need for a clear framework for AI research, given the rapid adoption of artificial intelligence by children and adolescents. Its recommendations are based on a critical appraisal of current shortcomings in the research on how digital technologies impact young people's mental health, and an in-depth analysis of the challenges underlying those shortcomings. The paper, "From Social Media to Artificial Intelligence: Improving Research on Digital Harms in Youth," published 21 January in The Lancet Child & Adolescent Health, calls for a "critical re-evaluation" of how we study the impact of internet-based technologies on young people's mental health, and outlines where future AI research can learn from the pitfalls of social media research. Existing limitations include inconsistent findings and a lack of longitudinal, causal studies.

The analysis and recommendations by the Oxford researchers are divided into four sections: a brief review of recent research on the effects of technology on children's and adolescents' mental health, highlighting key limitations of the evidence; an analysis of the challenges in research design and interpretation that underlie these limitations; proposals for improving research methods to address these challenges, with a focus on how they apply to the study of AI and children's wellbeing; and concrete steps for collaboration between researchers, policymakers, big tech, caregivers and young people.

"Research on the effects of AI, as well as evidence for policymakers and advice for caregivers, must learn from the issues that have faced social media research," said Dr Karen Mansfield, postdoctoral researcher at the OII and lead author of the paper. "Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media."

The paper describes how the impact of social media is often interpreted as one isolated causal factor, an approach that neglects different types of social media use as well as contextual factors that influence both technology use and mental health. Without rethinking this approach, future research on AI risks getting caught up in a new media panic, as happened with social media. Other challenges include measures of social media use that quickly become outdated, and data that frequently excludes the most vulnerable young people. The authors propose that effective research on AI should ask questions that don't implicitly problematise AI, ensure causal designs, and prioritise the most relevant exposures and outcomes.
The paper concludes that as young people adopt new ways of interacting with AI, research and evidence-based policy will struggle to keep up. However, by ensuring that our approach to investigating the impact of AI on young people learns from the shortcomings of past research, we can more effectively regulate how AI is integrated into online platforms and how those platforms are used.

"We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way," said Professor Andrew Przybylski, OII Professor of Human Behaviour and Technology and contributing author of the paper. "Without building on past lessons, in ten years we could be back to square one, viewing the place of AI in much the same way we feel helpless about social media and smartphones. We have to take active steps now so that AI can be safe and beneficial for children and adolescents."
[2]
Oxford researchers call for framework to study AI's impact on youth mental health
University of Oxford, 21 January 2025

A new peer-reviewed paper from experts at the Oxford Internet Institute, University of Oxford, highlights the need for a clear framework for AI research, given the rapid adoption of artificial intelligence by children and adolescents using digital devices to access the internet and social media. Its recommendations are based on a critical appraisal of current shortcomings in the research on how digital technologies impact young people's mental health, and an in-depth analysis of the challenges underlying those shortcomings.

The paper, "From Social Media to Artificial Intelligence: Improving Research on Digital Harms in Youth," published 21 January in The Lancet Child & Adolescent Health, calls for a "critical re-evaluation" of how we study the impact of internet-based technologies on young people's mental health, and outlines where future AI research can learn from the pitfalls of social media research. Existing limitations include inconsistent findings and a lack of longitudinal, causal studies.

The analysis and recommendations by the Oxford researchers are divided into four sections:

1. A brief review of recent research on the effects of technology on children's and adolescents' mental health, highlighting key limitations of the evidence.
2. An analysis of the challenges in the design and interpretation of research that the authors believe underlie these limitations.
3. Proposals for improving research methods to address these challenges, with a focus on how they can apply to the study of AI and children's wellbeing.
4. Concrete steps for collaboration between researchers, policymakers, big tech, caregivers and young people.

"Research on the effects of AI, as well as evidence for policymakers and advice for caregivers, must learn from the issues that have faced social media research," said Dr. Karen Mansfield, postdoctoral researcher at the OII and lead author of the paper.
"Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media."

The paper describes how the impact of social media is often interpreted as one isolated causal factor, an approach that neglects different types of social media use as well as contextual factors that influence both technology use and mental health. Without rethinking this approach, future research on AI risks getting caught up in a new media panic, as happened with social media. Other challenges include measures of social media use that quickly become outdated, and data that frequently excludes the most vulnerable young people. The authors propose that effective research on AI should ask questions that don't implicitly problematise AI, ensure causal designs, and prioritise the most relevant exposures and outcomes.

The paper concludes that as young people adopt new ways of interacting with AI, research and evidence-based policy will struggle to keep up. However, by ensuring that our approach to investigating the impact of AI on young people learns from the shortcomings of past research, we can more effectively regulate how AI is integrated into online platforms and how those platforms are used.

"We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way," said Professor Andrew Przybylski, OII Professor of Human Behaviour and Technology and contributing author of the paper. "Without building on past lessons, in ten years we could be back to square one, viewing the place of AI in much the same way we feel helpless about social media and smartphones. We have to take active steps now so that AI can be safe and beneficial for children and adolescents."

Source: University of Oxford. Journal reference: https://www.thelancet.com/journals/lanchi/article/PIIS2352-4642(24)00332-8/fulltext
[3]
Experts call for clear framework to study AI's impact on youth mental health
A new paper from experts at the Oxford Internet Institute, University of Oxford, highlights the need for a clear framework when it comes to AI research, given the rapid adoption of artificial intelligence by children and adolescents using digital devices to access the internet and social media. Its recommendations are based on a critical appraisal of current shortcomings in the research on how digital technologies impact young people's mental health, and an in-depth analysis of the challenges underlying those shortcomings.

The paper, "From Social Media to Artificial Intelligence: Improving Research on Digital Harms in Youth," published 21 January in The Lancet Child & Adolescent Health, calls for a "critical re-evaluation" of how we study the impact of internet-based technologies on young people's mental health, and outlines where future AI research can learn from the pitfalls of social media research. Existing limitations include inconsistent findings and a lack of longitudinal, causal studies.

The analysis and recommendations by the Oxford researchers are divided into four sections: a review of recent research on technology's effects on children's and adolescents' mental health; an analysis of the challenges underlying the limitations of that evidence; proposals for improved research methods applicable to AI; and concrete steps for collaboration between researchers, policymakers, big tech, caregivers and young people.

"Research on the effects of AI, as well as evidence for policymakers and advice for caregivers, must learn from the issues that have faced social media research," said Dr. Karen Mansfield, postdoctoral researcher at the OII and lead author of the paper. "Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media."

The paper describes how the impact of social media is often interpreted as one isolated causal factor, an approach that neglects different types of social media use as well as contextual factors that influence both technology use and mental health. Without rethinking this approach, future research on AI risks getting caught up in a new media panic, as happened with social media.
Other challenges include measures of social media use that quickly become outdated, and data that frequently excludes the most vulnerable young people. The authors propose that effective research on AI should ask questions that don't implicitly problematize AI, ensure causal designs, and prioritize the most relevant exposures and outcomes.

The paper concludes that as young people adopt new ways of interacting with AI, research and evidence-based policy will struggle to keep up. However, by ensuring that our approach to investigating the impact of AI on young people learns from the shortcomings of past research, we can more effectively regulate how AI is integrated into online platforms and how those platforms are used.

"We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way," said Professor Andrew Przybylski, OII Professor of Human Behavior and Technology and contributing author of the paper. "Without building on past lessons, in ten years we could be back to square one, viewing the place of AI in much the same way we feel helpless about social media and smartphones. We have to take active steps now so that AI can be safe and beneficial for children and adolescents."
Experts from the Oxford Internet Institute propose a critical re-evaluation of research methods to better understand how AI affects young people's mental health, drawing lessons from past shortcomings in social media studies.
A team of experts from the Oxford Internet Institute at the University of Oxford has published a paper calling for a critical re-evaluation of the research methods used to study the impact of internet-based technologies, particularly artificial intelligence (AI), on young people's mental health. The paper, titled "From Social Media to Artificial Intelligence: Improving Research on Digital Harms in Youth," was published in The Lancet Child & Adolescent Health on 21 January 2025 [1].
The researchers emphasize the importance of learning from the shortcomings of past social media research when studying AI's effects. Dr. Karen Mansfield, the lead author of the paper, states, "Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media" [2].
The paper highlights several limitations in existing research, including inconsistent findings; a lack of longitudinal, causal studies; measures of social media use that quickly become outdated; and data that frequently excludes the most vulnerable young people.
To address these challenges, the Oxford team proposes an approach to AI research that asks questions that do not implicitly problematize AI, ensures causal designs, and prioritizes the most relevant exposures and outcomes.
Professor Andrew Przybylski, a contributing author to the paper, emphasizes the need for proactive measures: "We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way" [3].
The paper's analysis and recommendations are divided into four key sections: a review of recent research on technology's effects on children's and adolescents' mental health; an analysis of the challenges underlying the limitations of that evidence; proposals for improved research methods applicable to AI; and concrete steps for collaboration between researchers, policymakers, big tech, caregivers and young people.
The researchers argue that by learning from past research shortcomings, we can more effectively regulate the integration of AI into online platforms and their usage. This approach aims to prevent a repeat of the "media panic" that occurred with social media and ensure that AI can be safe and beneficial for children and adolescents.
As AI continues to rapidly evolve and integrate into young people's lives, this call for a new research framework represents a crucial step towards understanding and mitigating potential negative impacts while harnessing the benefits of AI for youth mental health.
Summarized by Navi