4 Sources
[1]
Generation Alpha's coded language makes online bullying hard to detect
Adults and AI models fail to recognise messages with harmful intent expressed with Gen Alpha slang or memes, raising concerns about youngsters' online safety.

Generation Alpha's internet lingo is mutating faster than teachers, parents and AI models can keep up - potentially exposing youngsters to bullying and grooming that trusted adults and AI-based safety systems simply can't see.

Manisha Mehta, a 14-year-old student at Warren E Hyde Middle School in Cupertino, California, and Fausto Giunchiglia at the University of Trento, Italy, collated 100 expressions and phrases popular with Generation Alpha - those born between 2010 and 2025 - from popular gaming, social media and video platforms. The pair then asked 24 volunteers aged between 11 and 14, all Mehta's classmates, to analyse the phrases alongside context-specific screenshots. The volunteers explained whether they understood the phrases, in what context they were being used and whether that use carried any potential safety concerns or harmful interpretations. The researchers also asked parents, professional moderators and four AI models - GPT-4, Claude, Gemini and Llama 3 - to do the same.

"I've always been kind of fascinated by Gen Alpha language, because it's just so unique, the way things become relevant and lose relevancy so fast, and it's so rapid," says Mehta.

Among the Generation Alpha volunteers, 98 per cent understood the basic meaning of the terms, 96 per cent understood the context in which they were used and 92 per cent could detect when they were being deployed to cause harm. The AI models, by contrast, recognised harmful use in only around 4 in 10 cases - ranging from 32.5 per cent for Llama 3 to 42.3 per cent for Claude. Parents and professional moderators were no better, spotting only around a third of harmful uses. "I expected a bit more comprehension than we found," says Mehta. "It was mostly just guesswork on the parents' side."
The phrases commonly used by Generation Alpha included some that have double meanings depending on their context. "Let him cook" can be genuine praise in a gaming stream - or a mocking sneer implying someone is talking nonsense. "Kys", once shorthand for "know yourself", now reads as "kill yourself" to some. Another phrase that might mask abusive intent is "is it acoustic", used to ask mockingly if someone is autistic.

"Gen Alpha is very vulnerable online," says Mehta. "I think it's really critical that LLMs can at least understand what's being said, because AI is going to be more prevalent in the field of content moderation, more and more so in the future."

"It's very clear that LLMs are changing the world," says Giunchiglia. "This is really paradigmatic. I think there are fundamental questions that need to be asked."

The findings were presented this week at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency in Athens, Greece.

"Empirically, this work indicates what are likely to be big deficiencies in content moderation systems for analysing and protecting younger people in particular," says Michael Veale at University College London. "Companies and regulators will likely need to pay close attention and react to this to remain above the law in the growing number of jurisdictions with platform laws aimed at protecting younger people."
[2]
AI Models And Parents Don't Understand 'Let Him Cook'
LLMs are not familiar with "ate that up," "secure the bag," and "sigma," showing that training data is not yet updated to Gen Alpha terminology.

Young people have always felt misunderstood by their parents, but new research shows that Gen Alpha might also be misunderstood by AI. A research paper, written by Manisha Mehta, a soon-to-be 9th grader, and presented today at the ACM Conference on Fairness, Accountability, and Transparency in Athens, shows that Gen Alpha's distinct mix of meme- and gaming-influenced language might be challenging the automated moderation used by popular large language models.

The paper compares kid, parent, and professional moderator performance in content moderation to that of four major LLMs: OpenAI's GPT-4, Anthropic's Claude, Google's Gemini, and Meta's Llama 3. The researchers tested how well each group and AI model understood Gen Alpha phrases, as well as how well they could recognize the context of comments and analyze the potential safety risks involved.

Mehta, who will be starting 9th grade in the fall, recruited 24 of her friends to create a dataset of 100 "Gen Alpha" phrases. This included expressions that might be mocking or encouraging depending on the context, like "let him cook" and "ate that up," as well as expressions from gaming and social media contexts like "got ratioed," "secure the bag," and "sigma."

"Our main thesis was that Gen Alpha has no reliable form of content moderation online," Mehta told me over Zoom, using her dad's laptop. She described herself as a definite Gen Alpha, and she met her (adult) co-author, who supervises her dad's PhD, last August. She has seen friends experience online harassment and worries that parents aren't aware of how young people's communication styles open them up to risks. "And there's a hesitancy to ask for help from their guardians because they just don't think their parents are familiar enough [with] that culture," she says.
Given the Gen Alpha phrases, "all non-Gen Alpha evaluators -- human and AI -- struggled significantly" in the categories of "Basic Understanding" (what does a phrase mean?), "Contextual Understanding" (does it mean something different in different contexts?), and "Safety Risk" (is it toxic?). This was particularly true for "emerging expressions" like skibidi and gyatt, for phrases that can be used ironically or in different ways, and for insults hidden in innocent comments. Part of this is due to the unusually rapid speed of Gen Alpha's language evolution; a model trained on today's hippest lingo might be totally bogus by the time it's published six months later.

In the tests, kids broadly recognized the meaning of their own generation-native phrases, scoring 98, 96, and 92 percent in the three categories. However, both parents and professional moderators "showed significant limitations," according to the paper; parents scored 68, 42, and 35 percent in those categories, while professional moderators did barely any better, with 72, 45, and 38 percent. In real terms, these numbers mean a parent might recognize only about a third of the times their child is being bullied in their Instagram comments.

The four LLMs performed about the same as the parents, potentially indicating that the data used to train the models is drawn from more "grown-up" language examples. This makes sense, since pretty much all novelists are older than 15, but it also means that content-moderation AIs tasked with maintaining young people's online safety might not be linguistically equipped for the job.

Mehta explains that Gen Alpha, born between 2010-ish and last-year-ish, are the first cohort to be born fully post-iPhone. They are spending unprecedented amounts of their early childhoods online, where their interactions can't be effectively monitored.
And, due to the massive volumes of content they produce, a lot of the moderation of the risks they face is necessarily being handed to ineffective automatic moderation tools with little parental oversight. Against a backdrop of steadily increasing exposure to online content, Gen Alpha's unique linguistic habits pose unique challenges for safety.
[3]
AI models don't understand Gen Alpha slang, study reveals
The study goes into detail about the complicating factors of Gen Alpha slang, which is often born out of online spaces, most notably gaming. One phrase can mean totally different things. For instance, the researchers used the example of "Fr fr let him cook" -- that's someone supporting another person -- and "Let him cook lmaoo," which is mocking. Such subtle differences in language can be difficult to trace, especially since young folks often use coded language to hide their true meaning.

And apparently LLMs struggle with it. In particular, the researchers noted, the models struggled to identify "masked harassment," which would be troubling for AI-powered moderation systems. "The findings highlight an urgent need for improved AI safety systems to better protect young users, especially given Gen Alpha's tendency to avoid seeking help due to perceived adult incomprehension of their digital world," the study read.

To be fair to AI models, understanding young folks' slang -- especially that of Gen Alpha, which has grown up in digital spaces -- is difficult for humans, too. The study also looked at parents' understanding of the slang, and that group came in at 68 percent for basic understanding -- about the same mark as the top-performing LLM, Claude. LLMs did seem to have a slight edge over parents in identifying context and safety risks in the language, though all parties performed pretty poorly. Only Gen Alpha itself was reliable at understanding the slang, its context, and potential risks.

The TL;DR of the study seems to be that AI can't reliably understand Gen Alpha, and that could result in poor content moderation. That perhaps tracks, since other studies have shown that AI struggles with complex comprehension. "This research provides the first systematic evaluation of how AI safety systems interpret Gen Alpha's unique digital communication patterns," the study's conclusion read.
"By incorporating Gen Alpha users directly in the research process, we've quantified critical comprehension gaps between these young users and their protectors -- both human and AI."
[4]
Gen Alpha slang baffles parents -- and AI
If a Gen Alpha tween said, "Let him cook," would you know what that meant? No? AI doesn't either. A research paper written by soon-to-be ninth grader Manisha Mehta was presented this week at the ACM Conference on Fairness, Accountability, and Transparency in Athens. The paper details how four leading AI models -- GPT-4, Claude, Gemini, and Llama 3 -- all struggled to fully understand slang from Gen Alpha, defined as those born between 2010 and 2024. Mehta, along with 24 of her friends (ranging in age from 11 to 14), created a dataset of 100 Gen Alpha phrases. These included expressions that can mean totally different things depending on context -- for example: "Fr fr let him cook" (encouraging) and "Let him cook lmaoo" (mocking). According to the researchers, the LLMs had trouble discerning the difference. In particular, AI struggled with identifying "masked harassment," which is concerning given the increasing reliance on AI-powered content moderation systems.
A study reveals that AI models and adults fail to comprehend Generation Alpha's rapidly evolving online slang, potentially exposing young users to undetected bullying and harassment.
A groundbreaking study, presented at the ACM Conference on Fairness, Accountability, and Transparency in Athens, has revealed a significant gap in understanding between Generation Alpha's online language and both artificial intelligence models and adults. This research, spearheaded by 14-year-old Manisha Mehta and Fausto Giunchiglia from the University of Trento, Italy, highlights the potential risks faced by young internet users due to this linguistic disconnect 1.
Source: New Scientist
Mehta and her team collected 100 expressions popular among Generation Alpha - those born between 2010 and 2025 - from gaming, social media, and video platforms. They then enlisted 24 volunteers aged 11-14 to analyze these phrases, alongside parents, professional moderators, and four prominent AI models: GPT-4, Claude, Gemini, and Llama 3 2.
The results were striking: the Gen Alpha volunteers understood the basic meaning of 98% of the terms, recognized the context of 96%, and detected harmful use in 92% of cases.
In contrast, AI models recognized harmful use in only 32.5% (Llama 3) to 42.3% (Claude) of cases. Parents and professional moderators fared no better, identifying harmful usage in merely a third of instances 1.
Source: Mashable
The study uncovered the intricate nature of Generation Alpha's online communication. Many phrases have double meanings depending on context, making them particularly challenging for AI and adults to interpret correctly. For example, "let him cook" can be genuine praise or a mocking sneer, "kys" now reads to some as "kill yourself", and "is it acoustic" is used to ask mockingly whether someone is autistic.
This linguistic complexity, combined with the rapid evolution of Gen Alpha's language, poses significant challenges for content moderation and online safety measures 3.
The research highlights a critical gap in protecting young users online. With AI increasingly used in content moderation, its inability to comprehend Gen Alpha's language could leave youngsters vulnerable to undetected bullying and harassment 2.
Michael Veale from University College London emphasizes the importance of these findings: "Companies and regulators will likely need to pay close attention and react to this to remain above the law in the growing number of jurisdictions with platform laws aimed at protecting younger people" 1.
As the first cohort born fully post-iPhone, Generation Alpha spends unprecedented amounts of their early childhoods online. Their interactions often occur in spaces where effective monitoring is challenging, and the sheer volume of content they produce necessitates the use of automatic moderation tools 2.
The study's conclusion emphasizes the urgent need for improved AI safety systems to better protect young users, especially given Gen Alpha's tendency to avoid seeking help due to perceived adult incomprehension of their digital world 3.
Source: Fast Company
This research provides the first systematic evaluation of how AI safety systems interpret Gen Alpha's unique digital communication patterns. By incorporating Gen Alpha users directly in the research process, critical comprehension gaps between these young users and their protectors - both human and AI - have been quantified 4.
As AI continues to play a crucial role in content moderation and online safety, addressing these linguistic challenges will be essential to ensure the protection of young internet users in an ever-evolving digital landscape.
Summarized by Navi