Curated by THEOUTPOST
On Fri, 11 Apr, 12:06 AM UTC
2 Sources
[1]
Meta took advantage of teens' emotions: Whistleblower
During a Senate hearing, former Meta director Sarah Wynn-Williams disclosed that the company targeted teenagers with ads based on their emotional states, exploiting feelings of insecurity for profit. Meta denied these claims as false. Wynn-Williams also accused Meta of undermining US security by aiding China in AI advancements.

Testifying before US senators on Wednesday, Sarah Wynn-Williams, the former director of Global Public Policy for Facebook, revealed that the company targeted teenagers with ads based on their emotional state. Responding to a question from Senator Marsha Blackburn, Wynn-Williams confirmed that Meta, formerly Facebook, targeted 13- to 17-year-olds with ads when they were feeling low or depressed, recognising them as a "very valuable" but vulnerable group for advertisers. "It could identify when they were feeling worthless or helpless or like a failure, and [Meta] would take that information and share it with advertisers," she said, according to a report by TechCrunch.

Wynn-Williams explained that advertisers targeted teens when they were feeling low because they were more likely to make purchases in that state. As an example, she said that if a teen girl deleted a selfie, beauty ads would be displayed, on the assumption that she felt insecure about her appearance. Similarly, teens with body image issues were targeted with weight-loss ads. Adults were not exempt from this practice: a screenshot of a document presented during the hearing revealed that Facebook also researched the emotional states of young mothers for targeted ads.

Wynn-Williams recalled suggesting to a Meta executive that a trillion-dollar company shouldn't need to exploit vulnerable teens for financial gain. Meta, however, denied Wynn-Williams' claims, describing them as "divorced from reality and riddled with false claims". Although the emotional targeting issue was discussed, the majority of the hearing focused on Meta's ties with China.
Wynn-Williams, who left the company in 2017, accused Meta of undermining US national security and aiding China in advancing its AI capabilities. She alleged that since 2015, the company had been briefing the Chinese Communist Party on emerging technologies such as artificial intelligence, thereby helping China compete with American companies.
[2]
Meta Used Teen Emotions to Fine-Tune Ad Targeting: Ex-Executive
Meta didn't just collect data from teens; it watched how they felt and used those feelings to sell them things. According to a transcript published by Tech Policy Press, former Meta executive Sarah Wynn-Williams, who served as the company's Director of Global Public Policy, told the U.S. Senate Judiciary Committee on April 9 that Meta tracked teenagers' emotional states and used that information to fine-tune ad targeting. Meta's systems knew when teens felt insecure or anxious and promptly served them ads for beauty products or weight-loss services.

She first detailed these practices in her book Careless People, which paints a picture of a company that understood the influence it had and chose to ignore the consequences. According to Wynn-Williams, executives like Mark Zuckerberg and Sheryl Sandberg downplayed the risks of emotional manipulation, even as internal teams built systems that turned vulnerability into engagement.

Although the hearing focused on Meta's business in China, senators also returned to a familiar question: what is Meta doing to protect kids on its platforms? Instagram had already come under fire in 2021 over its mental health impact on teenagers, and Wynn-Williams' testimony gave those concerns new fuel.

Meta didn't need users to fill out a mood survey; it already had the signals. According to Wynn-Williams, the platform tracked things like emojis, comment tone, scrolling patterns, and language use to guess how a teen might be feeling. When the system flagged signs of sadness, stress, or low self-worth, it adjusted what kinds of ads the user saw. And it didn't stop there.
Advertisers were encouraged to lean into that emotional space and design campaigns that spoke directly to what teens were worried about, hoping for, or comparing themselves to. These ads weren't aggressive or obvious. They were casual, friendly, and easy to relate to, blending in with memes, DMs, and Stories. That made them even more effective, because Meta knew that teenagers are still figuring themselves out and are emotionally and psychologically vulnerable.

Wynn-Williams described a feedback loop that reinforced itself: emotion-triggered ads prompted more emotional reactions, which gave the system more data to work with. This kind of ad targeting, she said, wasn't a bug; it was how the system was designed. Meta has said it restricts how advertisers can target teens and does not allow emotional categories to be selected. But the problem isn't what advertisers choose. It's what Meta's algorithms infer, and how that information is used.

Globally, regulators are adapting their child-safety laws to the changing dynamics of digital platforms. The UK and the EU already have strict regulations to protect children: the UK's Age-Appropriate Design Code puts limits on how platforms can track and target minors, while the EU's Digital Services Act brings in broader accountability for algorithmic systems. These frameworks are part of a growing global push for more transparency and restraint in how tech platforms engage with younger users.

India, however, has not yet reached that point. The Digital Personal Data Protection Act allows children's data to be processed with parental consent, but it does not address how platforms may infer emotional states from that data, or the extent to which they may use behavioural signals to target children. It leaves several key issues unanswered, including profiling, algorithmic transparency, and protection from subtler forms of manipulation.
The Act also defines a child as anyone under 18, deviating from global standards, which could further complicate enforcement. The Digital India Act could provide an opportunity to close those gaps and set clearer boundaries on how platforms interact with and protect underage users, especially where technologies actively track and respond to children's emotional states.

Meta says it is taking steps to make its platforms safer for teens. Earlier this week, the company introduced new restrictions on who can contact teen accounts and on the content teen users see on Facebook and Messenger, along with safer defaults and tools for parents. But these surface-level changes don't touch the core issue. They don't address the algorithms that shape what teens see or the profiling systems that track how they feel, which are the very systems Wynn-Williams called out in her testimony.

Wynn-Williams didn't just point fingers. She showed how deeply this kind of targeting runs through Meta's ad model: teen emotions weren't just data points, they were part of the profit equation. If this is the direction digital advertising is heading, lawmakers need to decide where the limits are, and soon.
Former Meta executive Sarah Wynn-Williams testifies that the company targeted ads at teenagers based on their emotional states, raising concerns about data privacy and ethical advertising practices.
Former Meta executive Sarah Wynn-Williams has accused the tech giant of exploiting teenagers' emotions for targeted advertising. In a recent U.S. Senate Judiciary Committee hearing, Wynn-Williams revealed that Meta, formerly known as Facebook, tracked and utilized teens' emotional states to fine-tune ad targeting 1.
According to Wynn-Williams, Meta's systems could identify when teens aged 13-17 were feeling "worthless or helpless or like a failure" and shared this information with advertisers 1. The company allegedly targeted teens when they were feeling low, assuming they were more likely to make purchases in this state. For instance:
- If a teen girl deleted a selfie, beauty ads would be displayed, on the assumption that she felt insecure about her appearance.
- Teens with body image issues were targeted with weight-loss ads.
Meta's sophisticated algorithms reportedly tracked various signals, including emojis, comment tone, scrolling patterns, and language use, to infer users' emotional states without explicit surveys 2.
The revelations extend beyond teen targeting. A document presented during the hearing showed that Facebook also researched the emotional state of young mothers for targeted advertising 1. This practice raises significant concerns about data privacy, ethical advertising, and the potential manipulation of vulnerable groups.
Wynn-Williams described the system as a self-reinforcing feedback loop, where emotion-triggered ads prompted more emotional reactions, providing the system with additional data 2. She emphasized that this targeting method was not a bug but an intentional design feature of Meta's advertising model.
Meta has vehemently denied these claims, describing them as "divorced from reality and riddled with false claims" 1. The company states it restricts how advertisers can target teens and does not allow emotional categories to be selected for ad targeting 2.
Globally, regulators are adapting child safety laws to address these concerns:
- The UK's Age-Appropriate Design Code limits how platforms can track and target minors.
- The EU's Digital Services Act brings broader accountability for algorithmic systems.
- India's Digital Personal Data Protection Act requires parental consent to process children's data, but does not address emotional inference or behavioural targeting 2.
While the emotional targeting issue dominated discussions, Wynn-Williams also accused Meta of undermining U.S. national security by aiding China in advancing its AI capabilities. She alleged that since 2015, the company had been briefing the Chinese Communist Party on emerging technologies 1.
As digital advertising evolves, lawmakers face the challenge of defining limits and regulations. The testimony highlights the need for comprehensive legislation addressing algorithmic transparency, profiling, and protection from subtle forms of manipulation, especially concerning underage users 2.