Curated by THEOUTPOST
On Fri, 6 Dec, 4:02 PM UTC
2 Sources
[1]
Google's New AI Models Can Gauge How You Feel, But Experts Warn Of Bias, Abuse, And Dystopian Risks: 'We Cannot Infer Emotions From Facial Features Alone' - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
Alphabet Inc.'s (NASDAQ: GOOG, GOOGL) Google has introduced its PaliGemma 2 AI models, which can reportedly detect emotions from images.

What Happened: In a blog post on Thursday, Google announced PaliGemma 2, an AI model family that analyzes images to generate captions and answer questions about people in photos. "PaliGemma 2 generates detailed, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene," the blog post read. The tech giant says it has conducted extensive testing to assess demographic biases in PaliGemma 2, but it has not fully disclosed the benchmarks used.

Several experts, however, have expressed concerns about the implications of this technology. Sandra Wachter, a professor at the Oxford Internet Institute, told TechCrunch that assuming emotions can be read from facial features is problematic, likening it to seeking advice from a "Magic 8 Ball." Emotion detection has long been a goal for many tech companies, but the science remains uncertain. Mike Cook, a research fellow at Queen Mary University, asserted that emotion detection is complex and cannot be fully solved. Heidy Khlaaf, chief AI scientist at the AI Now Institute, a nonprofit, said, "AI aside, research has shown that we cannot infer emotions from facial features alone."

Why It Matters: The introduction of Google's PaliGemma 2 models comes amid growing scrutiny of the company's AI technologies. Last month, Google's Gemini chatbot faced backlash after a user reported a hostile interaction. That incident followed another in which a chatbot allegedly influenced a teenager's tragic decision, underscoring the need for careful oversight.

Google's AI offerings nonetheless continue to expand. Earlier this month, the search and advertising giant launched the Veo video generator on its Cloud platform, a tool that lets companies like Quora and Mondelez International create content. In October, Alphabet reported a strong third-quarter performance, with a 15% increase in revenue.

Price Action: At the time of writing, Alphabet's Class A stock was down 0.18% in after-hours trading at $172.33, while Class C stock slipped 0.25% to $173.88. In Thursday's regular session, Class A shares dropped 0.99% to close at $172.64, and Class C shares fell 1.01% to $174.31, per Benzinga Pro data.
[2]
Google says its new AI models can identify emotions -- and that has experts worried | TechCrunch
Google says its new AI model family has a curious feature: the ability to "identify" emotions. Announced on Thursday, the PaliGemma 2 family of models can analyze images, enabling the AI to generate captions and answer questions about people it "sees" in photos.

"PaliGemma 2 generates detailed, contextually relevant captions for images," Google wrote in a blog post shared with TechCrunch, "going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene."

Emotion recognition doesn't work out of the box; PaliGemma 2 has to be fine-tuned for the purpose. Nonetheless, experts TechCrunch spoke with were alarmed at the prospect of an openly available emotion detector.

"This is very troubling to me," Sandra Wachter, a professor in data ethics and AI at the Oxford Internet Institute, told TechCrunch. "I find it problematic to assume that we can 'read' people's emotions. It's like asking a Magic 8 Ball for advice."

For years, startups and tech giants alike have tried to build AI that can detect emotions for everything from sales training to preventing accidents. Some claim to have attained it, but the science stands on shaky empirical ground.

The majority of emotion detectors take cues from the early work of Paul Ekman, a psychologist who theorized that humans share six fundamental emotions in common: anger, surprise, disgust, enjoyment, fear, and sadness. Subsequent studies cast doubt on Ekman's hypothesis, however, demonstrating major differences in the way people from different backgrounds express how they're feeling.

"Emotion detection isn't possible in the general case, because people experience emotion in complex ways," Mike Cook, a research fellow at Queen Mary University specializing in AI, told TechCrunch. "Of course, we do think we can tell what other people are feeling by looking at them, and lots of people over the years have tried, too, like spy agencies or marketing companies. I'm sure it's absolutely possible to detect some generic signifiers in some cases, but it's not something we can ever fully 'solve.'"

The unsurprising consequence is that emotion-detecting systems tend to be unreliable and biased by the assumptions of their designers. In a 2020 MIT study, researchers showed that face-analyzing models can develop unintended preferences for certain expressions, like smiling. More recent work suggests that emotional analysis models assign more negative emotions to Black people's faces than to white people's faces.

Google says it conducted "extensive testing" to evaluate demographic biases in PaliGemma 2, and found "low levels of toxicity and profanity" compared to industry benchmarks. But the company didn't provide the full list of benchmarks it used, nor did it indicate which types of tests were performed. The only benchmark Google has disclosed is FairFace, a set of tens of thousands of headshots. The company claims that PaliGemma 2 scored well on FairFace, but some researchers have criticized the benchmark as a bias metric, noting that it represents only a handful of race groups.

"Interpreting emotions is quite a subjective matter that extends beyond use of visual aids, and is heavily embedded within a personal and cultural context," said Heidy Khlaaf, chief AI scientist at the AI Now Institute, a nonprofit that studies the societal implications of artificial intelligence. "AI aside, research has shown that we cannot infer emotions from facial features alone."
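For readers curious what the out-of-the-box captioning workflow described above looks like in practice, here is a minimal sketch using the Hugging Face transformers integration for PaliGemma-style models. The checkpoint id, task prefix, and image path are illustrative assumptions, not details taken from Google's announcement.

```python
# Minimal captioning sketch for a PaliGemma-style vision-language model,
# assuming the Hugging Face `transformers` integration. The checkpoint id,
# task prefix, and image path below are illustrative assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-448"  # assumed checkpoint id

processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("photo.jpg").convert("RGB")  # any local image
prompt = "<image>caption en"  # PaliGemma-style task prefix

inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
caption = processor.decode(generated[0][input_len:], skip_special_tokens=True)
print(caption)
```

As the article notes, a checkpoint like this only produces generic captions and answers out of the box; labeling emotions would require a separate fine-tuning step on annotated data.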
Emotion detection systems have raised the ire of regulators overseas, who have sought to limit the use of the technology in high-risk contexts. The AI Act, the major piece of AI legislation in the EU, prohibits schools and employers from deploying emotion detectors (but not law enforcement agencies).

The biggest apprehension around open models like PaliGemma 2, which is available from a number of hosts including the AI dev platform Hugging Face, is that they'll be abused or misused, which could lead to real-world harm.

"If this so-called 'emotional identification' is built on pseudoscientific presumptions, there are significant implications in how this capability may be used to further -- and falsely -- discriminate against marginalized groups such as in law enforcement, human resourcing, border governance, and so on," Khlaaf said.

Asked about the dangers of publicly releasing PaliGemma 2, a Google spokesperson said the company stands behind its tests for "representational harms" as they relate to visual question answering and captioning. "We conducted robust evaluations of PaliGemma 2 models concerning ethics and safety, including child safety, content safety," they added.

Wachter isn't convinced that's enough. "Responsible innovation means that you think about the consequences from the first day you step into your lab and continue to do so throughout the lifecycle of a product," she said. "I can think of myriad potential issues [with models like this] that can lead to a dystopian future, where your emotions determine if you get the job, a loan, and if you're admitted to uni."
Google's new PaliGemma 2 AI models, capable of analyzing images and potentially detecting emotions, have raised concerns among experts about bias, abuse, and ethical implications.
Google has introduced its latest AI model family, PaliGemma 2, which the company says can analyze images, generate captions, and potentially identify emotions in photos. The tech giant announced the development in a blog post, highlighting the model's ability to go beyond simple object identification to describe actions, emotions, and overall scene narratives [1].
The announcement has sparked debate among AI experts and ethicists, who warn of the risks of emotion detection technology. Sandra Wachter, a professor at the Oxford Internet Institute, likened the concept to "asking a Magic 8 Ball for advice," emphasizing how problematic it is to assume emotions can be accurately read from facial features [2].
Mike Cook, a research fellow at Queen Mary University, pointed out the complexity of emotion detection, stating that it's not something that can be fully "solved" due to the intricate ways people experience emotions [2].
The scientific basis for emotion detection through AI remains uncertain. Heidy Khlaaf, chief AI scientist at the AI Now Institute, emphasized that research has shown emotions cannot be inferred from facial features alone [1]. This uncertainty raises concerns about the potential for bias and misuse of the technology.
Google claims to have conducted extensive testing to evaluate demographic biases in PaliGemma 2, reporting "low levels of toxicity and profanity" compared to industry benchmarks. However, the company has not fully disclosed the details of these benchmarks or the types of tests performed [2].
While Google showcases the diverse applications of AI, including the recent launch of the Veo video generator on its Cloud platform [1], experts worry about the potential misuse of emotion detection technology. Concerns range from employment discrimination to invasive marketing practices and law enforcement applications [2].
The development of emotion detection AI has caught the attention of regulators, particularly in the EU. The EU's AI Act prohibits the use of emotion detectors in high-risk contexts such as schools and workplaces, although law enforcement agencies are exempt from these restrictions [2].
In response to these concerns, a Google spokesperson said the company stands behind its tests for "representational harms" related to visual question answering and captioning, adding that it conducted robust evaluations covering ethics and safety, including child safety and content safety [2].
As AI continues to advance, the debate surrounding emotion detection technology highlights the need for careful consideration of ethical implications, potential biases, and the responsible development and deployment of AI models in society.