2 Sources
[1]
AI 'wingman' app leaks 160,000 screenshots of private chats - here's what we know
It's hard to imagine a more mortifying scenario than your own private flirty chats being exposed online, except, perhaps, being caught sending those messages off for analysis by an AI app. Researchers at Cybernews have discovered a breach at "FlirtAI - Get Rizz & Dates" (yes, that is really what it's called), which has leaked over 160,000 chat screenshots from users through an unprotected cloud storage bucket. Users feed screenshots of their private conversations into the app to get tailored responses designed to help them flirt or escalate the conversation. Unsurprisingly, but worryingly nonetheless, the app seems to have been used primarily by teenagers.

Because of how the app works, those primarily at risk are not the users who sent the chats in, but the people they were talking to: presumably other teenagers who are completely unaware that their conversation has been leaked, and probably unaware that the app even exists. Whilst we've seen more dangerous personal data leaked by other AI chatbots, such as SSNs and financial information, the nature of this chatbot and its user base represents a different kind of harm. As an adult, I'm not sure how well I'd cope with my private chats being exposed online; for an already vulnerable teenager, this could be devastating.

"The fact that teenagers used this app may increase the severity of a potential data breach as data from minors is considered more sensitive, and could be subject to more restrictions regarding potential data uses and collection and processing practices," Cybernews researchers confirmed. The app does state that users are "only allowed to upload a screenshot when you have obtained the necessary approvals from all users/humans and their information mentioned in the screenshot". But, since following that rule would negate the point of the chatbot, it seems pretty unlikely that it is observed.
Those exposed in this breach could be at a heightened risk of social engineering attacks like phishing or, given that the app encourages users to share their target's dating profile, there could be a risk of impersonation attacks.
[2]
Flirty AI chatbot app leaks 160,000 DM screenshots
For years, some daters have used chatbots to flirt for them. Now, one of these "wingman" apps has leaked hundreds of thousands of messages. The makers of FlirtAI, which promotes itself as the "#1 AI Flirt Assistant Keyboard" on the App Store, have leaked 160,000 screenshots that users shared with the app, according to an investigation by Cybernews.

FlirtAI promises to help craft "charming, personalized, and instant" messages to dating app matches, its App Store description says. On the App Store, FlirtAI says it "works with every dating and chat app," and lists many of the most popular of each, including Tinder, Bumble, Hinge, WhatsApp, and Instagram. Users upload screenshots of their private dating app conversations, and FlirtAI generates responses. FlirtAI's privacy policy mentions this method: "When you use our service, we may collect certain information from you, including the prompts and text detected inside the uploaded screenshots." It also states that uploading screenshots implies that everyone involved has consented to the use of FlirtAI.

The Cybernews team found an unprotected Google Cloud Storage bucket owned by Buddy Network GmbH, the company behind FlirtAI, containing such screenshots of conversations and dating app profiles. After the team notified the company, it closed the exposed bucket. Cybernews said that the leaked data appeared to contain screenshots from a large number of teenage users.

According to initial research out of MIT, using ChatGPT to write for you can impair cognitive ability. And while FlirtAI isn't an AI companion, researchers at Common Sense Media say that AI companions aren't safe for teens, because teens can become emotionally dependent on them. One section of FlirtAI's privacy policy states that minors may not use the app, while another states that teens over 13 can use it with permission from a parent or guardian. Buddy Network GmbH has also created an app for talking to an "angel" AI and a "90-second AI journal" app.
Mashable has contacted the company.
An AI-powered dating advice app called FlirtAI has exposed over 160,000 private chat screenshots due to an unsecured cloud storage bucket, potentially compromising user privacy and raising concerns about data security in AI-assisted communication tools.
In a concerning development for AI-assisted communication tools, the dating advice app "FlirtAI" has been found to have leaked over 160,000 screenshots of private chats. Researchers at Cybernews discovered an unprotected Google Cloud Storage bucket owned by Buddy Network GmbH, the company behind FlirtAI, containing this sensitive user data [1].
FlirtAI, which markets itself as the "#1 AI Flirt Assistant Keyboard" on the App Store, is designed to help users craft "charming, personalized, and instant" messages for their dating app matches. The app encourages users to upload screenshots of their private conversations from various dating and chat platforms, including popular services like Tinder, Bumble, Hinge, WhatsApp, and Instagram [2].
The app's privacy policy states that by uploading screenshots, users imply that all parties involved have consented to the use of FlirtAI. However, this assumption seems unrealistic, as it would negate the purpose of using an AI assistant for private conversations. The policy also mentions that the app collects "prompts and text detected inside the uploaded screenshots" [2].
The data breach exposes users to several risks: those whose conversations were leaked could face social engineering attacks such as phishing, and, since the app encourages users to share their target's dating profile, impersonation attacks are also a concern [1].
Cybernews researchers noted that a large number of the leaked screenshots appeared to be from teenage users. This raises additional concerns, as data from minors is considered more sensitive and subject to stricter regulations regarding collection and processing practices [1].
FlirtAI's privacy policy contains contradictory information about age restrictions. One section states that minors may not use the app, while another allows teens over 13 to use it with parental permission [2]. This inconsistency adds to the concerns about the app's data handling practices and user protection measures.
This incident highlights the potential risks associated with AI-powered communication tools. Researchers at Common Sense Media have warned that AI companions may not be safe for teens, as they can lead to emotional dependency. Additionally, studies from MIT suggest that relying on AI for writing tasks could potentially impair cognitive abilities [2].
After being notified by the Cybernews team, Buddy Network GmbH closed the exposed storage bucket. However, the incident raises questions about the company's data security practices and the need for stricter regulations in the AI-assisted communication sector [1][2].
As AI continues to play a larger role in our daily communications, this leak serves as a stark reminder of the importance of robust data protection measures and the potential consequences of entrusting sensitive information to AI-powered applications.