2 Sources
[1]
AI 'wingman' app leaks 160,000 screenshots of private chats - here's what we know
It's hard to imagine a more mortifyingly embarrassing scenario than your own private flirty chats being exposed online, except, perhaps, being caught sending these messages off for analysis by an AI app. Researchers at Cybernews have discovered a breach at "FlirtAI - Get Rizz & Dates" (yes, that is really what it's called) which has leaked over 160,000 chat screenshots from users through an unprotected cloud storage bucket.

Users of this app feed screenshots of their private conversations into the application to get tailored responses designed to help the user flirt or escalate the conversation. Unsurprisingly, but worryingly nonetheless, this app seems to have been primarily used by teenagers. Because of the configuration of the app, those primarily at risk are not those who have sent the chats in, but the person they're talking to - presumably other teenagers who are completely unaware that their conversation has been leaked, and probably unaware that this app even exists.

Whilst we've seen more dangerous personal data leaked by other AI chatbots like SSNs and financial information, the nature of this chatbot and its user base represents a different kind of harm. As an adult, I'm not sure how well I'd cope with my private chats being exposed online, so for an already vulnerable teenager this could be devastating.

"The fact that teenagers used this app may increase the severity of a potential data breach as data from minors is considered more sensitive, and could be subject to more restrictions regarding potential data uses and collection and processing practices," Cybernews researchers confirmed.

The app does state that users are "only allowed to upload a screenshot when you have obtained the necessary approvals from all users/humans and their information mentioned in the screenshot". But, since this would negate the point of the chatbot, it seems pretty unlikely that this is followed.
Those exposed in this breach could be at a heightened risk of social engineering attacks like phishing or, given that the app encourages users to share their target's dating profile, there could be a risk of impersonation attacks.
[2]
Flirty AI chatbot app leaks 160,000 DM screenshots
For years, some daters have used chatbots to flirt for them. Now, one of these "wingman" apps has leaked hundreds of thousands of messages. The makers of FlirtAI, which promotes itself as the "#1 AI Flirt Assistant Keyboard" on the App Store, have leaked 160,000 screenshots that users have shared with the app, according to an investigation by Cybernews.

FlirtAI promises to help craft "charming, personalized, and instant" messages to dating app matches, its App Store description says. On the App Store, FlirtAI says it "works with every dating and chat app," and lists many of the most popular of each, including Tinder, Bumble, Hinge, WhatsApp, and Instagram. Users upload screenshots of their private dating app conversations, and FlirtAI generates responses.

FlirtAI's privacy policy mentions this method: "When you use our service, we may collect certain information from you, including the prompts and text detected inside the uploaded screenshots." It also states that uploading screenshots implies that everyone involved has consented to the use of FlirtAI.

The Cybernews team found an unprotected Google Cloud Storage bucket owned by Buddy Network GmbH, the company behind FlirtAI, containing such screenshots of conversations and dating app profiles. After the team notified the company, it closed the exposed bucket. Cybernews said that the leaked data appeared to contain screenshots from a large number of teenage users.

According to initial research out of MIT, using ChatGPT to write for you can impair cognitive ability. And while FlirtAI isn't an AI companion, researchers at Common Sense Media say that AI companions aren't safe for teens, because they can become emotionally dependent on them. One section of FlirtAI's privacy policy states that minors may not use the app, while another states that teens over 13 can use it with permission from a parent or guardian. Buddy Network GmbH has also created an app to talk to an "angel" AI and a "90-second AI journal" app.
Mashable has contacted the company.
An AI-powered dating advice app called FlirtAI exposed over 160,000 private chat screenshots through an unsecured cloud storage bucket, compromising the privacy of users and their unwitting chat partners and raising concerns about data security in AI-assisted communication tools.
In a concerning development for AI-assisted communication tools, the dating advice app "FlirtAI" has been found to have leaked over 160,000 screenshots of private chats. Researchers at Cybernews discovered an unprotected Google Cloud Storage bucket owned by Buddy Network GmbH, the company behind FlirtAI, containing this sensitive user data [1].
FlirtAI, which markets itself as the "#1 AI Flirt Assistant Keyboard" on the App Store, is designed to help users craft "charming, personalized, and instant" messages for their dating app matches. The app encourages users to upload screenshots of their private conversations from various dating and chat platforms, including popular services like Tinder, Bumble, Hinge, WhatsApp, and Instagram [2].
The app's privacy policy states that by uploading screenshots, users confirm that all parties involved have consented to the use of FlirtAI. However, this assumption seems unrealistic, as obtaining such consent would negate the purpose of using an AI assistant for private conversations. The policy also mentions that the app collects "prompts and text detected inside the uploaded screenshots" [2].
The data breach exposes users to various risks. According to TechRadar, those affected could face social engineering attacks such as phishing, and, because the app encourages users to share their target's dating profile, impersonation attacks as well [1].
Cybernews researchers noted that a large number of the leaked screenshots appeared to be from teenage users. This raises additional concerns, as data from minors is considered more sensitive and subject to stricter regulations regarding collection and processing practices [1].
FlirtAI's privacy policy contains contradictory information about age restrictions. One section states that minors may not use the app, while another allows teens over 13 to use it with parental permission [2]. This inconsistency adds to the concerns about the app's data handling practices and user protection measures.
This incident highlights the potential risks associated with AI-powered communication tools. Researchers at Common Sense Media have warned that AI companions may not be safe for teens, as they can lead to emotional dependency. Additionally, initial research from MIT suggests that relying on AI for writing tasks could impair cognitive abilities [2].
After being notified by the Cybernews team, Buddy Network GmbH closed the exposed storage bucket. However, the incident raises questions about the company's data security practices and the need for stricter regulations in the AI-assisted communication sector [1][2].
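The misconfiguration at the heart of this incident is worth spelling out: a Google Cloud Storage bucket whose objects are publicly readable will answer unauthenticated requests to the standard JSON API with a full object listing. The sketch below illustrates how such exposure can be checked; the bucket name and helper functions are hypothetical illustrations, not details from the Cybernews report.

```python
import urllib.parse

def gcs_list_url(bucket: str, max_results: int = 10) -> str:
    """Build the unauthenticated JSON API URL that lists a bucket's objects.

    This is Google Cloud Storage's public objects.list endpoint; no
    credentials are attached to the request.
    """
    query = urllib.parse.urlencode({"maxResults": max_results})
    return f"https://storage.googleapis.com/storage/v1/b/{bucket}/o?{query}"

def is_publicly_listable(status_code: int) -> bool:
    """An HTTP 200 on an anonymous listing request means anyone on the
    internet can enumerate (and typically download) the bucket's objects;
    401/403 means access control is enforced."""
    return status_code == 200

# "example-bucket" is a placeholder. An anonymous GET against
# gcs_list_url("example-bucket") that returns HTTP 200 is precisely the
# kind of open-bucket misconfiguration described in this incident.
```

Locking such a bucket down is a configuration change (disabling public access and requiring IAM-authenticated requests), which is consistent with how quickly the company was able to close the exposure after being notified.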
As AI continues to play a larger role in our daily communications, this leak serves as a stark reminder of the importance of robust data protection measures and the potential consequences of entrusting sensitive information to AI-powered applications.