Neon App Promises Return After Privacy Breach, Raising Concerns About AI Training Data


The Neon app, which pays users for their phone call recordings to sell to AI companies, plans to return after a major security flaw forced it offline. The incident has sparked debates about privacy, data security, and the ethics of AI training data collection.


Neon App's Rapid Rise and Fall

Neon, a controversial app that pays users for their phone call recordings to sell to AI companies for training data, has pledged to return following a significant privacy breach. The app quickly gained popularity, reaching the second spot among social apps and sixth overall in the App Store [2]. However, its meteoric rise was abruptly halted when a major security flaw was discovered, forcing the app to go offline [1][2].

The Security Breach

The privacy breach, reported by TechCrunch, allowed anyone to access other users' phone numbers, call recordings, and transcripts [1][2]. This vulnerability exposed not only the personal data of Neon users but also potentially compromised the privacy of non-users who were unknowingly recorded during calls. The severity of the breach led to the immediate suspension of the app's services [1].

Neon's Business Model and Ethical Concerns

Neon's business model revolves around paying users for their call recordings, which are then sold to AI companies as training data. The app offers up to $30 per day for recordings, with rates varying depending on whether both parties are Neon users [1]. This approach has raised significant ethical concerns, particularly regarding consent and privacy [2].

The app claims to record only the user's side of the call unless both parties are using Neon. However, cybersecurity experts have questioned this claim, suggesting that the app might record both sides of the conversation and later remove the non-user's audio from the transcript [2].

Legal and Ethical Implications

Lawyers have warned that Neon's practices may not comply with laws in states requiring two-party consent for audio recordings. Users could potentially face both criminal and civil liability for using the app [2]. Additionally, reports suggest that some users were attempting to game the system by secretly recording real-world conversations without consent, further complicating the ethical landscape [2].

Neon's Response and Future Plans

Neon founder Alex Kiam has apologized for the incident and promised to enhance security measures before relaunching the app. In an email to users, Kiam assured that their earnings were safe and would be paid out upon the app's return, along with a bonus for their patience [1][2]. Despite the controversy, Neon seems determined to make a comeback, raising questions about the future of data collection for AI training and the balance between innovation and privacy protection.

Broader Implications for AI and Privacy

The Neon controversy highlights the growing tension between the demand for diverse data to train AI systems and the need to protect individual privacy. As AI companies seek ever-larger datasets to improve their models, apps like Neon present a tempting source of real-world conversational data. However, the incident underscores the potential risks and ethical challenges associated with such data collection methods, prompting a broader discussion about responsible AI development and data privacy in the digital age.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited