AI Apps Data Leak Exposes 2 Million User Files as Android Users Face Widespread Privacy Risks

Reviewed by Nidhi Govil

Misconfigured Google Cloud storage in popular AI apps has exposed nearly 2 million private user images and videos from Android users. Cybernews researchers discovered that Video AI Art Generator & Maker and other unsecured AI applications leaked billions of records, with 72% of analyzed apps showing similar critical vulnerabilities that put personal data at risk.

AI Apps Expose Millions of User Files Through Misconfigured Google Cloud Storage

A significant data leak affecting Android users has emerged from unsecured AI applications available on the Google Play store, exposing nearly 2 million private photos and videos. Cybernews researchers discovered that the "Video AI Art Generator & Maker" app contained a misconfigured Google Cloud storage bucket accessible to anyone who knew where to look, leaking personal data on an alarming scale [1].
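A bucket "accessible to anyone who knew where to look" typically means one whose IAM policy grants read access to the special `allUsers` principal, so an unauthenticated HTTP request succeeds. As a minimal sketch (the bucket name below is hypothetical, not the one from the leak), this is the unauthenticated object-listing URL an outsider could probe against the Cloud Storage JSON API:

```python
# Sketch: building the unauthenticated object-listing URL for a
# Google Cloud Storage bucket. The bucket name is hypothetical.
from urllib.parse import quote

GCS_ENDPOINT = "https://storage.googleapis.com/storage/v1/b/{bucket}/o"

def anonymous_listing_url(bucket: str) -> str:
    """Build the object-listing URL for a bucket.

    An unauthenticated GET on this URL returns the object list
    (HTTP 200) only if the bucket's IAM policy grants read access to
    the special member `allUsers`; a properly secured bucket answers
    401/403 instead.
    """
    return GCS_ENDPOINT.format(bucket=quote(bucket, safe=""))

# No request is actually sent here; this only shows the probe target.
print(anonymous_listing_url("example-ai-app-media"))
```

A single permissive binding on the bucket is enough to make every stored file world-readable in this way, which is why researchers could enumerate millions of user uploads.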

Source: Mashable

The exposed bucket contained 8.27 million media files in total, including more than 1.5 million user images and over 385,000 videos that Android users had uploaded for AI-powered editing. Additionally, the leak included 2.87 million AI-generated videos, more than 386,000 AI-generated audio files, and 2.87 million AI-generated images [1]. The app, which offered AI-generated makeovers for photos and videos, had been storing user-uploaded multimedia since its launch in mid-June 2023 and had accumulated 500,000 downloads [1][2].

Source: TechRadar

Critical Vulnerabilities Plague Majority of AI Applications

The Video AI Art Generator & Maker incident represents just one example of a broader pattern affecting unsecured AI applications. Cybersecurity experts analyzing hundreds of Google Play apps found that 72 percent had similar security flaws, making them a "treasure trove" of leaked data [2]. Many of these AI apps engage in a widely criticized practice known as "hardcoding secrets," where developers embed sensitive information such as API keys, passwords, or encryption keys directly into the app's source code [2].
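To illustrate the practice the researchers criticize, here is a minimal Python sketch (the key and variable names are fabricated for illustration) contrasting a hardcoded secret with one loaded from the runtime environment. A secret embedded in source code ships inside the app package, where anyone can recover it by decompiling the binary:

```python
import os

# Anti-pattern: a secret embedded in the source code is bundled into
# the distributed app and can be extracted by decompiling it.
API_KEY = "sk-live-0123456789abcdef"  # hardcoded -- extractable

# Safer pattern: read the secret from the environment (or a secrets
# manager) at runtime, so it never appears in the shipped artifact.
def load_api_key() -> str:
    key = os.environ.get("VIDEO_AI_API_KEY")
    if not key:
        raise RuntimeError("VIDEO_AI_API_KEY is not set")
    return key
```

The same principle applies on Android, where build-time secrets baked into an APK are readable with standard decompilation tools.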

The app was allegedly developed by Codeway Dijital Hizmetler Anonim Sirketi, a private company registered in Turkey. Notably, another Codeway app, Chat & Ask AI, also suffered from a misconfigured backend using Google Firebase, exposing 300 million messages tied to 25 million users in early February 2025 [1]. This pattern of vulnerabilities across multiple applications from the same developer raises concerns about systematic security oversights.

Privacy Risks Extend Beyond Photo Editing Apps

The scope of leaked personal data extends beyond image and video editing tools. Another app, IDMerit, used for identity verification, exposed know-your-customer data and personally identifiable information from users across 25 countries, predominantly in the U.S. [2]. The exposed information included full names, addresses, birthdates, IDs, and contact information, amounting to a full terabyte of data [2].

Researchers from Cybernews noted that developers secured the Video AI Art Generator & Maker database after disclosure, and IDMerit similarly resolved its vulnerabilities once notified [1][2]. However, the reactive nature of these fixes highlights a troubling trend in the AI app ecosystem.

Security Shortcuts Compromise User Data Protection

According to researchers, these incidents of exposed user data reveal how some AI apps prioritize fast product delivery while skipping crucial security measures, such as enabling authentication on the cloud storage buckets that hold user data, including user images and videos [1]. Cybersecurity experts warn that lax security practices among these types of AI apps pose a widespread risk to Android users, particularly as the apps often store user-uploaded files alongside AI-generated content [2].

The discovery that over 12 terabytes of users' media files were accessible via the exposed bucket in the Video AI Art Generator & Maker app alone demonstrates the massive scale of the privacy risk [2]. As AI-powered editing and generation tools continue to gain popularity, users should scrutinize the security practices of apps before uploading sensitive personal content, watching for transparent privacy policies and verified developer credentials on the Google Play store.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited