2 Sources
[1]
Top Android AI photo and video editor exposes nearly two million user images and videos
Misconfigured database holding millions of images and videos found online

* Cybernews found a misconfigured database in the "Video AI Art Generator & Maker" app
* The leak exposed 8.27m media files, including 2m private user photos and videos
* Developers secured the database after disclosure; similar flaws were seen in another Codeway app

Yet another misconfigured database leaking sensitive user data has been found, and this one is especially worrying because the leaked data is user-uploaded photos and videos. Researchers from Cybernews recently discovered that an Android app called "Video AI Art Generator & Maker" contained a misconfigured Google Cloud storage bucket accessible to anyone who knew where to look. In total, more than 1.5 million user images and more than 385,000 user videos were stored in the bucket. It also held more than 2.87 million AI-generated videos, more than 386,000 AI-generated audio files, and more than 2.87 million AI-generated images.

Several vulnerable apps

The app offered AI-generated makeovers for photos and videos, something that is particularly popular these days. It launched in mid-June 2023 and, according to Cybernews, has been storing the multimedia people upload ever since. In total, the exposed bucket contained 8.27 million media files, 2 million of which were private, user-uploaded content. The app was allegedly developed by Codeway Dijital Hizmetler Anonim Sirketi, a private company registered in Turkey, but we could not find this app on the Play Store or on the developer's official website. Codeway's dedicated Play Store page shows only three apps. However, the official website does feature a different app, called Chat & Ask AI, and this one also had a misconfigured backend using Google Firebase. In early February 2025, an independent researcher found that this app, one of the most popular in its category, exposed 300 million messages tied to 25 million users.
Still, Cybernews said it managed to get in touch with the developers, who secured the Video AI Art Generator & Maker database soon after. "This data leak shows how some AI apps prioritize fast product delivery, skipping crucial security features, such as enabling authentication for the critical cloud storage bucket used to store user data, including images and videos," the researchers explained.
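The kind of exposure described above — a storage bucket with no authentication required — is typically confirmed with a plain unauthenticated HTTP request against Google Cloud Storage's public JSON API. Below is a minimal, hedged sketch of such a check; the bucket name is hypothetical, and only the URL construction and status-code interpretation are shown (the actual network probe is left as a comment):

```python
# Sketch: how an open Google Cloud Storage bucket can be detected.
# The bucket name here is hypothetical; a 200 response to an
# unauthenticated listing request means anyone can enumerate objects.
from urllib.parse import quote


def listing_url(bucket: str) -> str:
    """Public JSON API endpoint that lists a bucket's objects."""
    return f"https://storage.googleapis.com/storage/v1/b/{quote(bucket)}/o"


def classify(status: int) -> str:
    """Interpret the HTTP status of an unauthenticated listing request."""
    if status == 200:
        return "public"      # anyone can enumerate (and likely fetch) objects
    if status in (401, 403):
        return "protected"   # credentials or IAM permission required
    if status == 404:
        return "not found"
    return "unknown"


if __name__ == "__main__":
    url = listing_url("example-app-user-uploads")  # hypothetical bucket
    print(url)
    # A real probe would fetch this URL with urllib.request.urlopen()
    # and pass the resulting status code to classify().
```

Locking such a bucket down amounts to removing any `allUsers` IAM binding and requiring authenticated access — precisely the step the researchers say was skipped.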
[2]
Unsecured AI apps are leaking personal data of Android users
AI apps are a "treasure trove" of leaked data, researchers say.

Not every AI tool you stumble across in your phone's app marketplace is the same. In fact, many of them may be more of a privacy gamble than you would have previously thought. A plethora of unlicensed or unsecured AI apps on the Google Play store for Android, including those marketed for identity verification and editing, have exposed billions of records and personal data, cybersecurity experts have confirmed.

A recent investigation by Cybernews found that one Android-available app in particular, "Video AI Art Generator & Maker," leaked 1.5 million user images, over 385,000 videos, and millions of AI-generated media files. The security flaw was spotted by researchers, who discovered a misconfiguration in a Google Cloud Storage bucket that left personal files vulnerable to outsiders. In total, the publication reported, over 12 terabytes of users' media files were accessible via the exposed bucket. The app had 500,000 downloads at the time.

Another app, called IDMerit, exposed know-your-customer data and personally identifiable information from users across 25 countries, predominantly in the U.S. The information included full names and addresses, birthdates, IDs, and contact details, constituting a full terabyte of data.

Both apps' developers resolved the vulnerabilities after researchers notified them. Still, cybersecurity experts warn that lax security trends among these types of AI apps pose a widespread risk to users. Many AI apps, which often store user-uploaded files alongside AI-generated content, also use a highly criticized practice known as "hardcoding secrets": embedding sensitive information such as API keys, passwords, or encryption keys directly into the app's source code. Cybernews found that 72 percent of the hundreds of Google Play apps researchers analyzed had similar security vulnerabilities.
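The "hardcoding secrets" practice criticized above is easy to spot precisely because the embedded strings survive in the shipped app: anyone who decompiles the APK can pattern-match for them. Below is a minimal sketch of that kind of scan; the two key formats are well-known public prefixes (Google API keys and AWS access key IDs), the sample "decompiled" snippet is fabricated for the demo, and real scanners use far larger rule sets:

```python
# Sketch: scanning decompiled app source for hardcoded secrets.
# The regexes match two well-known key formats; the sample input
# below is fabricated and stands in for decompiled Kotlin/Java code.
import re

SECRET_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}


def scan(source: str) -> list[tuple[str, str]]:
    """Return (rule name, matched string) for every secret found."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group()))
    return hits


# Fabricated snippet standing in for decompiled app source.
decompiled = 'val apiKey = "AIza' + "A" * 35 + '"'
print(scan(decompiled))
```

This is why researchers recommend keeping such credentials server-side, or injecting them at build or run time, rather than baking them into the client binary.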
Misconfigured Google Cloud storage in popular AI apps has exposed nearly 2 million private user images and videos from Android users. Cybernews researchers discovered that Video AI Art Generator & Maker and other unsecured AI applications leaked billions of records, with 72% of analyzed apps showing similar critical vulnerabilities that put personal data at risk.
A significant data leak affecting Android users has emerged from unsecured AI applications available on the Google Play store, exposing nearly 2 million private photos and videos. Cybernews researchers discovered that the "Video AI Art Generator & Maker" app contained a misconfigured Google Cloud storage bucket accessible to anyone who knew where to look, leaking personal data on an alarming scale [1].
The exposed bucket contained 8.27 million media files in total, including more than 1.5 million user images and over 385,000 videos that Android users had uploaded for AI-powered editing. Additionally, the leak included 2.87 million AI-generated videos, more than 386,000 AI-generated audio files, and 2.87 million AI-generated images [1]. The app, which offered AI-generated makeovers for photos and videos, had been storing user-uploaded multimedia since its launch in mid-June 2023 and had accumulated 500,000 downloads [1][2].
The Video AI Art Generator & Maker incident represents just one example of a broader pattern affecting unsecured AI applications. Cybersecurity experts analyzing hundreds of Google Play apps found that 72 percent had similar security flaws, making them a "treasure trove" of leaked data [2]. Many of these AI apps engage in a highly criticized practice known as "hardcoding secrets," where developers embed sensitive information such as API keys, passwords, or encryption keys directly into the app's source code [2].

The app was allegedly developed by Codeway Dijital Hizmetler Anonim Sirketi, a private company registered in Turkey. Notably, another Codeway app called Chat & Ask AI also suffered from a misconfigured backend using Google Firebase, exposing 300 million messages tied to 25 million users in early February 2025 [1]. This pattern of vulnerabilities across multiple applications from the same developer raises concerns about systematic security oversights.

The scope of the leaked personal data extends beyond image and video editing tools. Another app called IDMerit, used for identity verification, exposed know-your-customer data and personally identifiable information from users across 25 countries, predominantly in the U.S. [2]. The exposed information included full names and addresses, birthdates, IDs, and contact information, constituting a full terabyte of data [2].

Researchers from Cybernews noted that developers secured the Video AI Art Generator & Maker database after disclosure, and IDMerit similarly resolved its vulnerabilities once notified [1][2]. However, the reactive nature of these fixes highlights a troubling trend in the AI app ecosystem.
According to researchers, these incidents of exposed user data reveal how some AI apps prioritize fast product delivery while skipping crucial security features, such as enabling authentication for the critical cloud storage buckets used to hold user data, including images and videos [1]. Cybersecurity experts warn that lax security trends among these types of AI apps pose a widespread risk to Android users, particularly as these applications often store user-uploaded files alongside AI-generated content [2].

The discovery that over 12 terabytes of users' media files were accessible via the exposed bucket in the Video AI Art Generator & Maker app alone demonstrates the massive scale of potential privacy risks [2]. As AI-powered editing and generation tools continue to gain popularity, users should scrutinize the security practices of apps before uploading sensitive personal content, watching for transparent privacy policies and verified developer credentials on the Google Play store.

Summarized by Navi