2 Sources
[1]
Vibe Coding Is Causing 'Thousands' of Data Security Vulnerabilities, Says Research
Vibe coding, which allows users who lack technical skills to create software applications with AI, has exploded in popularity in recent years, letting non-developers churn out apps in mere hours. But if you were thinking of turning to vibe coding to make a web app, cybersecurity firm RedAccess has some unsettling findings about the security vulnerabilities that can arise.

In research first shared with Wired, a team led by security researcher Dor Zvi identified 5,000 vibe-coded web applications created using the AI software development tools Lovable, Replit, Base44, and Netlify that had "virtually no security or authentication of any kind." RedAccess claims that in some cases, anyone who found the correct web URL could access the apps and their data. Meanwhile, other vibe-coded web apps had "only trivial barriers" to accessing app data -- for example, signing in with "any email address." Zvi added that in 40% of cases, the apps exposed sensitive information such as medical data, financial data, corporate presentations, strategy documents, and conversations customers had with chatbots. This sensitive data allegedly included hospital work assignments containing the personally identifiable information of doctors, a firm's go-to-market strategy presentation, and sales and financial records from a variety of companies.

Joel Margolis, a security researcher, outlined some of the issues involved in democratizing access to app development. "Somebody from a marketing team wants to create a website. They're not an engineer and they probably have little to no security background or knowledge," he told Wired. He added that unless these tools are asked to create secure applications, "they're not going to go out of their way to do that."

Many of the companies featured in the research have objected to its methodology.
For example, Blake Brodie, a spokesperson for Wix, the owner of Base44, told Axios that RedAccess "deliberately withheld the URLs that would have allowed us to identify and examine the applications in question." In addition, he said the applications which were allegedly exposed had been "deliberately set to public by their owners." Brodie also told Wired that two examples of Base44-produced websites it was shown appeared to be test sites or contained AI-generated data. Meanwhile, Samyutha Reddy, a spokesperson for Lovable, told Axios that RedAccess's research did not "include any URLs or technical specifics that would allow us to verify, investigate, or act on the findings described," though the company said it was investigating the incident.
[2]
Vibe coding exposed 380,000 corporate apps -- 5,000 held sensitive data
Most enterprise security programs were built to protect servers, endpoints, and cloud accounts. None of them was built to find a customer intake form that a product manager vibe coded on Lovable over a weekend, connected to a live Supabase database, and deployed on a public URL indexed by Google. That gap now has a price tag.

New research from Israeli cybersecurity firm RedAccess quantifies the scale. The firm discovered 380,000 publicly accessible assets, including applications, databases, and related infrastructure, built with vibe coding tools from Lovable, Base44, and Replit, as well as deployment platform Netlify. Roughly 5,000 of those assets, about 1.3%, contained sensitive corporate information. CEO Dor Zvi said his team found the exposure while researching shadow AI for customers. Axios independently verified multiple exposed apps, and Wired confirmed the findings separately.

Among the verified exposures: a shipping company app detailed which vessels were expected at which ports. An internal health company application listed active clinical trials across the U.K. Full, unredacted customer service conversations for a British cabinet supplier sat on the open web. Internal financial information for a Brazilian bank was accessible to anyone who found the URL. The exposed data also included patient conversations at a children's long-term care facility, hospital doctor-patient summaries, incident response records at a security company, and ad purchasing strategies.

Depending on jurisdiction and the data involved, the healthcare and financial exposures may trigger regulatory obligations under HIPAA, UK GDPR, or Brazil's LGPD. RedAccess also found phishing sites built on Lovable that impersonated Bank of America, FedEx, Trader Joe's, and McDonald's. Lovable said it had begun investigating and removing the phishing sites.
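The core exposure pattern described here, apps that hand over data to anyone who finds the URL, can be made concrete with a small sketch. The classification rules below are illustrative assumptions about what a credential-less probe would reveal, not RedAccess's actual methodology:

```python
# Sketch: classify the result of an unauthenticated probe of an app URL.
# An app that returns its records to a request carrying no credentials is
# exposed in exactly the way the research describes. These rules are
# standard HTTP semantics, but the categories are illustrative assumptions.

def classify_probe(status_code, body_has_records):
    """status_code: HTTP status from a credential-less GET.
    body_has_records: whether the response body contained app data."""
    if status_code in (401, 403):
        return "auth required"          # the minimum bar
    if status_code == 200 and body_has_records:
        return "exposed"                # anyone with the URL reads the data
    if status_code == 200:
        return "public page, no data"   # e.g. a login or landing page
    return "inconclusive"

print(classify_probe(200, True))   # exposed
print(classify_probe(401, False))  # auth required
```

The "only trivial barriers" cases in the research, such as accepting any email address at sign-in, would still classify as "auth required" here, which is why a status-code check alone understates the problem.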
The defaults are the problem

Privacy settings on several vibe coding platforms make apps publicly accessible unless users manually switch them to private. Many of these applications get indexed by Google and other search engines. Anyone can stumble across them. Zvi put it plainly: "I don't think it's feasible to educate the whole world around security. My mother is [vibe coding] with Lovable, and no offense, but I don't think she will think about role-based access."

This is not an isolated finding

In October 2025, Escape.tech scanned 5,600 publicly available vibe-coded applications and found more than 2,000 high-impact vulnerabilities, over 400 exposed secrets including API keys and access tokens, and 175 instances of personal data exposure containing medical records and bank account numbers. Every vulnerability Escape found was in a live production system, discoverable within hours. The full report documents the methodology. Escape separately raised an $18 million Series A led by Balderton in March 2026, citing the security gap opened by AI-generated code as a core market thesis.

Gartner's "Predicts 2026" report forecasts that by 2028, prompt-to-app approaches adopted by citizen developers will increase software defects by 2,500%. Gartner identifies a new class of defect where AI generates code that is syntactically correct but lacks awareness of broader system architecture and nuanced business rules. The remediation costs for these deep contextual bugs will consume budgets previously allocated to innovation.

Shadow AI is the multiplier

IBM's 2025 Cost of a Data Breach Report found that 20% of organizations experienced breaches linked to shadow AI. Those incidents added $670,000 to the average breach cost, pushing the shadow AI breach average to $4.63 million. Among organizations that reported AI-related breaches, 97% lacked proper access controls. And 63% of breached organizations had no AI governance policy in place.
Shadow AI breaches disproportionately exposed customer personally identifiable information at 65%, compared to 53% across all breaches, and affected data distributed across multiple environments 62% of the time. Only 34% of organizations with AI governance policies performed regular audits for unsanctioned AI tools. VentureBeat's shadow AI research estimated that actively used shadow apps could more than double by mid-2026. Cyberhaven data found 73.8% of ChatGPT workplace accounts in enterprise environments were unauthorized.

What to do first

The audit framework below gives CISOs a starting point for triaging vibe-coded app risk across five domains. The CISO who treats this as a policy problem will write a memo. The CISO who treats this as an architecture problem will deploy discovery scanning across the four largest vibe coding domains, require pre-deployment security review, extend the existing AppSec pipeline to citizen-built apps, and add those domains to DLP rules before the next board meeting. One of those CISOs avoids the next headline.

The vibe coding exposure RedAccess documented is not a separate problem from shadow AI. It is shadow AI's production layer. Employees build internal tools on platforms that default to public, skip authentication, and never appear on any asset inventory, which means the applications stay invisible to security teams until a breach surfaces or a reporter finds them first. Traditional asset discovery tools were designed to find servers, containers, and cloud instances. They have no way to find a marketing configurator that a product manager built on Lovable over a weekend, connected to a Supabase database holding live customer records, and shared with three external contractors through a public URL that Google indexed within hours.

The detection challenge runs deeper than most security teams realize.
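A first pass at the discovery scanning suggested above is to sweep egress logs for hits on the major vibe coding platform domains. The sketch below assumes a simplified proxy-log export and a list of platform apex domains; both are illustrative assumptions to adapt to your own environment, not part of the research:

```python
# Sketch: flag egress log entries that hit vibe coding platform domains.
# The suffix list and log format here are illustrative assumptions;
# verify both against your own proxy/DNS exports before relying on them.

# Apex domains commonly used for apps published on these platforms
# (assumed here for illustration).
VIBE_SUFFIXES = (".lovable.app", ".base44.app", ".replit.app", ".netlify.app")

def flag_vibe_hosts(log_lines):
    """Return the unique hostnames from `log_lines` that sit on a vibe
    coding platform domain. Each line is assumed to be the simplified
    export format 'timestamp,user,hostname'."""
    hits = set()
    for line in log_lines:
        parts = line.strip().split(",")
        if len(parts) < 3:
            continue  # skip malformed rows
        host = parts[2].lower()
        if host.endswith(VIBE_SUFFIXES):
            hits.add(host)
    return sorted(hits)

logs = [
    "2025-11-20T09:14,alice,intake-form.lovable.app",
    "2025-11-20T09:15,bob,www.example.com",
    "2025-11-21T10:02,carol,pricing-tool.netlify.app",
]
print(flag_vibe_hosts(logs))  # ['intake-form.lovable.app', 'pricing-tool.netlify.app']
```

A hit list like this only tells you that employees reached deployed apps; inventorying what each app holds and whether it requires authentication is the follow-up work the article describes.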
Vibe-coded apps deploy on platform subdomains that rotate frequently and often sit behind CDN layers that mask origin infrastructure. Organizations running mature secure web gateways, CASB, or DNS logging can detect employee access to these domains. But detecting access is not the same as inventorying what was deployed, what data it holds, or whether it requires authentication. Without explicit monitoring of the major vibe coding platforms, the apps themselves generate little signal in conventional SIEM or endpoint telemetry. They exist in a gap between network visibility and application inventory that most security stacks were never architected to cover.

The platform responses tell the story

Replit CEO Amjad Masad said RedAccess gave his company only 24 hours before going to the press. Base44 (via Wix) and Lovable both said RedAccess did not include the URLs or technical specifics needed to verify the findings. None of the platforms denied that the exposed applications existed.

Wiz Research separately discovered in July 2025 that Base44 contained a platform-wide authentication bypass. Exposed API endpoints allowed anyone to create a verified account on private apps using nothing more than a publicly visible app_id. The flaw meant that showing up to a locked building and shouting a room number was enough to get the doors open. Wix fixed the vulnerability within 24 hours of Wiz's report, but the incident exposed how thin the authentication layer is on platforms where millions of apps are being built by users who assume the platform handles security for them.

The pattern is consistent across the vibe coding ecosystem. CVE-2025-48757 documented insufficient or missing Row-Level Security policies in Lovable-generated Supabase projects. Certain queries skipped access checks entirely, exposing data across more than 170 production applications. The AI generated the database layer.
It did not generate the security policies that should have restricted who could read the data. Lovable disputes the CVE classification, stating that individual customers accept responsibility for protecting their application data. That dispute itself illustrates the core tension: platforms that market to nontechnical builders are shifting security responsibility to users who do not know it exists.

What this means for security teams

The RedAccess findings complete the picture. Professional agents face credential theft on one layer; citizen platforms face data exposure on the other. The structural failure is the same: security review happens after deployment or not at all. Identity and access management systems track human users and service accounts. They do not track the Lovable app a sales operations analyst deployed last Tuesday, connected to a live CRM database, and shared with three external contractors via a public URL. Nobody asks whether the database policies restrict who can read the data or whether the API endpoints require authentication. When those questions go unasked at AI-generation speed, the exposure scales faster than any human review process can match.

The question for security leaders is not whether vibe-coded apps are inside their perimeter. The question is how many, holding what data, visible to whom. The RedAccess findings suggest the answer, for most organizations, is worse than anyone in the C-suite currently knows. The organizations that start scanning this week will find them. The ones that wait will read about themselves next.
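The Row-Level Security gap behind CVE-2025-48757 reduces to one missing per-row check. Supabase enforces RLS inside Postgres, so the Python below is only a toy model of the idea, with hypothetical data, not the platform's actual mechanism:

```python
# Toy illustration of the Row-Level Security (RLS) gap described above.
# Without a per-row ownership check, every caller sees every row; the
# records and field names here are hypothetical.

RECORDS = [
    {"owner": "alice", "note": "Q3 pricing draft"},
    {"owner": "bob",   "note": "customer complaint log"},
]

def query_without_rls(records, requester):
    # What a generated data layer does when no policy exists:
    # the requester's identity is never consulted.
    return list(records)

def query_with_rls(records, requester):
    # The check a policy should enforce: rows are visible
    # only to their owner.
    return [r for r in records if r["owner"] == requester]

print(len(query_without_rls(RECORDS, "alice")))  # 2 -- everything leaks
print(len(query_with_rls(RECORDS, "alice")))     # 1 -- only alice's row
```

In a real Supabase project the equivalent fix is typically enabling RLS on the table and writing a policy keyed to the authenticated user's ID; the point of the sketch is only that the filter must live somewhere the client cannot bypass.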
Israeli cybersecurity firm RedAccess discovered 380,000 publicly accessible assets built with vibe coding tools, with 5,000 containing sensitive corporate, medical, and financial information. The findings highlight critical data security vulnerabilities as non-technical users create AI-generated applications without proper safeguards, exposing patient records, banking data, and corporate strategy documents.
Vibe coding has triggered a sprawling security crisis across enterprise environments, with Israeli cybersecurity firm RedAccess uncovering 380,000 publicly accessible assets built with AI app-generation platforms including Lovable, Base44, Replit, and deployment service Netlify [2]. Among these discoveries, approximately 5,000 assets (roughly 1.3% of the total) contained sensitive data ranging from medical and financial information to corporate strategy documents [1]. Security researcher Dor Zvi led the investigation, which Axios and Wired independently verified, confirming multiple exposed corporate apps with virtually no security or authentication mechanisms in place [2].
Source: VentureBeat
The exposed sensitive data included hospital work assignments containing personally identifiable information of doctors, patient conversations at children's long-term care facilities, and doctor-patient summaries [1][2]. Financial exposures ranged from internal banking data at a Brazilian institution to sales records across multiple companies, while corporate intelligence leaks included go-to-market strategy presentations and ad purchasing plans [2]. In some cases, anyone discovering the correct URL could access these applications and their data, while other citizen-built applications required only trivial barriers such as signing in with any email address [1].

The core problem stems from how prompt-to-app approaches enable users without technical expertise to build functional applications in hours, yet these same users lack the security knowledge to protect what they create. Security researcher Joel Margolis explained the fundamental challenge: "Somebody from a marketing team wants to create a website. They're not an engineer and they probably have little to no security background or knowledge" [1]. Unless specifically instructed to build secure applications, AI-generated code defaults to functionality over protection, creating data security vulnerabilities at scale [1].

Privacy settings on several vibe coding platforms default to making apps publicly accessible unless users manually switch them to private, with many applications getting indexed by Google and other search engines [2]. Zvi captured the education challenge bluntly: "I don't think it's feasible to educate the whole world around security. My mother is [vibe coding] with Lovable, and no offense, but I don't think she will think about role-based access" [2]. This democratization of app development has created what enterprise security teams now recognize as shadow AI's production layer: applications built outside IT oversight that connect to live databases and process real business data [2].
Source: PC Magazine
IBM's 2025 Cost of a Data Breach Report found that 20% of organizations experienced breaches linked to shadow AI, with those incidents adding $670,000 to average breach costs and pushing the shadow AI breach average to $4.63 million [2]. Among organizations reporting AI-related breaches, 97% lacked proper access controls, while 63% had no AI governance policy in place [2]. These shadow AI breaches disproportionately exposed customer personally identifiable information at 65%, compared to 53% across all breaches, with affected data distributed across multiple environments 62% of the time [2].

Gartner's "Predicts 2026" report forecasts that by 2028, prompt-to-app approaches adopted by citizen developers will increase software defects by 2,500% [2]. Gartner identifies a new defect class where AI generates syntactically correct code that lacks awareness of broader system architecture and nuanced business rules, with remediation costs consuming budgets previously allocated to innovation [2]. Separate research from Escape.tech in October 2025 scanned 5,600 publicly available vibe-coded applications and discovered over 2,000 high-impact vulnerabilities, more than 400 exposed secrets including API keys, and 175 instances of personal data exposure containing patient records and bank account numbers [2].
Blake Brodie, spokesperson for Wix, which owns Base44, told Axios that RedAccess "deliberately withheld the URLs that would have allowed us to identify and examine the applications in question," adding that the exposed applications had been "deliberately set to public by their owners" [1]. Brodie also noted that two Base44-produced websites examined appeared to be test sites or contained AI-generated data [1]. Samyutha Reddy, spokesperson for Lovable, stated that RedAccess's research did not "include any URLs or technical specifics that would allow us to verify, investigate, or act on the findings described," though the company began investigating [1].

Depending on jurisdiction and the data types involved, healthcare and financial exposures may trigger regulatory obligations under HIPAA, UK GDPR, or Brazil's LGPD [2]. RedAccess also identified phishing sites built on Lovable impersonating Bank of America, FedEx, Trader Joe's, and McDonald's, with Lovable confirming it had begun investigating and removing these sites [2]. CISOs now face a choice between treating this as a policy problem requiring memos or as an architecture problem demanding discovery scanning across vibe coding domains, pre-deployment security review, and extension of existing AppSec pipelines to citizen-built applications [2].
Summarized by Navi
26 Nov 2025 • Technology