Your product manager built a customer intake form on Lovable over a weekend. It's connected to a live Supabase database, deployed on a public URL, and indexed by Google. You have no idea it exists. That gap now has a price tag: $4.63 million, the average cost of a shadow AI breach according to IBM's 2025 Cost of a Data Breach Report.

New research from Israeli cybersecurity firm RedAccess reveals the scale: 380,000 publicly accessible assets built with vibe coding tools from Lovable, Base44, and Replit, plus deployment platform Netlify. Roughly 5,000 of those assets, about 1.3%, contained sensitive corporate information. Axios and Wired independently verified the findings. Among the exposures: a shipping company's port schedules, internal clinical trial data from a health company, unredacted customer service conversations for a British cabinet supplier, and internal financial data for a Brazilian bank. Also exposed: patient conversations at a children's long-term care facility, doctor-patient summaries from a hospital, incident response records at a security company, and ad purchasing strategies.

This is not an isolated finding. In October 2025, Escape.tech scanned 5,600 publicly available vibe-coded applications and found more than 2,000 high-impact vulnerabilities, over 400 exposed secrets including API keys and access tokens, and 175 instances of personal data exposure containing medical records and bank account numbers. Every vulnerability was in a live production system, discoverable within hours. Escape.tech subsequently raised an $18 million Series A led by Balderton in March 2026, citing the security gap opened by AI-generated code as a core market thesis.

Gartner's 'Predicts 2026' report forecasts that by 2028, prompt-to-app approaches adopted by citizen developers will increase software defects by 2,500%. Gartner identifies a new class of defect where AI generates code that is syntactically correct but lacks awareness of broader system architecture and nuanced business rules. The remediation costs for these deep contextual bugs will consume budgets previously allocated to innovation.

The Structural Failure: Defaults Are the Problem

Privacy settings on several vibe coding platforms make apps publicly accessible unless users manually switch them to private. Many of these applications get indexed by Google and other search engines. Anyone can stumble across them. RedAccess CEO Dor Zvi put it plainly: “I don’t think it’s feasible to educate the whole world around security. My mother is [vibe coding] with Lovable, and no offense, but I don’t think she will think about role-based access.”

The pattern is consistent across the vibe coding ecosystem. CVE-2025-48757 documented insufficient or missing Row-Level Security policies in Lovable-generated Supabase projects. Certain queries skipped access checks entirely, exposing data across more than 170 production applications. The AI generated the database layer. It did not generate the security policies that should have restricted who could read the data. Lovable disputes the CVE classification, stating that individual customers accept responsibility for protecting their application data. That dispute itself illustrates the core tension: platforms that market to nontechnical builders are shifting security responsibility to users who do not know it exists.
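The failure mode is easy to state in code. The sketch below models, in plain Python, the difference between a query with no row-level policy and one scoped to the row's owner. The table and column names are illustrative, not taken from any real Lovable-generated schema.

```python
# Minimal model of what a row-level security (RLS) policy enforces.
# CUSTOMER_ROWS stands in for a table of customer records; the field
# names are illustrative placeholders.

CUSTOMER_ROWS = [
    {"id": 1, "owner_id": "user-a", "email": "a@example.com"},
    {"id": 2, "owner_id": "user-b", "email": "b@example.com"},
]

def select_without_rls(rows):
    """The vulnerable pattern CVE-2025-48757 describes: the query
    returns every row because no policy restricts reads to the
    row's owner."""
    return rows

def select_with_rls(rows, requesting_user_id):
    """An owner-scoped policy: a caller can only read rows whose
    owner_id matches their own identity."""
    return [r for r in rows if r["owner_id"] == requesting_user_id]

# Without RLS, any caller sees everything...
assert len(select_without_rls(CUSTOMER_ROWS)) == 2
# ...with it, callers see only their own rows, and strangers see none.
assert select_with_rls(CUSTOMER_ROWS, "user-a") == [CUSTOMER_ROWS[0]]
assert select_with_rls(CUSTOMER_ROWS, "intruder") == []
```

In an actual Supabase project the equivalent control lives in Postgres: `ALTER TABLE ... ENABLE ROW LEVEL SECURITY` plus a `CREATE POLICY ... USING (auth.uid() = owner_id)` clause. The point of the sketch is that this control is a separate artifact from the query itself, which is exactly the piece the AI did not generate.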

Wiz Research separately discovered in July 2025 that Base44 contained a platform-wide authentication bypass. Exposed API endpoints allowed anyone to create a verified account on private apps using nothing more than a publicly visible app_id, the equivalent of a locked building that opens its doors to anyone who can read the room number posted outside. Wix, which owns Base44, fixed the vulnerability within 24 hours of Wiz's report, but the incident exposed how thin the authentication layer is on platforms where millions of apps are built by users who assume the platform handles security for them.
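A hypothetical model of the flaw class Wiz described, contrasted with a hardened variant, can be sketched as follows. The function and field names are illustrative, not Base44's actual API; the point is that an identifier printed in every app URL cannot double as a credential.

```python
import secrets

# PRIVATE_APPS stands in for the platform's registry of private apps.
PRIVATE_APPS = {"app_1234": {"invite_tokens": set()}}

def register_broken(app_id, email):
    """Vulnerable pattern: the publicly visible app_id is the only
    thing checked, so any attacker who sees a URL can register.
    (email is accepted but never verified, mirroring the bypass.)"""
    return app_id in PRIVATE_APPS

def issue_invite(app_id):
    """App owner issues an unguessable, single-use invite token."""
    token = secrets.token_urlsafe(32)
    PRIVATE_APPS[app_id]["invite_tokens"].add(token)
    return token

def register_fixed(app_id, email, invite_token):
    """Hardened pattern: registration additionally requires a valid
    invite token, which is consumed on use."""
    tokens = PRIVATE_APPS.get(app_id, {}).get("invite_tokens", set())
    if invite_token in tokens:
        tokens.discard(invite_token)  # single use
        return True
    return False

assert register_broken("app_1234", "attacker@evil.example")   # bypass works
token = issue_invite("app_1234")
assert register_fixed("app_1234", "invited@corp.example", token)
assert not register_fixed("app_1234", "attacker@evil.example", "guess")
```

The hardened variant is one of several viable designs (verified email domains and SSO enforcement are others); what they share is a secret the attacker cannot read off the public surface of the app.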

Shadow AI: The Multiplier

IBM's 2025 Cost of a Data Breach Report found that 20% of organizations experienced breaches linked to shadow AI. Those incidents added $670,000 to the average breach cost, pushing the shadow AI breach average to $4.63 million. Among organizations that reported AI-related breaches, 97% lacked proper access controls, and 63% of breached organizations had no AI governance policy in place. Shadow AI breaches disproportionately exposed customer personally identifiable information (65%, versus 53% across all breaches) and involved data distributed across multiple environments 62% of the time. Only 34% of organizations with AI governance policies performed regular audits for unsanctioned AI tools, and Cyberhaven found that 73.8% of ChatGPT accounts used in enterprise environments were unauthorized.

The vibe coding exposure RedAccess documented is not a separate problem from shadow AI. It is shadow AI's production layer. Employees build internal tools on platforms that default to public, skip authentication, and never appear on any asset inventory, which means the applications stay invisible to security teams until a breach surfaces or a reporter finds them first. Traditional asset discovery tools were designed to find servers, containers, and cloud instances. They have no way to find a marketing configurator that a product manager built on Lovable over a weekend, connected to a Supabase database holding live customer records, and shared with three external contractors through a public URL that Google indexed within hours.

Winners & Losers

Winners: Security vendors like RedAccess, Escape.tech, and Wiz are positioned to capture a growing market for shadow AI detection and remediation; their findings surface critical vulnerabilities, driving demand for their services and validating investor interest. AI governance and compliance consultancies will also benefit as organizations scramble for policies and audits to manage shadow AI risk. Platform providers that respond well, as Wix did in patching Base44 within a day, can build trust and differentiate themselves from competitors.

Losers: Organizations running vibe-coded apps without oversight face data breaches, financial losses averaging $4.63 million, and reputational harm. Platform providers with weak security defaults, Lovable and Base44 among them, suffer brand damage as vulnerabilities and phishing incidents become public. Customers whose PII is exposed face privacy violations and potential identity theft.

Second-Order Effects

The market will bifurcate: low-code/no-code platforms that fail to embed security will be marginalized, while those that offer robust built-in protections and compliance features will gain enterprise trust. 'Shadow AI' will become a board-level risk, driving permanent investment in AI governance frameworks and continuous monitoring solutions. Regulatory scrutiny will intensify: the healthcare and financial exposures may trigger obligations under HIPAA, UK GDPR, or Brazil's LGPD. Expect class-action lawsuits and regulatory fines that will set precedents for AI-generated code liability.

Phishing sites built on Lovable impersonating Bank of America, FedEx, Trader Joe’s, and McDonald’s show that vibe coding platforms are now attack vectors. The same ease of use that empowers citizen developers also empowers threat actors. This will force platform providers to implement proactive abuse detection and takedown processes, or face regulatory action.

Market / Industry Impact

The cybersecurity industry will see a surge in demand for tools that discover and secure vibe-coded applications. Expect M&A activity as larger security vendors acquire startups specializing in shadow AI detection. The application security (AppSec) market will expand to cover citizen-built apps, with new categories for 'AI-generated code security' emerging. Gartner's prediction of a 2,500% increase in software defects by 2028 will accelerate adoption of AI-powered code analysis and remediation tools.

For enterprise software buyers, security will become a key differentiator when evaluating low-code/no-code platforms. Platforms that cannot demonstrate robust security defaults, authentication controls, and data loss prevention will lose enterprise deals. This will force platform providers to invest heavily in security features or risk obsolescence.

Executive Action

  • Run DNS + certificate transparency scans for Lovable, Replit, Base44, and Netlify subdomains tied to corporate assets. Identify all vibe-coded apps deployed by employees.
  • Block unauthenticated apps from accessing internal data sources. Require SSO/SAML integration before any vibe-coded app can connect to corporate systems.
  • Publish an acceptable-use policy for AI coding tools with a pre-deployment review gate. Extend the existing AppSec pipeline to cover vibe-coded deployments, and add vibe coding platform domains to DLP rules.
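The discovery step above can be sketched as a simple filter over certificate transparency (CT) results. The sample hostnames and the "acmecorp" keyword are placeholders; in practice the entries would come from a CT log search service, and the platform suffixes should be verified against each vendor's current deployment domains.

```python
# Assumed platform apex domains for vibe-coded deployments; verify
# the current suffixes each vendor actually uses before relying on this.
PLATFORM_SUFFIXES = (".lovable.app", ".base44.app", ".replit.app", ".netlify.app")

def flag_suspect_hosts(ct_hostnames, org_keywords):
    """Return hostnames that sit on a vibe coding platform domain AND
    reference one of the organization's keywords - candidates for the
    shadow-app inventory, sorted and de-duplicated."""
    suspects = []
    for host in ct_hostnames:
        h = host.lower().strip()
        if h.endswith(PLATFORM_SUFFIXES) and any(k in h for k in org_keywords):
            suspects.append(h)
    return sorted(set(suspects))

# Placeholder CT entries for a fictional company "acmecorp".
entries = [
    "acmecorp-intake.lovable.app",   # shadow app worth reviewing
    "dashboard.acmecorp.com",        # normal corporate host, ignored
    "acmecorp-demo.netlify.app",     # shadow app worth reviewing
    "random-portfolio.netlify.app",  # unrelated third party, ignored
]
print(flag_suspect_hosts(entries, ["acmecorp"]))
# -> ['acmecorp-demo.netlify.app', 'acmecorp-intake.lovable.app']
```

Keyword matching on hostnames is a coarse first pass. It will miss apps named without any corporate reference, which is why the SSO gate and DLP rules in the list above matter as compensating controls.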

Why This Matters

The RedAccess findings complete the picture: shadow AI has a production layer that is invisible to traditional security tools. The organizations that start scanning this week will find their exposed apps; the ones that wait will read about themselves in the next headline. The cost of inaction is not just a breach. It is regulatory fines, reputational damage, and lost customer trust.

Final Take

The vibe coding revolution is real, but its security implications are dire. The platforms that enable rapid application development have failed to embed basic security controls, and the organizations that adopt these tools without governance are courting disaster. The CISO who treats this as a policy problem will write a memo. The CISO who treats this as an architecture problem will deploy discovery scanning, require pre-deployment security review, extend the AppSec pipeline, and add DLP rules before the next board meeting. One of those CISOs avoids the next headline. The choice is clear.




Source: VentureBeat


Intelligence FAQ

What is shadow AI, and why is it now a crisis?
Shadow AI refers to unauthorized AI tools used by employees without IT approval. The crisis is that vibe-coded apps, built with tools like Lovable, are deployed publicly by default, exposing sensitive corporate data. RedAccess found roughly 5,000 such apps leaking data, and IBM reports shadow AI breaches cost $4.63 million on average.

How can security teams discover vibe-coded apps in their environment?
Run DNS and certificate transparency scans for subdomains of Lovable, Replit, Base44, and Netlify. Use security tools that monitor these platforms, extend your existing AppSec pipeline to scan for citizen-built apps, and add these domains to DLP rules.

What are the regulatory implications of these exposures?
Exposed healthcare data may violate HIPAA; EU citizen data may breach GDPR; Brazilian data may trigger LGPD. Regulatory fines, class-action lawsuits, and reputational damage are likely. Organizations must implement governance policies and regular audits.

Which industries are most exposed?
Any industry with sensitive data: healthcare, finance, legal, and technology. The RedAccess findings included a shipping company, a health company, a bank, and a children's care facility. Customer PII was exposed in 65% of shadow AI breaches, higher than the 53% average across all breaches.

What immediate steps should security teams take?
1) Discover all vibe-coded apps via domain scans. 2) Block unauthenticated apps from accessing internal data. 3) Publish an AI acceptable-use policy with pre-deployment review. 4) Extend AppSec and DLP coverage to vibe coding platforms. 5) Conduct regular audits for unsanctioned AI tools.