RedAccess, an Israeli cybersecurity firm, uncovered 380,000 publicly accessible assets built with popular AI code generation tools. About 5,000 of those exposed sensitive corporate data, revealing a critical gap in enterprise security infrastructure.

The discovery highlights a new threat vector: shadow AI applications. Employees use tools like Lovable, Base44, and Replit to rapidly prototype and deploy applications without involving security teams. A product manager can build a customer intake form connected to a live database, deploy it to a public URL, and have it indexed by search engines within hours. Traditional security programs never see it coming.

This mirrors the S3 bucket crisis of the mid-2010s, when misconfigured Amazon cloud storage exposed millions of records. Then, as now, organizations discovered that well-meaning developers had inadvertently leaked data at scale. The pattern is repeating with AI-accelerated development.

The math is stark. RedAccess found roughly 1.3% of these vibe-coded apps contained sensitive information. With thousands of enterprises using these tools, the aggregate exposure becomes massive. Employees aren't acting maliciously. They're shipping faster than ever before, which is the entire value proposition of AI coding tools. Security simply hasn't caught up.
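That 1.3% figure follows directly from the reported counts. A quick sanity check of the arithmetic (assuming the 380,000 and 5,000 figures are exact):

```python
# Exposure-rate arithmetic from the figures reported above.
total_assets = 380_000   # publicly accessible AI-built assets found
exposed = 5_000          # assets leaking sensitive corporate data

rate = exposed / total_assets
print(f"{rate:.1%}")     # prints "1.3%"
```

Even at a low per-app rate, the absolute number of leaking applications scales linearly with adoption, which is why the aggregate exposure grows so quickly.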

CEO Dor Zvi's findings underscore a painful reality for CISO teams: they can no longer control where developers build applications. Traditional perimeter-based security, firewalls, and endpoint protection are irrelevant when the threat lives on public Netlify URLs and exposed databases outside the corporate network.

Organizations now face three urgent problems. First, they cannot inventory shadow AI applications at scale without new tooling. Second, they lack policies governing when employees can use AI code generation tools. Third, they have no enforcement mechanism short of blocking access entirely, which kills productivity.

The solution requires rethinking enterprise security from scratch. Teams need continuous visibility into what employees build, wherever they choose to deploy it.