The promise of "vibe-coding"—the ability to manifest complex software through little more than natural language prompts—rests on a delicate layer of trust. That layer appears to have fractured at Lovable, a platform designed to turn AI-driven conversations into functional applications. A security researcher, operating under the pseudonym @weezerOSINT, recently revealed that the service exposed a vast trove of sensitive data, including source code, database credentials, and internal AI chat histories, to anyone with a free account.
The exposure, surfaced via the platform’s application programming interface (API), reportedly affects projects created before November 2025. By accessing the API, the researcher was able to view the underlying logic and private data of other users’ projects, effectively pulling back the curtain on the "vibe-coding" process. The leak highlights a recurring tension in the current AI boom: the speed at which these platforms are deployed often outpaces the implementation of basic security hygiene.
Perhaps most telling is how the vulnerability was discovered. The researcher noted that they utilized xAI’s Grok 4.2 model to conduct the audit, identifying the exposure in just 30 minutes. Before the advent of such advanced LLMs, finding a flaw of this scale would typically require hours or days of manual reconnaissance. It is a stark reminder of a new symmetry in the software landscape: as AI lowers the barrier to building applications, it simultaneously equips researchers—and potentially bad actors—with the tools to dismantle them with equal efficiency.
Although the issue was reported through the vulnerability disclosure platform HackerOne in early March, the researcher demonstrated this week that projects predating the fix remain vulnerable. As the industry pivots toward autonomous agents and automated development, the Lovable breach serves as a quiet warning that the "vibes" of a platform are only as secure as the code beneath them.
With reporting from Fast Company.