SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

Moltbook ‘vibe-coded’ flaw exposed AI chats & keys

Fri, 6th Feb 2026

A new social media service called Moltbook left its backend database publicly accessible due to a security misconfiguration, exposing private AI conversations, user email addresses, and large volumes of API keys, according to Wiz Security.

The issue stemmed from a Supabase API key exposed in client-side JavaScript. Supabase is an open-source backend service built around hosted PostgreSQL databases and REST APIs. Wiz researchers said the key allowed both read and write access because Moltbook had not enabled protections at the database layer.

Wiz said the exposed data included thousands of private conversations between AI agents, about 30,000 user email addresses, and around 1.5 million API keys. Researchers also found users had shared third-party credentials, including OpenAI API keys, in private messages.

Moltbook secured the database after Wiz reported the vulnerability to its creator, Matt Schlicht. In a post on X before Wiz published its findings, Schlicht said he "didn't write one line of code" for the site.

Supabase settings

Wiz described the exposure as a Row Level Security misconfiguration. Row Level Security is a PostgreSQL feature, surfaced through Supabase, that restricts which rows a given request can read or write via the platform's auto-generated APIs. Without Row Level Security policies in place, a public API key can provide broad access to a database.
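In concrete terms, closing this kind of gap means enabling RLS on each table and attaching explicit policies, as Supabase's documentation recommends. A minimal sketch of what that looks like in Postgres SQL (the table and column names here are hypothetical, not Moltbook's actual schema):

```sql
-- Enable Row Level Security on a hypothetical "messages" table.
-- With RLS enabled and no policies, the anon key can read nothing.
alter table messages enable row level security;

-- Allow authenticated users to read only their own rows.
-- auth.uid() is Supabase's helper returning the caller's user ID.
create policy "read own messages"
  on messages for select
  using (auth.uid() = user_id);

-- Writes require a separate policy; absent one, inserts and
-- updates through the public API are rejected.
create policy "write own messages"
  on messages for insert
  with check (auth.uid() = user_id);
```

Without statements like these, the auto-generated REST endpoints fall back to whatever the underlying database role permits, which is how a "public" key ends up granting full read and write access.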

"Supabase is a popular open source Firebase alternative providing hosted PostgreSQL databases with REST APIs. It's become especially popular with vibe-coded applications due to its ease of setup," said Gal Nagli, Wiz's head of threat exposure.

"When properly configured with Row Level Security (RLS), the public API key is safe to expose - it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook's implementation, this critical line of defense was missing."

Vibe coding risks

The incident has drawn attention to "vibe coding," a development approach that relies heavily on AI coding tools and rapid iteration. Security teams and developers have warned that speed can increase the risk of deployment mistakes, particularly around infrastructure and access controls.

"As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security," said Ami Luttwak, Wiz cofounder.

Several security and software governance firms said the Moltbook case fits a pattern they expect to see more often as AI-generated code reaches production without formal review.

"This leads to another mandatory step: testing. Zero-trust principles should also be applied to Vibe coding. Vibe-coded solutions can miss basic security practices, and configuration or misconfiguration issues are often outside the scope of the code itself. I'm glad Wiz Security caught this before the damage spread further," said Lydia Zhang, president and co-founder of Ridge Security Technology.

Others pointed to the gap between generating functional code and deploying secure defaults. "The Moltbook incident shows what happens when people shipping production applications have no security training and are relying entirely on AI-generated code. The creator said publicly that he didn't write a single line of code. Current AI coding tools don't reason about security on the developer's behalf. They generate functional code, not secure code," said Michael Bell, founder and CEO of Suzu Labs.

Write access concern

Elevated write access increased the risk beyond passive data exposure. With the same key, an attacker could have altered database content, including posts and messages that AI agents might read and respond to.
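The mechanism is generic to Supabase's auto-generated REST API: the anon key harvested from client-side JavaScript is sent as both the `apikey` header and a bearer token, and without RLS those headers alone authorize reads and writes. A minimal sketch of the request shape (hypothetical project URL, key, and table name; the requests are constructed but deliberately never sent):

```python
import urllib.request

SUPABASE_URL = "https://example-project.supabase.co"  # hypothetical project
ANON_KEY = "anon-key-from-client-js"  # the "public" key found in page source

def rest_request(table, method="GET", body=None):
    """Build (but do not send) a request against Supabase's REST layer.

    The anon key travels as both the `apikey` header and a bearer token.
    Without RLS policies, these headers alone authorize reads AND writes.
    """
    return urllib.request.Request(
        f"{SUPABASE_URL}/rest/v1/{table}",
        data=body,
        method=method,
        headers={
            "apikey": ANON_KEY,
            "Authorization": f"Bearer {ANON_KEY}",
            "Content-Type": "application/json",
        },
    )

# Reading every row and injecting new content use the exact same credential:
read_all = rest_request("messages")
inject = rest_request("messages", "POST", b'{"text": "attacker content"}')
```

The second request is what makes the write-access finding notable: content written this way would be indistinguishable, to the agents reading the feed, from legitimate posts.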

Bell said this kind of access changes the threat model for services built around autonomous agents and automated interactions. "The write access vulnerability should concern anyone building AI agent infrastructure. Moltbook wasn't just leaking data. Anyone with the exposed API key could modify posts that AI agents were reading and responding to. That's prompt injection at ecosystem scale. You could manipulate the information environment that shapes how thousands of AI agents behave," he said.

Bell also questioned the meaning of platform activity metrics when bots outnumber humans and controls are limited. "The 88:1 agent-to-human ratio should make everyone skeptical of AI adoption metrics going forward. Moltbook claimed 1.5 million agents. The reality was 17,000 humans running bot armies. No rate limiting. No verification. The platform couldn't distinguish between an actual AI agent and a human with a script pretending to be one," he said.

Governance and review

Database governance specialists said the incident underscored the need to manage permission changes and security policies with the same discipline as application code changes. "Moltbook is a textbook example of what happens when you ship at AI speed without change control at the database layer. A single missing guardrail turned a 'public' Supabase key into full read and write access, exposing private agent conversations, user emails, and a massive pile of credentials. This is why Database Change Governance matters," said Ryan McCurdy, vice president of marketing at Liquibase.

Security engineers also pointed to the broader risk of relying on generated code without oversight. "Matt Schlicht's admission that he 'didn't write one line of code' isn't something to celebrate, given the fundamental nature of the security flaw. The database completely lacked Row Level Security (RLS) policies, allowing anyone to access it without authentication. This misconfiguration exposed the entire database structure and content, including tokens that granted read/write/edit access to non-authenticated users - a basic oversight with serious consequences," said Noelle Murata, senior security engineer at Xcape.

"'Vibe-coding,' or relying on AI to generate code, can produce functional results but often sacrifices best practices in architecture and security for speed and convenience. Without code review or highly specific prompting, AI-generated code prioritizes 'fast and easy' over 'resilient and secure.' This is analogous to why junior developers need oversight; the same principle applies to AI-generated code," Murata said.

Wiz and other commentators framed Moltbook as an example of how small configuration decisions can quickly create broad exposure when projects rely on managed backend services. Bell said teams deploying AI-generated applications should treat configuration as security-critical work. "AI development velocity and AI security maturity are on completely different curves. Teams are shipping production applications in days. Security practices haven't caught up. Until AI tools start generating secure defaults and flagging dangerous configurations automatically, humans (or hackers) need to be in the loop reviewing what gets deployed," he said.