I Built an App with Claude Code in 20 Minutes. It Had 14 Security Vulnerabilities

Imagine this: someone in your organization — no coding experience, no IT background — builds a fully working app during their lunch break. It connects to your internal systems, processes customer data, and goes live the same afternoon. No security review. No oversight. No one in IT even knows it exists.

This isn’t a hypothetical scenario. It’s already happening — and it has a name. “Vibe coding” is the fastest-growing trend in software development: you describe what you want in plain language, and an AI tool builds it for you. No programming knowledge required.

The results look incredible. But researchers at Georgia Tech scanned over 43,000 security advisories and found 74 confirmed cases where AI-generated code directly introduced serious vulnerabilities — including command injection, authentication bypass, and server-side request forgery. In March 2026 alone, 35 new cases were identified. That is more than in all of 2025 combined.

The apps work beautifully. They are also wide open.
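To make one of those flaw classes concrete, here is a minimal, hypothetical sketch in Python of what a command-injection vulnerability can look like in generated code. The route, names, and fix are illustrative assumptions, not code taken from the Georgia Tech advisories.

# Hypothetical sketch: the endpoint works in every demo, yet passes user
# input straight into a shell. Routes and names are invented for illustration.
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/ping")
def ping():
    host = request.args.get("host", "")
    # Vulnerable: the value of ?host= is interpolated into a shell command,
    # so a request like /ping?host=example.com%3Bid runs the attacker's
    # extra command as well.
    result = subprocess.run(f"ping -c 1 {host}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

@app.route("/ping-safe")
def ping_safe():
    host = request.args.get("host", "")
    # Safer: no shell is involved, and the host is passed as a single argument.
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)
    return result.stdout

Both endpoints return the same output for a normal request, which is exactly why a working demo says nothing about whether the code is safe.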

 

The Rise of Building Without Understanding

Vibe coding is exactly what it sounds like: you describe the tool you need, and the AI writes it. Tools like Claude Code, Cursor, and GitHub Copilot have made it possible for anyone — marketers, HR managers, finance teams — to create functional applications in hours instead of weeks.

The results are impressive. And the adoption is massive. People across every department are building internal tools, automations, and prototypes without ever involving the IT or security team.

And the problem is accelerating. Over the final seven months of 2025, the Georgia Tech team found roughly 18 confirmed vulnerabilities introduced by AI code. In the first three months of 2026, they found 56. The pattern is clear — and researchers warn that what they can detect is only a fraction of the real picture.

 

It Works — So It Must Be Safe

Here is where the human factor comes in.

When someone asks an AI to build an app and gets a polished, working product in minutes, the natural reaction is excitement. Not suspicion. The interface looks professional. The features work as expected. Everything feels right.

This is a well-documented psychological pattern called automation bias. When a machine produces something impressive, we instinctively assume it is also correct and safe. The faster and shinier the result, the less we question what is happening underneath.

The problem is not the people — it’s the gap. Vibe coding gives everyone the power to build, but no one is expected to be a security expert on top of their actual job. A marketing manager building a lead tracker or an HR team automating onboarding forms shouldn’t need to understand authentication layers or encryption protocols. But right now, nothing in the process flags what’s missing. The app looks like a finished product — so everyone treats it like one.

The same gap affects technical people too. When the AI handles the heavy lifting, even experienced developers stop verifying the details they would normally catch themselves. The responsibility shifts to a tool — but the tool was never designed to carry it.
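As a purely illustrative sketch of that gap, assume a generated internal endpoint like the one below: it serves customer records, behaves flawlessly in every demo, and never hints that an authentication check is missing. The route and data are invented for this example.

# Illustrative only: a made-up internal endpoint with no authentication at all.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real customer database.
CUSTOMERS = [
    {"id": 1, "name": "Alice Example", "email": "alice@example.com"},
    {"id": 2, "name": "Bob Example", "email": "bob@example.com"},
]

@app.route("/api/customers")
def list_customers():
    # Looks finished and returns the right data, but answers anyone who can
    # reach it: no login, token, or permission check was ever generated.
    return jsonify(CUSTOMERS)

The missing piece is a single check before the data is returned — but someone has to know the check belongs there.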

 

Real Damage, Not Hypothetical Risk

The consequences are already visible.

One vibe-coded social platform was found with hardcoded database credentials in its public code, granting anyone full access to 1.5 million API tokens, 35,000 email addresses, and private messages — including plaintext keys to third-party AI services. The database had no access restrictions at all. It was breached within days of launch.

This wasn’t a sophisticated attack. No one hacked in. The front door was simply never locked — and no one involved in building the platform knew to check.

That’s the pattern. Vibe-coded apps that handle real data, connect to real systems, and serve real users — built and launched without anyone in the process having the security knowledge to spot what’s missing.
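The platform's actual code has not been published, so the snippet below is only a hedged sketch of the class of mistake described above: a database credential committed straight into the source, next to the unglamorous alternative of keeping it out of the code entirely. Every value is invented.

# Sketch of the mistake class only; all values here are invented.
import os

# What a hardcoded credential looks like. Anyone who can read the source,
# whether in a public repository or a bundled script, now owns the database:
#   DATABASE_URL = "postgresql://admin:SuperSecret123@db.internal.example:5432/prod"

# Safer baseline: the secret lives in the deployment environment,
# never in the code or in the repository history.
DATABASE_URL = os.environ["DATABASE_URL"]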

 

Security Cannot Wait Until After the Build

When an entire application can be created in a single afternoon, the traditional model of building first and reviewing later simply cannot keep up. By the time a security team hears about a vibe-coded tool, it may already be live, processing data, and connected to production systems.

Organizations need to shift their approach. Security awareness must extend beyond phishing emails and password hygiene. Employees building with AI need to understand that a working app is not the same as a safe app. The instinct to trust polished output must be met with the habit of questioning what is underneath.

None of this means stopping people from using AI to build things. The technology is powerful and it is not going away.

But if your organization doesn’t know who is building what, with which tools, and what data those tools can access — then you don’t have a security gap. You have a blind spot. And the next breach won’t come from a sophisticated attacker. It will come from someone who built something incredible over lunch and never thought to ask what they left unlocked.

This article was written by Cywareness, a company specializing in cybersecurity awareness.

As part of its mission, Cywareness continues to monitor emerging trends, analyze real-world attacks, and share practical insights to help organizations stay ahead in today’s evolving threat landscape.