Why AI Coding Tools Are Your Security Team’s Worst Nightmare

AI coding tools are now the norm for most development teams, but security measures have failed to keep up. Here’s why that could be a significant problem and what to do about it.

Written by Marc Frankel
Published on Oct. 15, 2025
Reviewed by Brian Nordli | Oct. 9, 2025
Summary: AI coding tools like GitHub Copilot boost productivity but pose major security risks. Experts warn of phantom dependencies, vulnerable code, and supply chain exposure. Without AI governance and validation, organizations face unseen threats and mounting technical debt.

GitHub Copilot has exploded to 1.8 million paid subscribers. Stack Overflow's latest survey reveals that 84 percent of respondents are currently using or plan to use AI tools in their development process, with over half of developers utilizing them daily. However, beneath this productivity revolution, a security crisis is brewing that most organizations have yet to address.

The disconnect between AI adoption and security preparedness has reached a breaking point. Under what other circumstances would you allow a minimally vetted tool to touch your code? And yet, this is the reality for most organizations using AI coding tools. Any company using AI-based coding tools without governance in place for provenance, contributors, support and licensing is exposing itself to considerable risk.

4 Tips to Improve AI Coding Security

  1. Establish clear policies
  2. Implement AI-specific inventories
  3. Create processes for validation
  4. Balance security with productivity

This isn’t theoretical. Real enterprises are discovering hundreds of previously hidden AI-generated dependencies in their production systems. Security teams are finding phantom packages that don't exist in any vulnerability database. And legal departments are waking up to the reality that some AI-generated code might not even belong to them.

 

Security Assumptions That No Longer Hold for AI Coding

Traditional software development rested on fundamental assumptions that AI coding assistants have shattered overnight. Code reviews assumed human comprehension. Dependency management assumed traceable packages. License compliance assumed clear ownership. AI complicates every one of these assumptions.

Consider what happens when a developer accepts an AI suggestion for a utility function. The AI might recommend a library that seems perfect — it compiles, passes tests and solves the immediate problem. But that library could be outdated, abandoned, or worse, a hallucinated package name that doesn't actually exist. When developers install these phantom dependencies to make the code work, they create security blind spots that no scanning tool can catch.
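
None of this means teams are helpless. As a first line of defense, a team can at least confirm that an AI-suggested package name actually exists on the public registry before anyone installs it. The sketch below is a minimal illustration for Python packages, assuming the public PyPI JSON API; the suggested package names are hypothetical placeholders, not real recommendations.

    # Minimal sketch: check whether AI-suggested package names exist on PyPI
    # before anyone runs "pip install". Assumes the public PyPI JSON API.
    import urllib.error
    import urllib.request

    def exists_on_pypi(package_name: str) -> bool:
        """Return True if the package has a project page on PyPI."""
        url = f"https://pypi.org/pypi/{package_name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False  # a 404 here usually means the package does not exist

    # Hypothetical AI-suggested dependencies; the second name is an invented example.
    suggested = ["requests", "fast-json-helpers-2025"]
    for name in suggested:
        if exists_on_pypi(name):
            print(f"{name}: found on PyPI")
        else:
            print(f"{name}: not found; treat as a possible phantom dependency")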

The behavior shift is profound. Developers who would carefully vet a Stack Overflow answer now accept AI suggestions with minimal scrutiny. There's a real temptation to just “trust the AI.” Anyone who has used general-purpose AI tools, such as ChatGPT, for non-coding tasks knows their potential for hallucinations, inaccuracies and mistakes. Yet many developers hand a coding task to AI and assume the output is correct as long as it appears to run properly.

More on Software Development: How to Reshape the Developer Hiring Process for the AI Era

 

3 Categories of Risk Your Security Tools Can’t See

1. The Phantom Dependency Problem

AI coding assistants trained on millions of code repositories sometimes suggest packages that either don't exist or reference deprecated libraries with known vulnerabilities. Unlike traditional open-source risks, where you can at least scan for known vulnerabilities, these AI-suggested components exist in a risk vacuum.

A recent industry investigation found that AI coding assistants routinely suggest code that incorporates hallucinated packages, or software packages that do not exist, creating supply chain risks. Researchers observed that up to 21 percent of package suggestions from open-source AI models and around 5 percent from commercial models referenced non-existent dependencies, which attackers can weaponize by publishing malicious packages under those names. In some cases, developers then created local implementations to make the code compile, essentially building their own versions of what the AI had hallucinated. These custom implementations bypassed all security reviews because they weren’t recognized as external dependencies.

AI tools often recommend libraries or APIs that are outdated, insecure, or that a human simply wouldn’t recommend. There’s also the risk of unintentional vulnerabilities, such as hardcoded secrets or insecure defaults, that an AI coding tool may introduce but that an experienced developer would generally know to avoid.

2. The Vulnerable Code Generation Problem

AI coding assistants don't just suggest existing libraries; they also generate new code that can introduce critical vulnerabilities. AI models trained on millions of code repositories often replicate the same security flaws found in their training data.

AI-generated code often contains SQL injection vulnerabilities, hardcoded secrets, insecure authentication patterns, and outdated security functions. A recent analysis found that AI coding assistants suggest vulnerable code patterns 40 percent more often than secure alternatives, simply because vulnerable code appears more frequently in training data sets.
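
To make that concrete, here is a hedged illustration of the most common of these flaws. The first function builds a query by string interpolation, the pattern an assistant is statistically likely to reproduce; the second is the parameterized version a reviewer should insist on. The users table and lookup functions are hypothetical examples, not code from any particular tool.

    # Illustrative sketch of a common AI-suggested flaw and its fix.
    # The "users" table and lookup functions are hypothetical.
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable pattern: user input is concatenated into the SQL string,
        # so an input like "x' OR '1'='1" changes the query's meaning.
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized query: the driver treats the input strictly as data.
        query = "SELECT id, email FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
        malicious = "nobody' OR '1'='1"
        print("unsafe:", find_user_unsafe(conn, malicious))  # returns every row
        print("safe:  ", find_user_safe(conn, malicious))    # returns nothing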

Even worse, developers often trust AI-generated code more than human-written code, assuming the AI knows better. This creates a false sense of security, as dangerous vulnerabilities can slip through code reviews because the code appears professionally written and functional.

3. The Geopolitical Supply Chain Problem

Imagine being a principal defense contractor and discovering that your developers have been using AI coding assistants built on models trained by contributors from OFAC-sanctioned countries. The generated code had been integrated into classified systems for more than 18 months before the discovery, requiring expensive remediation and potentially triggering security reviews across multiple programs.

 

Why Your Current Security Approach Is Failing

Traditional application security tools are built on the assumption that code has clear provenance. Static analysis tools scan known patterns. Software composition analysis identifies documented packages. But AI-generated code operates in an entirely different dimension.

Security teams accustomed to scanning for CVEs in the National Vulnerability Database are discovering that AI-introduced components rarely show up there. While there are some nascent attempts to inventory AI risk, these components are not listed in conventional vulnerability databases. They’re novel combinations, obscure packages the AI remembered from its training data, or entirely hallucinated components that developers implement locally to make things work.

The review process itself is compromised. Code reviews, linters and traditional quality assurance all assume human comprehension. AI-generated code can appear correct but hide logical flaws. Following the logic of an AI coding tool and understanding all of its references and syntax can be non-trivial, especially when AI generates hundreds of lines of seemingly functional code.

 

A Practical Framework for AI Coding Governance

The solution isn’t to ban AI coding tools — that ship has sailed. As with the adoption of any other technology or capability, organizations need to establish a governance process and policy. Here's what I recommend organizations implement:

1. Establish Clear Policies 

Which countries are acceptable from a model-contributor perspective? Which AI companies do we, as an organization, trust? Which AI licenses can we legally use? What is the QA process for ensuring AI-developed code is well understood and human-reviewed? Without these basics, you're trusting just about anyone or anything to touch your codebases.
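
Policies only matter if they can be enforced. One lightweight option, sketched below, is to encode the answers to those questions as a machine-checkable allowlist that a CI gate can evaluate before AI-assisted changes are merged. The vendors, countries and licenses shown are placeholder assumptions, not recommendations.

    # Hypothetical policy-as-code sketch; vendor, country and license values are
    # placeholders standing in for an organization's actual decisions.
    APPROVED_AI_VENDORS = {"example-vendor-a", "example-vendor-b"}
    APPROVED_CONTRIBUTOR_COUNTRIES = {"US", "CA", "GB"}
    APPROVED_OUTPUT_LICENSES = {"MIT", "Apache-2.0"}

    def tool_is_compliant(vendor: str, contributor_countries: set[str], license_name: str) -> bool:
        """Return True only if the tool satisfies every policy dimension."""
        return (
            vendor in APPROVED_AI_VENDORS
            and contributor_countries <= APPROVED_CONTRIBUTOR_COUNTRIES
            and license_name in APPROVED_OUTPUT_LICENSES
        )

    # Example: a CI gate could refuse to merge AI-assisted changes from a tool
    # whose declared metadata fails this check.
    print(tool_is_compliant("example-vendor-a", {"US", "CA"}, "MIT"))   # True
    print(tool_is_compliant("unknown-vendor", {"US"}, "GPL-3.0"))       # False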

2. Implement AI-Specific Inventories 

There are growing calls for AI dependency inventories (such as AI Bills of Materials, or AIBOMs) to articulate AI dependencies and understand the provenance of models and datasets. Without such an inventory, security and engineering teams are effectively operating blind, and an organization is only one incident away from catastrophe if an AI coding tool is found to have been tampered with.
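
There is no single settled AIBOM standard yet, so the record below is only a sketch of the kind of information worth capturing, expressed as a Python dictionary serialized to JSON. Every field name and value is an illustrative assumption rather than a prescribed format.

    # Hypothetical sketch of a minimal AI dependency record (not a standard format).
    # Field names and values are illustrative assumptions.
    import json

    aibom_entry = {
        "component": "payments-service",
        "ai_assistant": {
            "tool": "example-coding-assistant",   # placeholder tool name
            "model": "example-model-v2",          # placeholder model identifier
            "provider_country": "US",
            "license": "proprietary",
        },
        "generated_artifacts": [
            {
                "path": "src/payments/retry.py",
                "reviewed_by": "j.doe",
                "review_date": "2025-10-01",
                "dependencies_introduced": ["tenacity"],
            }
        ],
    }

    print(json.dumps(aibom_entry, indent=2))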

3. Create Processes for Validation 

Establish processes to validate that your policies are followed and consistently monitored. This includes automated scanning that looks explicitly for AI-generated patterns, phantom dependencies and license conflicts.
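
As a small, hedged example of what one such check might look like, the sketch below scans a repository for hardcoded-secret patterns of the kind discussed earlier. The regular expressions are deliberately simple placeholders; a production pipeline would pair a dedicated secret scanner with dependency and license checks.

    # Minimal sketch of an automated check for hardcoded secrets in a repo.
    # The patterns are simple placeholders; real pipelines use dedicated scanners.
    import re
    from pathlib import Path

    SECRET_PATTERNS = [
        re.compile(r"""(password|passwd|secret|api[_-]?key)\s*=\s*['"][^'"]+['"]""", re.I),
        re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    ]

    def scan_repo(root: str) -> list[tuple[str, int, str]]:
        """Return (file, line number, matched text) for each suspicious line."""
        findings = []
        for path in Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                for pattern in SECRET_PATTERNS:
                    match = pattern.search(line)
                    if match:
                        findings.append((str(path), lineno, match.group(0)))
        return findings

    if __name__ == "__main__":
        for file, lineno, text in scan_repo("."):
            print(f"{file}:{lineno}: possible hardcoded secret: {text}")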

4. Balance Security with Productivity 

Once you have your security controls in place, enjoy the fruits of AI-assisted development and the increase in engineering velocity. The goal isn't to eliminate AI coding tools but to use them responsibly.

 

The Problem Is Only Getting Bigger

The best time to inventory your AI dependencies was three years ago. The second-best time is now.

Government agencies are already demanding AI Bill of Materials (AIBOM) inventories from defense contractors. Boards are demanding AI governance frameworks from security teams. The EU AI Act requires transparency in high-risk AI systems. The regulatory window for proactive preparation is closing rapidly.

Organizations that wait will inherit a security nightmare they may never fully untangle. Imagine trying to audit three years of AI-assisted development without any tracking of which code was AI-generated, which models were used, or what vulnerabilities were introduced. The technical debt isn't just in the code — it's in the complete absence of governance around how that code came to exist.

The competitive pressure to maintain AI-enhanced productivity while managing security risks will separate market leaders from those scrambling to respond to the first major AI coding security incident. And that incident is coming — the only question is whether your organization will be the cautionary tale or the case study in preparedness.

More on Cybersecurity: How to Prevent Piggybacking Attacks on Your Network

 

The Path Forward for AI Coding Tool Security

Engineering teams will continue to adopt AI coding tools at an exponential rate. The productivity gains — faster prototyping, reduced manual work, and increased engineering velocity — are too significant to ignore. But the organizations that thrive will be those that recognize the fundamental shift these tools represent and adapt their security posture accordingly.


Establishing processes now is the only way to gain control over these risks. The tools exist. The frameworks are emerging. The choice isn't between productivity and security — it's between managed risk and blind faith in AI that was never designed to be trustworthy.


The question is whether your organization will implement proper governance before discovering the hard way what happens when AI-generated code goes wrong. Those who act now will have a competitive advantage.
