How to Tame ‘Unleashed’ AI-Assisted Software Development

The rise of AI coding assistants in software development has introduced new vulnerabilities. Our expert offers advice for stemming the tide.

Written by Matias Madou
Published on May 27, 2025

By now, artificial intelligence (AI) advancements command a seemingly ubiquitous presence in the modern professional’s toolkit. And software development teams prove no exception: Eighty-six percent of companies are currently incorporating AI into the software development life cycle (SDLC), and 93 percent plan to boost their AI investments further.

These deployments are creating an increasingly uncontrolled and even unleashed environment, however, without internal policies or external regulations and standards to guide teams on the secure use of AI in coding. Because these teams’ jobs inherently involve relentless productivity and constant deadlines, they’re understandably inclined to turn to third-party AI tools, sometimes without vetting them or getting supervisor approval to use them. This dynamic gives rise to shadow AI.

This level of unmanaged AI decreases overall enterprise visibility of development processes while increasing the potential for new vulnerabilities. For example, more than five million users have turned to China’s DeepSeek. Although this number doesn’t equate to widespread enterprise use among development teams, free, easy-to-access tools like DeepSeek may well entice developers seeking enhanced productivity and rapid feature delivery. But DeepSeek further clouds the risk management picture: it has shown alarmingly high failure rates in security tests covering malware generation, jailbreaking, prompt injection attacks, hallucinations (inaccurate or fabricated information), supply chain risks and toxicity.

4 Steps for Mitigating the Risks of AI Coding Assistants

  • Get ahead of regulatory pressures.
  • Commit to risk management.
  • Practice security-focused governance.
  • Incorporate benchmarking and results-tracking.

The Problems of AI in Software Development

According to BaxBench, a coding benchmark established to evaluate large language models (LLMs) for accuracy and security, no current LLM is capable of generating deployment-ready code. What’s more, research findings indicate that frequent use of AI tools negatively impacts critical thinking abilities, leading to a churn-it-out “factory worker” mentality rather than a measured, reflective approach that pays sufficient heed to protecting a product’s attack surface. Instead of simply opting for whichever AI assistant does the job fastest and cheapest, a critically thinking team proactively avoids the consequences of blindly trusting AI output in coding and task management.

Unfortunately, such thinking appears to be the exception rather than the rule at too many organizations. At the same time, a growing number of existing challenges are increasing the risk equation of AI in software development:

Outdated Models

Today’s enterprise security models were not designed to handle the speed and complexity of AI, and they can’t keep pace with its capacity to introduce harm.

Knowledge Gaps

Organizations are not equipping developers with the skills required to apply security best practices to their coding, including how to vet products assisted by LLMs and other AI technologies.

Shadow AI

When developers adopt unvetted tools without oversight, the potential for harm is clear. In response, security leaders must lead a transition from the uncertainty of shadow AI to a known, controlled Bring Your Own AI (BYOAI) environment.

Lack of Regulation-Based Controls

Without regulations or policies directing the appropriate use of AI, developers will expand their usage of a wide range of assistants, leading to more backdoor exploits and vulnerabilities.

 

How to Mitigate the Security Risks of AI Coding Assistants

So, how should chief information security officers (CISOs) and their teams respond? By implementing the following strategic roadmap.

Get Ahead of Regulatory Pressures

Don’t wait for some governmental body to issue “rules.” Collaborate with developer and security team members now to apply a defensive approach to LLMs and other AI solutions so that organizations maximize the benefits of these assistants while still ensuring optimal protection.

Commit to Risk Management

Everything begins and ends with the software developers themselves. So, it’s imperative to establish developer risk management as a required, central component and to invest in tools and ongoing, dynamic learning pathways that enhance safe coding, critical thinking and adversarial awareness. In practice, this allows teams to measure, manage and mitigate application security risk from the start of the SDLC.

Practice Security-Focused Governance

While every organization would like to believe that its developers operate with a security-first mindset, that isn’t always the case. To help, implement proactive, organization-wide, security-focused governance by setting policies and programmatically enforcing them. Once program policies and goals are set, teams need to shift focus toward upskilling, empowering developers with learning content relevant to the languages and frameworks they use daily.
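As a minimal sketch of what programmatic enforcement could look like (the article doesn’t prescribe a specific mechanism, and the tool names and allowlist here are invented for illustration), a team might run a CI check that fails the build when a repository declares an AI coding tool outside an approved list:

```python
# Hypothetical CI policy gate: flag any declared AI tooling that falls
# outside an organization-approved allowlist. Tool names are examples only.
APPROVED_AI_TOOLS = {"github-copilot", "internal-llm"}

def check_ai_tool_policy(declared_tools):
    """Return the set of declared tools that violate the allowlist policy."""
    return set(declared_tools) - APPROVED_AI_TOOLS

violations = check_ai_tool_policy(["github-copilot", "deepseek"])
if violations:
    # In a real pipeline this would exit nonzero and block the merge.
    print(f"Policy violation: unapproved AI tools {sorted(violations)}")
```

In practice, the declared-tools list could come from a repository manifest or from scanning editor configuration files, but either way the policy lives in code, not in a wiki page developers may never read.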

Developers are not to blame for their lack of security knowledge; to date, they have operated in an environment that worked against them sharing responsibility for the security outcomes within their control. But modern software development, including the introduction of AI coding assistants, requires an updated security program in which developer security proficiency and skills measurement are part of a solid prevention strategy.

Incorporate Benchmarking and Results Tracking

Benchmarking further helps cultivate a security-first mindset. It sets standards for success so that protecting code from start to finish becomes second nature when using AI. Then, to verify that risk management upskilling programs and tools are getting results, CISOs should track measurable outcomes: the security skill levels team members achieve and the vulnerabilities they reduce. For instance, in the banking and financial services sector, this could mean evaluating how well your development team adheres to industry standards (PCI DSS, GDPR, etc.) compared to other organizations in the same vertical. This visibility allows organizations to reset educational and training priorities based on gaps and areas that require extra attention.

It is far more beneficial for an enterprise to design developer proficiency programs with data-backed insights and benchmarking, allowing much greater precision in addressing both the vulnerabilities common to the projects at hand and recurring gaps in knowledge and applied skill. Without this, it becomes an uphill battle to identify issues in individual developers and the wider team, let alone remedy them with optimal learning pathways and support.


Build Security Into the Workflow

Deadline pressures should not automatically translate into an out-of-control, even dangerous, environment. CISOs must collaborate with development team leaders to underscore the criticality of safeguarded software and continuously encourage developers to focus on enhanced risk management. In turn, teams discover that protection does not have to come at the cost of productivity; it can even improve it by reducing the need for time-consuming rework and remediation.

From there, organizations can codify lessons learned and best practices in the interest of establishing industry-wide standards. Thus, they will create a universal blueprint for the optimal and secure use of AI, in which teams harness the technology within acceptable boundaries instead of unleashing it without limits.
