AI code is creating security bottlenecks faster than it’s solving them

Code assistants are hardly the “10x engineers” we’ve been expecting. A recent GitLab survey of DevOps practitioners found that while more than one-third of code is now written by AI, practitioners ranked quality control and security vulnerabilities introduced by AI as the top adoption challenges.
As organizations deploy AI coding tools at scale, these problems are overwhelming their security teams. AI promised to accelerate development, but we’re creating security review bottlenecks faster than AI can improve coding efficiency.
Security engineers who previously reviewed 100 lines of code per hour can now face 100,000 lines because AI helped generate them. Meanwhile, attackers are already using autonomous agents to rapidly find flaws in existing systems. As risks proliferate and security backlogs grow without bound, our capacity to defend remains constrained.

This challenge isn’t entirely new. Security processes that depend on human toil could keep pace when code volumes were manageable. However, AI-scale volume and complexity are making the work of product security exponentially harder. If we don’t address these scaling challenges quickly, the window to secure AI-driven development will close.
Here are the two compounding failures driving these bottlenecks — and how to avoid them.
The first failure echoes the “shift left” movement, which aimed to relieve security bottlenecks by pushing security responsibility to developers earlier in the software development lifecycle. Adding security testing to development workflows sounds good in theory, but forcing developers to chase security checks that frequently flag false positives adds hours to their workday without accounting for their incentives. Developers find workarounds because they need to ship features on a deadline.
For the shift left approach, we failed to consider the entire SDLC, resulting in unintended downstream effects. We’re making the same mistake with AI code assistants.
These assistants optimize for code generation while leaving the review process unchanged. The solution isn’t adding more people or more tools in isolation. You need to think holistically about your entire pipeline.
The organizations that avoid this trap map their value streams before adding more AI tools. They also document processes that rely on tacit, institutional knowledge, because undocumented work complicates how teams define and measure the value AI delivers: if AI makes an undocumented process more efficient, there is no baseline for measuring or proving that value.
Leaders should also implement scalable review methodologies that combine AI with practical human oversight, and establish prioritization frameworks based on measurable risk. For instance, code that touches sensitive customer data or production databases requires a far more intensive review than a feature that customizes an application’s theme.
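To make that prioritization concrete, here is a minimal sketch, with hypothetical names and a deliberately simplified risk model, of how a team might route changes to different review depths based on what they touch. It illustrates the idea, not a prescribed implementation.

```python
# Minimal sketch (hypothetical names) of risk-based review triage: changes that
# touch sensitive data or production infrastructure get intensive human review,
# while low-risk changes get lighter, largely automated checks.

from dataclasses import dataclass

@dataclass
class CodeChange:
    paths: list[str]        # files touched by the change
    touches_pii: bool       # flagged by data-classification tooling (assumed to exist)
    touches_prod_db: bool   # e.g. a migration or query against production
    lines_changed: int

def review_tier(change: CodeChange) -> str:
    """Return the review depth a change should receive."""
    if change.touches_pii or change.touches_prod_db:
        return "intensive-human-review"     # senior security engineer, threat model update
    if change.lines_changed > 500 or any(p.startswith("auth/") for p in change.paths):
        return "standard-human-review"      # normal peer review plus a security spot check
    return "automated-plus-sampling"        # AI/static analysis; humans sample a subset

# Example: a theme-customization change vs. a billing migration
print(review_tier(CodeChange(["ui/theme.css"], False, False, 40)))
print(review_tier(CodeChange(["db/migrations/billing.sql"], True, True, 120)))
```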
The second failure: traditional security frameworks assume predictable human behavior. AI agents don’t follow those rules, and the result is an entirely new class of risk.
The complexity multiplies when agents interact with other agents across organizational boundaries. When your internal agent receives instructions from a third-party agent that itself received instructions from another external system, your security model must account for potentially malicious requests operating outside your direct observation.
Avoiding these issues requires developing security controls to limit permissions and monitor agent behavior. Emerging approaches, like establishing composite identities for AI systems, can help tie AI activity to human accountability by tracking which agents performed specific actions and who authorized them.
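As a rough illustration of that idea, the sketch below (all names and fields are hypothetical) shows what a composite-identity audit record might look like: each agent action carries the acting agent, its delegation chain, and the accountable human, and is checked against a scoped permission list before being logged.

```python
# Minimal sketch (all names hypothetical) of a "composite identity" audit record
# that ties agent activity back to a human and to a least-privilege permission set.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CompositeIdentity:
    human_principal: str         # the accountable person, e.g. "jdoe@example.com"
    agent_id: str                # the agent acting on their behalf
    delegation_chain: list[str]  # intermediate agents/systems that relayed the request

@dataclass
class AgentActionRecord:
    identity: CompositeIdentity
    action: str                  # e.g. "merge_request.comment"
    resource: str                # what was touched
    allowed: bool                # result of the permission check
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

ALLOWED_ACTIONS = {              # scoped permissions per agent, kept deliberately narrow
    "code-review-bot": {"merge_request.comment", "pipeline.read"},
}

def authorize(identity: CompositeIdentity, action: str, resource: str) -> AgentActionRecord:
    """Check an agent action against its scoped permissions and record the outcome."""
    allowed = action in ALLOWED_ACTIONS.get(identity.agent_id, set())
    return AgentActionRecord(identity, action, resource, allowed)

# An approval request relayed through a third-party agent falls outside the bot's
# scope, so it is denied, but still logged with the human who authorized the agent.
record = authorize(
    CompositeIdentity("jdoe@example.com", "code-review-bot", ["third-party-planner"]),
    "merge_request.approve",
    "project/payments!482",
)
print(record.allowed, record.identity.human_principal)
```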
In conjunction, fostering system design fluency within security teams makes it easier to assess how a new AI implementation may affect existing security boundaries. Many security engineers today struggle to articulate how the backend of an LLM actually works, yet understanding how an AI system is designed is fundamental to understanding its security risks. This doesn’t require deep engineering expertise in every component, just a working understanding of how the pieces fit together to achieve outcomes, much as security professionals understand how web applications work.
Most organizations will spend the next two years building AI capabilities on systems they know have flaws, because development cannot wait until everything is fixed. That’s the right choice. No single solution will secure AI-driven development for every organization. The key is acknowledging risk and managing it strategically while working toward doing things “the right way.”
Security teams also can’t solve these failures alone. Recent research from DX shows that while 91% of developers now use AI tools and report saving three- to four hours per week, organizational dysfunction (meetings, interruptions, slow code reviews, and CI wait times) is costing teams more time than AI saves. Quality outcomes also vary among organizations, with some seeing improved change failure rates and faster delivery, while others are drowning in technical debt.
The differentiator isn’t the AI tools themselves, but the underlying engineering practices and culture. As continuous delivery expert Bryan Finster observes, “AI is an amplifier. If your delivery system is healthy, AI makes it better. If it’s broken, AI makes it worse.”
These failures are rooted in upstream problems that AI now exposes at scale. Security reviews sit at the end of this chain, inheriting every weakness that came before.
Security teams need to become advocates for the engineering practices that enable secure AI-driven development, including documented processes, a strong testing culture, and continuous delivery principles that embed security throughout software delivery. The real constraint is often the quality of what’s reaching you in the first place.
The organizations that will succeed are those that strive to address these problems now, before AI-generated code volume makes them impossible to fix.
Julie Davila, VP Product Security, GitLab.

