The average cost to an organisation of API-related security incidents is pegged at $700k per year, according to Akamai.
In its latest API Security Impact Study for 2026 [email wall], the company claims the issue is among the top three concerns cited by those it surveyed (1,840 security leaders in 10 countries). That’s perhaps unsurprising, given enterprises’ enthusiasm for AI agents that, via intermediary deterministic software, connect multiple code and data instances. The oversight these new autonomous systems demand means that teams following breadcrumb trails while investigating rogue AI activity will discover potential holes in their API portfolios. No doubt they unearth a whole host of other issues along the way, too, which Akamai categorises as zombie, rogue, and shadow APIs.
Writing on the Akamai blog, Barney Beal says, “This isn’t just a technical glitch; it’s a systemic governance failure.” Whether or not there’s “a rush to take advantage of new AI advancements, businesses are inadvertently creating a sprawling, unmapped attack surface”, as Beal puts it, it’s undeniable that many organisations have no full oversight of every element of every API in every piece of software.
The issue is the latest manifestation of an older problem: development teams – and now, those behind agentic automation rollouts – can run ahead of those tasked with cybersecurity. It’s a fallacy to suggest that security experts can somehow absorb every line of code produced across the business and vet it from a security standpoint. Equally untrue is the idea that software developers can dual-track their careers and become the business’s overnight cybersecurity experts.
With the twin tracks of wishful thinking (developers must become cyber experts, cyber experts must become developers) proving to be just that, security teams have used WAFs (web application firewalls) to ensure at least some form of perimeter protection for APIs. But perhaps not in the way Beal posits: “Organizations treated API security as a perfunctory compliance requirement on a WAF requirement list.”
According to the survey, the proportion of organisations that know which of their APIs could return sensitive data has fallen to 23%. Thousands of new calls to the average of 5,900 APIs in the enterprise (per Akamai) stem from agentic AI implementations, and that framing invites readers to equate agentic AI instances with bad actors. Perhaps of more concern should be new API endpoints being created quickly, and with little oversight, by developers working to KPIs that reward a more-code-more-quickly mentality.
The Akamai line is that security testing specific to APIs needs to be part of the CI/CD process, and it has a tool to automate that, one capable of helping prevent common Broken Object Level Authorization (BOLA) issues, for example.
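To illustrate the class of flaw at stake, here is a minimal, hypothetical sketch of a BOLA check of the kind a CI/CD test suite might enforce. The endpoint, data, and function names are illustrative assumptions, not Akamai’s tooling: the point is simply that an object should only be returned to the identity that owns it.

```python
# Hypothetical sketch of object-level authorization (the control whose
# absence is known as BOLA). Data and names are illustrative only.

ORDERS = {
    "order-1": {"owner": "alice", "total": 42},
    "order-2": {"owner": "bob", "total": 7},
}

def get_order(order_id: str, requesting_user: str) -> dict:
    """Return an order only if the requester owns it.

    A BOLA-vulnerable endpoint would skip the ownership check and
    return any order whose ID the caller happens to guess.
    """
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError(order_id)
    if order["owner"] != requesting_user:
        raise PermissionError("requester does not own this object")
    return order

def test_no_cross_user_access():
    # Owner access succeeds.
    assert get_order("order-1", "alice")["total"] == 42
    # Cross-user access must be refused, not silently served.
    try:
        get_order("order-2", "alice")
        raise AssertionError("BOLA: cross-user access was allowed")
    except PermissionError:
        pass  # expected: authorization enforced
```

Wired into a CI pipeline, a test like this goes beyond functional correctness (does the endpoint return data?) to assert a security property (does it return data only to the right caller?), which is the kind of API-specific testing the article describes.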
There’s some pressure on business units to get on board with automation and the agentic narrative, and departments are keen to track down why their agentic workflows may not be working at scale.
Should cybersecurity teams ensure that even the most laxly vibe-coded deterministic LLM wrappers can work, most of the time, with APIs (security isn’t a roadblock to the inexorable march of agentic AI)? Or, by bolstering API security, is cybersecurity preventing the rest of the business from progressing down its automation journey (the more forgiving the security posture, the better the business outcomes)?
Beal says, “When the need for speed in AI deployment overshadows security, DevSecOps teams see the cracks first.” True. The blog post calls for API testing that goes beyond the purely functional, and we can infer that automated testing embedded in CI/CD pipelines is part of the answer.
Beal says the failure of API security is systemic. It’s undeniably a systemic issue, but perhaps not one that stems from DevOps or DevSecOps operations. Instead, at its heart are decisions to get on board with agentic workflows that too often come without any deep knowledge of how the nuts and bolts of the enterprise’s digital systems work. Marketing messages from agentic AI vendors promise overnight efficiency gains from deploying a few robots, but doing so safely and reliably requires a much longer timescale than is currently fashionable.
(Image source: Pixabay, under licence.)
You can catch Akamai on the show floor, stand 108, at the Edge Computing Expo, part of TechEx North America, May 18-19, at the San Jose McEnery Convention Center, CA.
Want to dive deeper into the tools and frameworks shaping modern development? Check out the AI & Big Data Expo, taking place in Amsterdam, California, and London. Explore cutting-edge sessions on machine learning, data pipelines, and next-gen AI applications. The event is part of TechEx and co-located with other leading technology events. Click here for more information.
DeveloperTech News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


