
Generative AI Fueling Rapid Rise in API Vulnerabilities: Wallarm


APIs, the software interfaces that play a vital role in modern business by allowing different applications to communicate, are now the most critical attack surface, and much of that shift is being driven by the rapid adoption of generative AI, according to API security vendor Wallarm.

Wallarm researchers in 2024 tracked 439 AI-related vulnerabilities – a 1,025% jump year-over-year – and 99% of those were tied to APIs, the San Francisco-based company wrote in a report released Wednesday. The CVEs included injection flaws, misconfigurations, and new memory corruption vulnerabilities tied to AI’s reliance on high-performance binary formats, they wrote.

The annual report covers both the technical and strategic aspects of API security, which matter to CISOs and CIOs alike. This is where MSSPs and MSPs can play an important role, according to Wallarm CEO Ivan Novikov.

“In many ways, the biggest cybersecurity change that comes with the massive adoption of APIs is shifting from securing assets to securing connections,” Novikov told MSSP Alert. “APIs are the connective tissue between enterprise services and applications. While a basic advantage of using an MSSP is being able to tap into their pool of security skills, MSSPs also tend to understand the connections between things really well. MSSPs are well versed in connecting disparate security tools and security data. That experience lends itself well to understanding APIs and API security.”

Aggressive Attackers

In a report in December, Wallarm illustrated how aggressively hackers looking to gain initial access to systems and software, or to disrupt services, are targeting APIs. Using an API honeypot, researchers found it took threat actors 29 seconds on average to discover a newly deployed API and less than a minute to exploit an unprotected one.

The report also showed that APIs had overtaken web applications as targets of attacks.

This week, Wallarm illustrated how AI is fueling the rapid increase in API attacks. About 57% of AI-powered APIs were externally accessible and 89% relied on insecure mechanisms, the authors of the latest report wrote. Only 11% had robust security measures in place, which means most endpoints were vulnerable to attack. The researchers also found significant vulnerabilities in tools like PaddlePaddle (an R&D deep learning platform from China) and MLflow (a platform for managing the machine learning workflow), which they wrote “underpin” enterprise AI deployments.
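For illustration only – this sketch is not from the Wallarm report – the gap between an exposed, unauthenticated AI inference endpoint and one gated behind even a basic API-key check can be shown in a few lines of Python. The FastAPI routes and the require_api_key helper below are hypothetical examples.

    # Illustrative only -- not from the Wallarm report. A minimal FastAPI
    # inference service: one route is exposed with no authentication (the
    # pattern the report associates with most AI-powered endpoints), the
    # other requires an API key. Route paths and the key check are hypothetical.
    from fastapi import Depends, FastAPI, Header, HTTPException

    app = FastAPI()

    @app.post("/v1/predict-open")
    async def predict_open(payload: dict):
        # Reachable by anyone who discovers the endpoint -- no auth check at all.
        return {"result": "model output would go here"}

    def require_api_key(x_api_key: str = Header(default="")):
        # In practice the expected key would come from a secrets manager.
        if x_api_key != "expected-key":
            raise HTTPException(status_code=401, detail="invalid API key")

    @app.post("/v1/predict", dependencies=[Depends(require_api_key)])
    async def predict(payload: dict):
        # Same functionality, but unauthenticated requests are rejected with a 401.
        return {"result": "model output would go here"}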

“AI didn’t just amplify traditional risks – it created new ones,” the researchers wrote. “This convergence of AI and API vulnerabilities underscores one of the central findings of this report: AI security is API security.”

APIs a Critical Cybersecurity Risk

That said, while emerging AI technology is fueling the rapid rise in API vulnerabilities, APIs – working as the connective tissue between disparate software – already were a significant security risk, the report found. Analyzing CISA’s Known Exploited Vulnerabilities (KEV) catalog, Wallarm found that more than half of the exploits in 2024 were related to APIs, a sharp increase from 20% the year before.

“This surge reflects the central role APIs now play in everything from modern SaaS platforms to legacy web systems,” they wrote. “Among these, 33.5% targeted modern APIs like RESTful and GraphQL, while 18.9% involved legacy APIs, including AJAX backends and URL parameter-based systems.”

Security is Key

Wallarm’s Novikov said the company isn’t advocating that generative AI, or the APIs that come with it, somehow be blocked. APIs are a critical part of today’s IT landscape, and even more so with generative AI.

“Security teams should be looking at how to deploy these new services securely,” he said. “Addressing the risks of AI itself, like training data poisoning or exposing sensitive data in responses, is one aspect of AI security. The other is securing the APIs that support generative AI.”
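As a rough sketch of the risk Novikov cites around exposing sensitive data in responses – the patterns and the redact_response helper below are illustrative assumptions, not Wallarm guidance – an output filter that strips obvious secrets before an AI-backed API returns generated text might look like this in Python:

    # Illustrative only -- not Wallarm guidance. A simple output filter that
    # redacts obvious secrets and email addresses from generated text before an
    # AI-backed API returns it. The patterns and helper name are hypothetical.
    import re

    SENSITIVE_PATTERNS = [
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),        # email addresses
        re.compile(r"(?i)bearer\s+[a-z0-9._-]{20,}"),  # bearer-style tokens
        re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access key IDs
    ]

    def redact_response(text: str) -> str:
        # Replace anything matching a sensitive pattern before sending the response.
        for pattern in SENSITIVE_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    print(redact_response("Contact admin@example.com, key AKIAABCDEFGHIJKLMNOP"))
    # -> Contact [REDACTED], key [REDACTED]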

Turning to MSSPs that have experience with generative AI is a way to quickly ramp up security capabilities “and allow security to be less of the department of ‘no’ and more the department of ‘how,’” he added.

This has to happen to protect businesses, according to the Wallarm researchers.

“The findings make one thing clear: Enterprises that fail to secure their APIs risk not only technical vulnerabilities, but also reputational and operational crises,” they wrote.
