Sat. Mar 7th, 2026

IT teams ill-equipped to stop rogue AI agents


Krishna Rajagopal, Akati Sekurity

Autonomous systems represent an attack surface that existing service models aren't designed to defend



AI agents are involved in 40% of insider cyber security threats, according to a report by managed security service provider Akati Sekurity.

Non-human identities currently outnumber humans 144 to one in the average business and constitute an attack surface IT teams, service providers and vendors are ill-equipped to defend, Akati CEO Krishna Rajagopal told website Channel Dive.

“[Partners] are focused on making sure that the LLMs are secure and doing an assessment, looking at the security of the MCP server. But there is this little worm – literally the agentic agent – that can [go] rogue, and if that goes rogue, most MSPs and MSSPs currently do not have an answer for,” Rajagopal said.

 

Akati’s insider angle puts an in-house spin on AI-based cyber security threats. Threat actors’ use of generative AI to run phishing and social engineering campaigns at scale is well known; Akati warned that cybercriminals will also exploit the agents already running inside a business.

“If you’ve got a GenAI implementation with GPUs running in the cloud, they want to piggyback on that and use it to run their own queries,” Rajagopal said.

A cyber espionage campaign that Anthropic disrupted last autumn could be a foretaste of supply chain attacks that use AI platforms as a Trojan horse. A state-affiliated group manipulated Claude Code, Anthropic’s AI coding agent, and used it to attempt breaches of roughly thirty organizations.

Rajagopal said the hackers were running a proof-of-concept operation.

“I think they were testing the water to see what they could potentially do and how big they can go, and [at what] scale and speed, should they pull out another SolarWinds-type of supply chain attack,” he said.

The 2020 SolarWinds breach devastated MSPs that used the platform for IT management and remote monitoring. As MSPs and MSSPs build agents for internal teams and customers, they need to be equally vigilant.

Existing service models have never accounted for non-human identities, nor have the software vendors that support security operations centers, Rajagopal cautioned. User behaviour analytics must evolve into agent behaviour analytics.
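The shift from user to agent behaviour analytics can be illustrated with a toy baseline-and-deviation model. This is a minimal sketch under stated assumptions, not any vendor's product: the class name `AgentBehaviourMonitor` and its methods are hypothetical, and real systems would score statistical deviations rather than exact set membership.

```python
from collections import defaultdict

class AgentBehaviourMonitor:
    """Toy agent-behaviour-analytics sketch (hypothetical design).

    Learns a baseline of actions per agent during an observation
    window, then flags anything outside that baseline - including
    any action by an agent it has never seen before.
    """
    def __init__(self) -> None:
        # Map each agent ID to the set of actions seen during baselining.
        self.baseline: dict[str, set[str]] = defaultdict(set)

    def observe_baseline(self, agent_id: str, action: str) -> None:
        # Record an action as part of the agent's normal behaviour.
        self.baseline[agent_id].add(action)

    def is_anomalous(self, agent_id: str, action: str) -> bool:
        # Unknown agents have an empty baseline, so every action
        # they take is flagged; known agents are flagged only for
        # actions outside their learned set.
        return action not in self.baseline[agent_id]
```

In this sketch, a help-desk agent baselined on ticket reads would be flagged the moment it attempts a database export, which is the kind of rogue-agent signal per-user analytics would miss.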

“MSSPs have always been focusing on protecting and supporting the organization from employees,” Rajagopal said. “Our pricing model has been per-employee, per-device; it’s always been focused on a human. But with this explosion of non-human, my message is that for MSPs and MSSPs, right now is the time where you have to equip yourself on two fronts.” 

Akati outlined a 12-month roadmap – which the company itself undertook – for mitigating rogue agent threats. 

In the first 30 days, partners should build a full inventory of every non-human identity in their organisation, audit high-privilege agents and set up prompt blocklists. Over the next 60 days, they should deploy a pipeline for logging agent decisions, develop incident response procedures for rogue agents and move agents to just-in-time access.

Rajagopal urged service providers to familiarize themselves with the MITRE ATLAS framework, which charts how future insider threats may stem from misplaced human trust in AI systems rather than human malice.

“I think this attack chain is going to blow up, and you’re going to see more of it in 2026,” Rajagopal said.

Cybersecurity Dive


