Tech Revolt

Exclusive: AI Agents to Cause 25% of Enterprise Breaches by 2028

  • Published June 26, 2025

As organisations increasingly invest in tailored generative AI applications for enterprise automation, AI agents are emerging as pivotal components in digital transformation strategies. These agents, whether operating autonomously, semi-autonomously or within multi-agent systems, harness artificial intelligence to perceive, make decisions and carry out actions to achieve a variety of goals.

By Avivah Litan, Distinguished VP Analyst at Gartner

While AI agents promise notable advancements, they also introduce fresh risks alongside existing threats posed by AI models and applications. Gartner predicts that by 2028, 25% of enterprise breaches will be linked to AI agent abuse, stemming from both external attackers and malicious insiders.

The exponential growth of the currently invisible attack surface created by AI agents necessitates the development of advanced security and risk management strategies. This heightened exposure is expected to attract bad actors from outside the organisation as well as internal threats, prompting enterprises to act swiftly in implementing robust controls to mitigate potential risks.

To address these challenges effectively, organisations must prioritise identity governance and administration encompassing both human and non-human identities. This involves isolating sensitive content and data from AI processes and entities that should not have access. Additionally, enterprises should explore emerging solutions from specialist vendors offering runtime data protection — providing contextual, dynamic access management and data classification while enforcing least-privilege access policies. These approaches should complement existing identity and access management, as well as information governance frameworks, to safeguard enterprise data and system access.
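The least-privilege and data-classification controls described above can be sketched in a few lines of Python. This is a minimal illustration, not a vendor implementation: the `AgentIdentity` and `Resource` types, the scope strings, and the classification labels are all hypothetical, standing in for whatever an enterprise's identity governance and runtime data protection tooling actually uses.

```python
# Hedged sketch: deny-by-default access check for a non-human (agent) identity.
# All names and labels here are illustrative, not a real product API.
from dataclasses import dataclass

# Ordered data-classification labels (hypothetical scheme).
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    allowed_scopes: frozenset  # scopes granted under least privilege

@dataclass(frozen=True)
class Resource:
    name: str
    scope: str           # scope required to access this resource
    classification: str  # data-classification label

def check_access(agent: AgentIdentity, resource: Resource,
                 max_classification: str = "internal") -> bool:
    """Deny by default: the agent must hold the exact scope, and the
    resource's classification must not exceed what agents may see."""
    if resource.scope not in agent.allowed_scopes:
        return False
    return SENSITIVITY[resource.classification] <= SENSITIVITY[max_classification]

agent = AgentIdentity("invoice-bot", frozenset({"billing:read"}))
print(check_access(agent, Resource("invoices", "billing:read", "internal")))   # True
print(check_access(agent, Resource("salaries", "hr:read", "confidential")))    # False
```

The point of the deny-by-default structure is that sensitive content stays isolated unless both conditions pass, mirroring the article's advice to keep AI processes away from data they should not reach.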

As AI agent activity intensifies, organisations failing to secure these operations will become increasingly vulnerable to hackers and malicious insiders exploiting the expanding, unprotected threat surface.

To prepare for the growing presence of AI agents, enterprises should invest in educating staff on the specific risks associated with these technologies, which are becoming ever more embedded in enterprise products. It is advisable to adopt either homegrown or third-party tools to manage AI agent risks, meeting three key requirements:

  • Provide all relevant personnel with a comprehensive overview and mapping of agent activities, including processes, connections, data exposure, information flows and outputs, to identify anomalies.

  • Detect and flag irregular AI agent behaviours and those that breach pre-established enterprise policies.

  • Autonomously remediate flagged anomalies and attacks in real time, as human oversight cannot scale to meet the demands of AI operations. Human teams should, however, review exceptional cases to determine appropriate action.
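The three requirements above can be illustrated with a short sketch: an activity map per agent, a check against pre-established policy, and immediate quarantine of violators for later human review. The policy fields, thresholds, and function names below are assumptions chosen for illustration only.

```python
# Hedged sketch of the three requirements: map agent activity (1),
# flag policy breaches (2), and auto-quarantine in real time (3).
# Policy values and names are hypothetical, not from any real tool.
from collections import defaultdict

POLICY = {
    "allowed_destinations": {"internal-api", "audit-log"},
    "max_records_per_action": 1000,
}

activity_map = defaultdict(list)   # requirement 1: per-agent activity mapping
quarantined = set()                # requirement 3: auto-remediated agents

def handle_action(agent_id: str, destination: str, record_count: int) -> list:
    """Record the action, return any policy violations, and quarantine
    the agent immediately; humans review the exception afterwards."""
    activity_map[agent_id].append((destination, record_count))
    violations = []
    if destination not in POLICY["allowed_destinations"]:
        violations.append("unapproved destination")        # requirement 2
    if record_count > POLICY["max_records_per_action"]:
        violations.append("excessive data volume")
    if violations:
        quarantined.add(agent_id)                          # requirement 3
    return violations
```

Quarantining happens in code rather than waiting on a human precisely because, as the article notes, human oversight cannot scale to the volume of agent operations; the human role shifts to reviewing the exceptions.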

Moreover, enterprises must extend end-user behaviour monitoring and analysis capabilities to detect and alert on unusual activity originating from AI agents, including unauthorised collaboration with external entities.
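One simple form such behaviour analysis can take is a baseline comparison: alert when an agent's activity level deviates sharply from its own history. The sketch below uses a basic z-score over hourly event counts; the threshold and the choice of metric are assumptions, and real behaviour-monitoring tools use far richer signals.

```python
# Illustrative baseline check: flag an agent whose current hourly event
# count sits far above its historical mean. Threshold is an assumption.
from statistics import mean, stdev

def is_anomalous(history: list, current: float, threshold: float = 3.0) -> bool:
    """history: past hourly event counts for one agent.
    Returns True if `current` lies more than `threshold` standard
    deviations above the historical mean."""
    if len(history) < 2:
        return False          # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # any deviation from a flat baseline
    return (current - mu) / sigma > threshold
```

A spike such as an agent suddenly pushing sixty events an hour against a baseline near ten would trip this check, the kind of signal that might accompany unauthorised collaboration with an external entity.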

By taking these proactive measures, organisations can effectively manage the risks posed by AI agents, ensuring the resilience and security of their digital transformation initiatives.
