AI Agents will Accelerate Account Takeovers and Social Engineering Attacks

Published March 20, 2025

By 2027, AI agents will reduce the time required to exploit account exposures by 50%, according to Gartner, Inc.

“Account takeover (ATO) remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered through various means, including data breaches, phishing, social engineering and malware,” stated Jeremy D’Hoinne, VP Analyst at Gartner. “Attackers then leverage bots to automate a barrage of login attempts across multiple services in the hope that the credentials have been reused on different platforms.”
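The credential reuse D’Hoinne describes is why many services now screen passwords against known breach corpora at sign-up or password change. As a minimal sketch of that defence, the TypeScript below queries the Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity so that only the first five characters of the password’s SHA-1 hash ever leave the client; the function names and the rejection snippet are illustrative, not part of any Gartner guidance.

```typescript
// Minimal sketch: screen a candidate password against the Have I Been
// Pwned "Pwned Passwords" corpus via its k-anonymity range API, so the
// full password (and even its full hash) never leaves the client.
// Assumes a runtime with fetch and Web Crypto (modern browser, Node 18+).

async function sha1Hex(input: string): Promise<string> {
  const digest = await crypto.subtle.digest(
    "SHA-1",
    new TextEncoder().encode(input),
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("")
    .toUpperCase();
}

// Returns how many times the password appears in known breach dumps
// (0 means it was not found in the corpus).
async function breachCount(password: string): Promise<number> {
  const hash = await sha1Hex(password);
  const prefix = hash.slice(0, 5); // only these 5 characters are sent
  const suffix = hash.slice(5);
  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  if (!res.ok) throw new Error(`HIBP range query failed: ${res.status}`);
  for (const line of (await res.text()).split("\n")) {
    const [candidateSuffix, count] = line.trim().split(":");
    if (candidateSuffix === suffix) return parseInt(count, 10);
  }
  return 0;
}

// Usage at sign-up or password change: warn on or reject breached passwords.
breachCount("P@ssw0rd").then((n) => {
  if (n > 0) console.log(`Rejected: password seen ${n} times in breach data`);
});
```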

AI agents will let attackers automate more steps of ATO, from social engineering based on deepfake voices to end-to-end automation of user credential abuse.

As a result, vendors will introduce products across web, app, API and voice channels to detect, monitor and classify interactions involving AI agents.

“In the face of this evolving threat, security leaders should expedite the transition towards passwordless, phishing-resistant MFA,” stated Akif Khan, VP Analyst at Gartner. “For customer use cases where users have a choice of authentication options, educate and incentivise them to migrate from passwords to multi-device passkeys where appropriate.”
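Multi-device passkeys are built on the WebAuthn standard, which browsers expose through navigator.credentials. The sketch below shows roughly what client-side passkey registration looks like; in a real deployment the challenge and user identifier are issued and verified by the server, and the relying-party ID “example.com”, the account details, and the helper function are placeholder assumptions, not a production flow.

```typescript
// Minimal sketch of client-side passkey (WebAuthn) registration in a
// browser. The challenge and user id must come from the server; the
// relying-party id "example.com" and account details are placeholders.

async function registerPasskey(
  challenge: ArrayBuffer, // random bytes issued and later verified server-side
  userId: ArrayBuffer,    // stable, opaque per-account identifier
): Promise<PublicKeyCredential> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "example.com", name: "Example Corp" },
      user: { id: userId, name: "jane@example.com", displayName: "Jane" },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  });
  // The attestation response is then sent back to the server, which
  // verifies it and stores the public key for future sign-ins.
  return credential as PublicKeyCredential;
}
```

Because the private key never leaves the user’s authenticator and is bound to the relying-party ID, credentials of this kind cannot be phished or replayed on a look-alike domain, which is what makes passkeys phishing-resistant.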

Defending Against the Rise and Expansion of Social Engineering Attacks
Alongside ATO, technology-enabled social engineering will also pose a significant threat to corporate cybersecurity. Gartner predicts that by 2028, 40% of social engineering attacks will target executives as well as the broader workforce. Attackers are increasingly combining social engineering tactics with counterfeit reality techniques, such as deepfake audio and video, to deceive employees during calls.

Although only a few high-profile cases have been reported, these incidents have highlighted the credibility of the threat and led to substantial financial losses for victim organisations. Deepfake detection is still in its early stages, particularly for real-time person-to-person voice and video communications across multiple platforms.

“Organisations will need to stay abreast of the market and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques,” stated Manuel Acosta, Sr. Director Analyst at Gartner. “Educating employees about the evolving threat landscape through training specific to social engineering with deepfakes is a key step.”
