Exclusive: Generative AI Drives New Cyber Threats and Risks

Published June 3, 2025

Generative AI, particularly Large Language Models (LLMs), is driving a transformation in cybersecurity. Adversaries are drawn to GenAI as it lowers the barriers to entry for creating deceptive content. Threat actors use this to enhance the effectiveness of intrusion techniques such as social engineering and detection evasion.

by Bart Lenaerts, Senior Product Marketing Manager, Infoblox

This article highlights common examples of malicious GenAI usage, including deepfakes, chatbot automation and code obfuscation. More importantly, it makes the case for early warnings of threat activity and the use of predictive threat intelligence capable of disrupting actors before they execute their attacks.

Example 1: Deepfake scams using voice cloning

At the end of 2024, the FBI warned that criminals were employing generative AI to commit fraud on a larger scale, making their schemes more convincing. GenAI tools like voice cloning reduce the time and effort required to deceive targets with seemingly trustworthy audio messages. These tools can even correct human flaws such as foreign accents or vocabulary that might otherwise raise suspicion. While creating synthetic content is not illegal in itself, it can facilitate crimes like fraud and extortion. Criminals are using AI-generated text, images, audio and videos to enhance social engineering, phishing, and financial fraud operations.

Particularly concerning is the ease with which cybercriminals can access these tools and the absence of sufficient security safeguards. A recent Consumer Reports investigation into six leading publicly available AI voice cloning tools discovered that five had easily bypassable safeguards, making it simple to clone a person’s voice without their consent.

Voice cloning technology works by analysing an audio sample of a person speaking and extrapolating it into a synthetic audio file. Without effective safeguards, anyone with an account can upload publicly available audio — for example, from a TikTok or YouTube video — and have the service imitate that voice.
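
To illustrate how low that barrier has become, the sketch below clones a voice in a handful of lines using the open-source Coqui TTS package and its publicly available XTTS v2 model. This is a minimal illustration, not one of the commercial services Consumer Reports tested; the file names are placeholders, and it should only ever be run on audio you have explicit consent to use.

# A minimal sketch of open-source voice cloning, assuming the Coqui TTS
# package (pip install TTS) and its public XTTS v2 model; file names are
# illustrative. Only clone voices you have explicit consent to use.
from TTS.api import TTS

# Load a multilingual, multi-speaker model that supports cloning from a
# short reference clip.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio are enough to condition the output on
# the reference speaker's voice.
tts.tts_to_file(
    text="This is a synthetic voice sample generated for awareness training.",
    speaker_wav="my_own_reference_clip.wav",  # illustrative file name
    language="en",
    file_path="cloned_sample.wav",
)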

Voice cloning has been used by actors in various scenarios, from large-scale deepfake cryptocurrency scams to imitating voices during individual phone calls. A recent example that attracted media attention is the so-called “grandparent” scam, in which a family-emergency narrative is used to persuade victims to transfer funds.

Example 2: AI-powered chat boxes

Threat actors often select their victims carefully by gathering personal insights and setting them up for scams. Initial research is used to craft smishing messages that trigger a conversation. Personalised notes like “I read your last social post and wanted to connect” or “Can we chat for a moment?” are among those our intel team has observed (step 1 in picture 2). Some messages may even include AI-modified images, but the aim remains the same: to draw the victim into a conversation on Telegram or another actor-controlled platform, away from corporate security measures.


Once the victim is engaged on a new medium, the actor employs a range of tactics to keep the conversation going — from invitations to local golf tournaments to Instagram follows or AI-generated images. These bot-driven conversations can span weeks and include additional steps, such as requests for a thumbs-up on YouTube or a social media repost. The objective is to assess how the victim responds. Over time, the actor establishes goodwill and creates a fake account, gradually increasing the apparent funds in it with every positive reaction. Eventually, the actor requests a small investment, promising returns of more than 25 per cent. When the victim seeks to collect their supposed profits (step 3 in picture 2), the actor demands access to their cryptocurrency account, exploiting the established trust before stealing the funds.

Though these conversations are time-intensive, they yield substantial rewards for the scammer and can result in tens of thousands of dollars in illicit gains. By using AI-powered chat boxes, actors have found an effective way to automate these interactions and scale their operations.

Infoblox Threat Intelligence actively tracks these scams to improve intelligence production. Common traits found in malicious chat boxes include:

  • AI-generated grammar errors, such as unnecessary spaces after full stops, or references to foreign languages

  • Use of fraud-related vocabulary

  • Forgetting details from earlier conversations

  • Mechanically repeating messages due to poorly trained AI chatbots (known as parroting)

  • Making illogical requests, like asking whether you’d like to withdraw funds at inappropriate moments

  • Sharing fake press releases posted on malicious websites

  • Initiating conversations with widely used phrases to lure victims

  • Promoting cryptocurrency types favoured within criminal communities

These identifiable patterns help threat researchers track emerging campaigns, link them to actors and uncover their infrastructure.
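
To make these traits concrete, here is a minimal, hypothetical Python sketch that scores a chat transcript against a few of them. The patterns, weights and example messages are illustrative assumptions made for this article, not Infoblox's actual detection logic.

# Hypothetical sketch: score a chat transcript against a few of the
# chatbot-scam traits listed above. Patterns and weights are illustrative
# assumptions, not Infoblox's detection logic.
import re

TRAIT_PATTERNS = {
    # Odd spacing around full stops (space before, or doubled space after)
    "odd_spacing": (re.compile(r"\s[.!?]|[.!?]\s{2,}"), 1),
    # Widely used opening lures
    "opening_lure": (re.compile(r"(?i)\b(can we chat|wanted to connect)\b"), 2),
    # Fraud-related vocabulary
    "fraud_vocabulary": (re.compile(r"(?i)\b(guaranteed returns?|withdraw(al)?|wallet)\b"), 2),
    # Cryptocurrency types favoured in criminal communities
    "crypto_mention": (re.compile(r"(?i)\b(usdt|tether|bitcoin|crypto)\b"), 1),
}

def score_transcript(messages: list[str]) -> tuple[int, list[str]]:
    """Return a crude risk score and the list of traits that fired."""
    score, hits = 0, []
    for trait, (pattern, weight) in TRAIT_PATTERNS.items():
        if any(pattern.search(m) for m in messages):
            score += weight
            hits.append(trait)
    # Parroting: the same message repeated verbatim.
    if len(messages) != len(set(messages)):
        score += 2
        hits.append("parroting")
    return score, hits

if __name__ == "__main__":
    chat = [
        "I read your last social post and wanted to connect .",
        "Guaranteed returns of 25% , would you like to withdraw?",
        "Guaranteed returns of 25% , would you like to withdraw?",
    ]
    print(score_transcript(chat))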

Example 3: Code obfuscation and evasion

Threat actors are increasingly using GenAI not just for creating human-readable content, but also for obfuscating their malicious code. Several media outlets have reported on how GenAI assists actors in concealing malware. Earlier this year, Infosecurity Magazine detailed how researchers at HP Wolf uncovered social engineering campaigns distributing VIP Keylogger and 0bj3ctivityStealer malware, both of which involved malicious code embedded within image files. To improve the efficiency of their campaigns, actors are repurposing and combining existing malware via GenAI to bypass detection. This also allows them to launch attacks faster and reduces the skill level required to build infection chains.
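
The campaigns HP Wolf describes rely on genuine steganography, but a much cruder trick, appending a payload after a valid image's end-of-file marker, shows why image carriers are attractive. The sketch below, with illustrative file names, flags such trailing bytes; it is a toy check rather than a substitute for layered analysis.

# Minimal sketch: flag bytes appended after an image's end-of-file marker,
# one simple way extra payloads get smuggled inside otherwise valid images.
# Real campaigns can use far subtler steganography, so no hit proves nothing.
from pathlib import Path

JPEG_EOI = b"\xff\xd9"          # JPEG end-of-image marker
PNG_IEND = b"IEND\xaeB`\x82"    # PNG IEND chunk type followed by its CRC

def trailing_bytes(path: str) -> int:
    """Return how many bytes follow the image's final marker (0 = none found)."""
    data = Path(path).read_bytes()
    for marker in (JPEG_EOI, PNG_IEND):
        idx = data.rfind(marker)
        if idx != -1:
            return len(data) - (idx + len(marker))
    return 0  # unknown format for this crude check

if __name__ == "__main__":
    extra = trailing_bytes("suspicious.jpg")  # illustrative file name
    if extra:
        print(f"{extra} unexpected bytes after the end-of-image marker")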

HP Wolf estimates an 11 per cent increase in evasion for email threats, while other security vendors such as Palo Alto Networks report that GenAI-rewritten malware flipped their own classifier’s verdicts to false negatives 88 per cent of the time. It’s clear that threat actors are making significant strides in AI-driven evasion techniques.

The case for modernising threat research

As AI-driven attacks create new detection and evasion challenges, defenders need to move beyond traditional tools such as sandboxing and post-incident forensic indicators to generate effective threat intelligence. One major opportunity lies in tracking pre-attack activities, rather than reacting to the final payload with delayed sandbox analysis.

Much like software development lifecycles, threat actors progress through multiple stages before launching attacks. They begin by creating or generating new variants of malicious code using GenAI, followed by setting up infrastructure such as email delivery systems or covert traffic distribution networks — often involving domain registrations or even the hijacking of legitimate domains.

Only then do attacks move into ‘production’, meaning the domains are weaponised and ready to deliver malicious payloads. It’s at this point that traditional security tools attempt to detect and stop the threat, typically at endpoints or network egress points within the customer environment. Due to GenAI’s ability to mimic legitimate activity and alter payloads dynamically, detection at this stage is increasingly unreliable.

The value of predictive intelligence based on DNS telemetry

To stay ahead of these rapidly evolving threats, organisations should leverage predictive intelligence based on DNS telemetry. DNS data plays a vital role in identifying malicious actors and their infrastructure before an attack occurs. Unlike payloads, which can be disguised or altered by GenAI, DNS data remains transparent and consistent across multiple stakeholders — including domain owners, registrars, servers, clients and destinations — and must be accurate to maintain connectivity. This inherent integrity makes DNS an invaluable source for threat research.

Moreover, DNS analytics offer another key advantage: malicious domains and DNS infrastructures are often registered and configured well before an attack is launched. By monitoring new domain registrations and DNS records, organisations can track the development of malicious infrastructure and gain early visibility into attack planning stages. This proactive approach enables threat identification before campaigns are activated.
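
As a simple illustration of this kind of early signal, the sketch below flags domains observed in DNS logs whose WHOIS creation date is very recent. The python-whois lookup and the 30-day threshold are assumptions chosen for readability, not a description of Infoblox's predictive models.

# Minimal sketch: flag recently registered domains seen in DNS logs.
# Assumes the python-whois package (pip install python-whois); the 30-day
# threshold and domain names are illustrative assumptions.
from datetime import datetime, timedelta

import whois  # python-whois

def is_newly_registered(domain: str, max_age_days: int = 30) -> bool:
    """Return True if the domain's WHOIS creation date is within max_age_days."""
    try:
        record = whois.whois(domain)
    except Exception:
        return False  # lookup failed; leave for deeper analysis
    created = record.creation_date
    if isinstance(created, list):          # some registrars return several dates
        created = min(created)
    if created is None:
        return False                       # no data; leave for deeper analysis
    if created.tzinfo is not None:         # normalise to naive UTC for comparison
        created = created.replace(tzinfo=None)
    return datetime.utcnow() - created < timedelta(days=max_age_days)

if __name__ == "__main__":
    for observed in ["example.com", "newly-seen-lure-domain.top"]:  # illustrative
        print(observed, "newly registered?", is_newly_registered(observed))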

Conclusion

The evolving intersection of AI and cybersecurity presents significant challenges. However, with the right strategies — including predictive intelligence derived from DNS telemetry — organisations can outpace GenAI-enabled threats and avoid becoming patient zero.
