As AI tools rapidly multiply across industries, a new kind of rivalry is taking shape — not between companies and humans, but between AI systems themselves. Businesses are now deploying AI not just to optimise operations, but to outsmart competing AI models. From algorithmic trading bots clashing in financial markets to recommendation engines battling for consumer attention, this digital duel is quietly transforming competitive strategy.
by Kasun Illankoon, Editor-in-Chief at Tech Revolt

In an era marked by real-time data and algorithmic agility, AI has become more than just a tool for automation. It is now a competitive weapon. Today, companies are actively pitting one AI system against another — with each machine learning model designed not merely to function, but to outperform the rival systems that stand in its way. From advertising platforms leveraging AI to dominate attention spans, to cybersecurity firms building AI that anticipates adversarial AI-led threats, this arms race is not only accelerating but becoming more autonomous and less visible to human observers.
Cybersecurity: Defensive AI vs Offensive AI

Few sectors illustrate this AI arms race more vividly than cybersecurity. According to Samer Diya, Vice President for Forcepoint META, “We’re using AI to safeguard AI. Our Forcepoint Data Security Cloud is driven by AI Mesh technology — a decentralised network of fine-tuned small language models designed to assess risky data in real-time.” These systems don’t just detect threats. They predict, contextualise, and adapt to adversarial input, often generated by AI itself.
As Diya points out, today’s threats are not static. They are fast, intelligent, and constantly evolving. Forcepoint’s behavioural analysis approach moves away from traditional, rule-based security systems and embraces an adaptive model that stays ahead of threats. Yet, the reliance on AI brings its own set of challenges. “Cyber threats are becoming so sophisticated that AI-on-AI confrontations are likely to operate at a speed and complexity beyond what humans can manage alone,” Diya adds. “But that doesn’t mean humans should be removed from the equation. In fact, the opposite is true.”
Ivan Milenkovic, VP of Cyber Risk Technology at Qualys, expands on this: “AI-on-AI confrontations are indeed occurring in critical areas such as intrusion detection and vulnerability management. As defensive AI systems become more advanced, threat actors increasingly leverage AI-driven offensive techniques to bypass or undermine these defences.”

Qualys addresses this evolving threat landscape through a combination of predictive threat intelligence and adversarial training. Their TotalAI module focuses on protecting LLMs from attacks such as prompt injection and model theft, mapping threats to the OWASP Top 10 for LLM Applications. In addition, Milenkovic notes, “The key lies in proactive threat intelligence — shifting from reactive to predictive approaches, leveraging massive datasets to detect patterns before attackers strike.”
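To make the idea concrete, here is a rough sketch of what a heuristic prompt-injection screen might look like. This is a generic illustration, not Qualys’s implementation — the patterns and thresholds are invented for the example, and production scanners rely on trained classifiers and far richer signals:

```python
import re

# Hypothetical phrases commonly associated with prompt-injection attempts
# (illustrative only; a real scanner uses many more signals than keywords).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* (rules|policies)",
]

def injection_risk_score(user_input: str) -> float:
    """Return a crude 0..1 risk score based on how many patterns match."""
    text = user_input.lower()
    hits = sum(1 for p in INJECTION_PATTERNS if re.search(p, text))
    return min(1.0, hits / len(INJECTION_PATTERNS) * 2)

def screen_prompt(user_input: str, threshold: float = 0.4) -> bool:
    """True if the prompt should be blocked or escalated for human review."""
    return injection_risk_score(user_input) >= threshold
```

Even this toy version captures the defensive posture described above: the model’s input is scored and gated before it ever reaches the LLM, rather than trusting the model to resist manipulation on its own.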
These developments highlight a grim reality: in cybersecurity, the attacker and defender are increasingly both machines. As platforms become more autonomous, the line between offensive and defensive AI grows blurrier — and far more dangerous.
Marketing: AdTech’s AI Faceoff
The battlefield isn’t limited to network perimeters. In the advertising world, AI systems are clashing across platforms to win visibility and ROI. Inna Weiner, AVP Product at AppsFlyer, notes, “The ‘AI vs AI’ battle is already happening. Fraudsters are using advanced AI tools to create fake clicks, fake installs, and even full transaction flows designed to confuse attribution models and siphon ad budgets.”

AppsFlyer’s Protect360 tackles this with a multi-layered, AI-driven fraud detection system that analyses behavioural signals and contextual anomalies. But the stakes go beyond fraud. In a world of machine-run auctions and AI-powered media buying, platforms like AppsFlyer are arming their clients with predictive attribution models, helping marketers place smarter, more strategic bids in an ecosystem teeming with competing algorithms.
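One behavioural signal widely used in mobile ad-fraud detection is click-to-install time (CTIT): implausibly short gaps between click and install can indicate click injection. The sketch below — a generic illustration, not Protect360’s actual logic — flags CTIT values that deviate sharply from an organic baseline:

```python
from statistics import mean, stdev

def anomaly_scores(ctit_seconds, baseline):
    """Score each click-to-install time by its z-score against an
    organic baseline. Large deviations suggest non-human behaviour.
    Illustrative only — real systems combine many such signals."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(t, abs(t - mu) / sigma) for t in ctit_seconds]

def flag_suspicious(ctit_seconds, baseline, z_threshold=3.0):
    """Return the CTIT values whose deviation exceeds the threshold."""
    return [t for t, z in anomaly_scores(ctit_seconds, baseline)
            if z >= z_threshold]
```

In practice a single signal like this is easy for adversarial AI to mimic, which is why the multi-layered approach described above matters: each layer raises the cost of generating convincing fake behaviour.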
“Transparency and accuracy are built into our DNA,” Weiner adds. “Our Core AI acts as a data refinery, cleansing and validating billions of signals. Every recommendation is traceable. There are no blind spots.” That level of assurance is critical in a competitive ecosystem where every marketing dollar is contested — not by humans, but by models locked in constant optimisation loops.
Enterprise Platforms: Workflow Optimisation and Algorithmic Interoperability

The race isn’t just external — it’s internal too. Within large organisations, different AI models deployed across departments often work at cross purposes. Saran B. Paramasivam, Regional Director MEA at Zoho, describes the issue succinctly: “When AI applications operate in isolation across departments like sales, marketing, and customer support, data silos and inconsistent context often lead to conflicting actions and inefficiencies.”
Zoho addresses this by creating a context-sharing infrastructure across its platform. AI assistants draw from shared data sources to deliver coordinated, consistent responses. This shared-context model reduces friction and keeps departmental AI actions aligned, a strategy especially vital in fast-transforming markets like the Middle East. “By allowing businesses to choose between SLMs, MLMs, or LLMs based on specific use cases, we ensure flexibility while maintaining control and compliance,” says Paramasivam.
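The model-selection idea Paramasivam describes can be pictured as a simple routing policy: pick the smallest model tier that satisfies the task, subject to compliance constraints. The sketch below is a hypothetical illustration — the tier names and rules are invented for the example, not Zoho’s catalogue:

```python
from dataclasses import dataclass

@dataclass
class Task:
    complexity: str        # "low", "medium", or "high"
    data_residency: bool   # True if data must stay in-region

# Hypothetical model tiers, smallest to largest (illustrative names).
MODEL_TIERS = {
    "low": "slm-onprem",       # small model, runs on-premises
    "medium": "mlm-regional",  # mid-size model in a regional cloud
    "high": "llm-global",      # frontier model, global endpoint
}

def route(task: Task) -> str:
    """Choose the smallest adequate tier; residency rules cap the
    choice at the regional tier, since the global endpoint would
    move data out of region."""
    if task.data_residency and task.complexity == "high":
        return MODEL_TIERS["medium"]
    return MODEL_TIERS[task.complexity]
```

The design choice worth noting is that compliance acts as a hard constraint on the router, not a preference — which is precisely the kind of modularity the GCC’s fast-moving regulatory landscape demands.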
Antonio Rizzi, Area VP of Solution Consulting at ServiceNow, sees a similar trend: “Conflicts among AI systems within enterprise environments typically emerge around knowledge management strategies. There’s often tension between centralised repositories and workflow-integrated knowledge that enhances dynamic processes.”

ServiceNow’s solution is an AI Control Tower — a unified command centre that governs AI agents, enforces compliance, and facilitates human oversight. Rizzi adds, “Over the next two years, we expect AI-to-AI interactions to evolve into tightly coordinated multi-agent ecosystems. It will require robust governance tools that align AI activity with business strategy.”
Financial Services: Defensive AI Meets Customer AI

Financial institutions are rapidly arming themselves for the AI era. Chris Shayan, Head of AI at Backbase, offers a front-row view: “Challenger banks and fintechs are setting a high bar by leveraging AI to anticipate customer needs in real time… but this progress brings heightened exposure to industrialised scams, synthetic identities, and mule activity.”
Backbase uses an Agentic AI Automation Platform to monitor customer journeys and operational workflows, embedding fraud detection like behavioural biometrics within each experience. The company’s Intelligence Fabric allows banks to integrate best-of-breed models seamlessly, whether for personalisation, security, or both.
“It’s not just about having powerful AI; it’s about having the right AI,” Shayan explains. “We don’t just hand over software. We embed our AI experts directly with clients. It ensures our clients aren’t just buying AI — they’re building sustainable AI capabilities within their own organisations.”

Sid Bhatia, Area VP & GM for Dataiku in the Middle East, Turkey & Africa, echoes the sentiment. “AI governance is mission-critical. As enterprise AI interacts more frequently with other AI systems, sometimes competitively, governance should ensure accountability, traceability, and human-in-the-loop control.”
Dataiku’s platform supports these principles through features like version control, role-based access, and robust audit trails. It’s not just about building smarter models; it’s about embedding them into a governance-first architecture that remains resilient under pressure.
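To see why audit trails matter in an AI-vs-AI setting, consider a minimal tamper-evident log in which each entry hashes its predecessor, so any retroactive edit to a recorded decision is detectable. This is a generic sketch of the audit-trail concept, not Dataiku’s implementation:

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log of AI decisions. Each entry stores a hash of
    its own body plus the previous entry's hash, so tampering with
    any recorded decision breaks the chain. Illustrative sketch only."""

    def __init__(self):
        self.entries = []

    def record(self, actor, action, payload):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action,
                "payload": payload, "prev": prev, "ts": time.time()}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute every hash; return False if any entry was altered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

When rival systems are optimising against each other in milliseconds, a trail like this is what lets a human auditor reconstruct, after the fact, which model did what and why.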
AI Governance: The Hidden Infrastructure of Competition

As AI-to-AI interactions scale, the conversation inevitably returns to control. Thys Bruwer, Consulting Lead for Data & Analytics at DXC Technology, outlines the stakes: “Even the most autonomous agentic systems must keep humans in the loop when decisions carry legal, ethical, or safety implications.”
DXC’s approach includes setting up centralised AI Offices and embedding compliance into every stage of deployment. From co-creating intelligent agents with enterprise clients to partnering with Microsoft on Security Copilot, their strategy underscores a foundational principle: AI competitiveness cannot come at the cost of transparency.
“Our AI is always learning and recalibrating — not just to data, but to the behaviour of other intelligent systems operating in the same ecosystem,” Bruwer adds. This continuous adaptation is the cornerstone of staying ahead in an environment where rival AI may change strategy in milliseconds.
This also extends to the notion of interoperability. Zoho’s Paramasivam stresses the importance of giving businesses the freedom to choose between different types of models — from small to large language models — based on context and compliance. This modularity is not just a luxury but a necessity in regions like the GCC, where regulatory landscapes evolve quickly.
The Next Phase of AI Rivalry
From cybersecurity perimeters to customer touchpoints, AI is now clashing with AI in ways that are often invisible but deeply consequential. The corporate arms race is no longer metaphorical. It is built on layers of machine learning models, each trying to outperform, outbid, outsmart, or outmanoeuvre another.
And this race has only just begun. In the coming years, as interoperability standards evolve and AI Control Towers become the norm, companies will need to navigate not just how their AI systems perform, but how they coexist — or conflict — with others. Those that build their strategies around transparency, ethical design, and cross-domain coordination will not only survive the arms race, but lead it.
The future may belong to the machines. But the rules, for now, are still ours to write.