Generative AI Reshapes Cybersecurity in the UAE as Risks Accelerate

Alexandre Depret‑Bixio, Senior Vice President EMEA & APJ at Anomali, warns that as generative AI reshapes UAE businesses, rising AI‑driven threats require tighter governance, clearer visibility and sustained human oversight.

Generative AI has become one of the most transformative technologies of the decade, reshaping how companies operate and compete. In the UAE and broader Middle East, where digital transformation and AI adoption are priorities – with digital infrastructure initiatives and advanced cloud adoption driving rapid AI deployment – this shift is especially impactful. But the same qualities that make generative AI powerful also create unprecedented security challenges. What distinguishes this moment is not just AI’s capability, but its accessibility. Barriers to misuse have never been lower.

Cybersecurity leaders must now navigate a dual reality: generative AI is expanding the attack surface while becoming woven into critical systems. In the UAE, 55% of organizations reported cyberattacks over the past year and 93% experienced AI-related security incidents, underscoring how rapidly threats are evolving with AI adoption.

Managing that tension requires understanding how AI alters both offense and defense.

1. Generative AI is supercharging traditional attacks
AI eliminates traditional constraints on attackers, enabling highly personalized phishing, convincing deepfakes and AI-assisted malware. According to one industry survey, 96% of organizations in the UAE now deploy AI for threat detection, yet only 30% have mature security readiness and 87% report critical skills shortages – a readiness gap that attackers can exploit.

These capabilities don’t create new threat categories; they amplify existing ones. Social engineering becomes more persuasive, and compromise becomes easier to execute at scale. Organizations should assume that attacks once considered advanced will soon be routine.

2. AI-native attack vectors are emerging
Beyond enhancing traditional tactics, AI introduces new forms of exploitation such as prompt injection and model manipulation. In a prompt injection attack, for instance, instructions hidden in user-supplied content can override a model’s intended behavior, steering it to leak data or take unintended actions. These attacks target AI’s reasoning processes themselves – weaknesses that traditional cybersecurity frameworks were not designed to defend against.

The UAE’s cybersecurity landscape reflects such risks: over 223,800 digital assets are estimated to be exposed, many carrying long-unaddressed vulnerabilities – fertile ground for AI-powered attacks.

As organizations embed AI into operations, they must acknowledge these new failure modes and build governance and defenses tailored to this terrain.

3. AI systems can behave unpredictably
Not all AI risk stems from malicious activity. Generative systems can drift from expected behavior, misinterpret inputs, or reveal sensitive patterns. This unpredictability is compounded by regional regulatory environments where data sovereignty and privacy expectations – such as those emerging across the Gulf Cooperation Council – add compliance layers to AI governance.

As AI integrates into customer interactions, automation and internal workflows, unpredictability becomes a material operational risk requiring continuous validation, monitoring and governance.

4. AI expands the attack surface in invisible ways
Every model, dataset and automated agent becomes a potential entry point. AI systems often interact with sensitive data, hold elevated privileges and make decisions autonomously, yet they may not be governed with the same rigor as human identities.

This creates challenges:

  • Machine identity sprawl: AI agents act as autonomous entities with permissions that are difficult to track.
  • Opaque supply chains: Many AI models rely on third-party APIs, increasing indirect exposure.
  • Training data exposure: Proprietary or personal data used for model training can inadvertently expose exploitable patterns.

In the UAE and wider Middle East, where digital infrastructures are rapidly expanding, this invisible attack surface is significant – organizations face tens of thousands of attempted cyberattacks daily, from ransomware and phishing to malware that exploits both legacy systems and newly deployed AI components.

Securing AI requires applying core principles – zero trust, least privilege and continuous monitoring – to both human and non-human actors.

5. Human expertise remains essential
While AI will automate parts of security operations, it will not replace human judgment. Analysts will evolve from manual execution to supervising AI systems, validating decisions and ensuring alignment with risk tolerance. In the UAE, critical skills shortages in cybersecurity teams make human oversight even more vital as AI systems proliferate.

Responsible AI use requires transparency, accountability and leadership oversight.

The Path Forward
Generative AI will define the next era of cybersecurity. Leaders must balance innovation with disciplined governance. That means knowing where AI is used, monitoring its behavior, preparing for AI-specific incidents and training teams to understand both opportunities and risks.

Organizations that get this right will not only defend themselves more effectively but also help shape emerging standards of digital trust – especially in regions like the UAE and Middle East, where digital leadership and cybersecurity resilience are strategic national priorities.