AI‑Powered Security: The New Frontline of Cyber Defense

Artificial intelligence has become the defining force reshaping cybersecurity. As organisations across the Middle East and Africa accelerate digital transformation, expand cloud adoption, and integrate AI into business operations, the attack surface has grown dramatically. At the same time, adversaries are weaponising AI to automate reconnaissance, craft hyper‑realistic phishing campaigns, generate polymorphic malware, and exploit vulnerabilities at machine speed. The result is a rapidly evolving battlefield where defenders must match AI‑driven attacks with AI‑driven defence. Across the cybersecurity industry, experts agree that AI is no longer optional; it is foundational to modern security strategy. From reducing dwell time to securing AI models and data pipelines, organisations are rethinking their entire security architecture. This feature brings together insights from leading cybersecurity voices across the region, each offering a perspective on how AI is transforming detection, response, and resilience.

Ilyas Mohammed, COO at AmiViz

Ilyas Mohammed, COO at AmiViz, explains that AI‑driven detection and response is dramatically reducing dwell time by automating threat identification and prioritising alerts based on risk. AI models analyse behavioural patterns across networks, endpoints, and cloud environments, detecting stealthy threats far earlier than traditional tools. Automated playbooks integrated with SOAR and EDR platforms enable real‑time containment, allowing teams to isolate compromised assets and remediate incidents instantly. Mohammed warns that AI‑generated attacks increase the scale and sophistication of threats, enabling adversaries to craft convincing phishing emails, automate vulnerability discovery, and generate polymorphic malware. Deepfakes and synthetic identities heighten fraud risks, requiring layered defences that combine AI‑powered detection, strong identity controls, continuous monitoring, and employee awareness. He stresses that securing AI models and data pipelines demands security‑by‑design, strict access controls, encryption, data integrity checks, adversarial testing, and integration of AI systems into SIEM and SOC processes to prevent tampering and ensure resilience.
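The playbook pattern Mohammed describes can be sketched in a few lines. This is an illustrative toy, not any specific SOAR or EDR product's API: the `Alert` fields, the risk threshold, and the `isolate`/`notify` callbacks are all assumptions standing in for real platform integrations.

```python
# Minimal sketch of an automated containment playbook: contain high-risk
# alerts immediately, queue everything else for analyst review.
from dataclasses import dataclass, field


@dataclass
class Alert:
    asset_id: str
    risk_score: float              # 0.0 - 1.0, assigned by the detection model
    indicators: list = field(default_factory=list)


def run_playbook(alert: Alert, isolate, notify, risk_threshold: float = 0.8):
    """Isolate the asset when the model's risk score clears the threshold."""
    if alert.risk_score >= risk_threshold:
        isolate(alert.asset_id)    # e.g. an EDR network-quarantine call
        notify(f"Isolated {alert.asset_id} (score {alert.risk_score:.2f})")
        return "contained"
    notify(f"Queued {alert.asset_id} for analyst review")
    return "queued"
```

In a real deployment the threshold would be tuned per asset class, and the containment call would carry a rollback path so analysts can reverse a false positive.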

Morey Haber, Chief Security Advisor at BeyondTrust

Morey Haber, Chief Security Advisor at BeyondTrust, highlights how AI compresses dwell time by automating the correlation, triage, and threat‑hunting tasks that once required extensive human effort. AI engines model identity behaviour, endpoint activity, and network telemetry in real time, surfacing anomalies that indicate compromise, including privilege misuse and lateral movement. When confidence is high, AI can isolate accounts, revoke access, or quarantine systems before analysts intervene. Haber warns that AI‑generated attacks introduce unprecedented speed, accuracy, and targeted precision, overwhelming legacy defences. Threat actors automate phishing, deepfakes, vulnerability discovery, and malware customisation, exploiting trust and human behaviour. He argues that mitigation requires defenders to adopt AI as aggressively as attackers, combining identity‑centric security, least‑privilege enforcement, behavioural analytics, and rapid response automation. Securing AI models and pipelines, he adds, requires treating them as critical infrastructure with strong identity controls, secrets management, integrity checks, and protection against data poisoning and prompt injection.
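The behavioural‑analytics idea Haber describes, flagging an identity whose activity deviates sharply from its own baseline, can be illustrated with a simple z‑score. Production platforms model far richer telemetry than one metric; this is only a sketch, and the sample data is invented.

```python
# Toy behavioural baseline: score how far today's activity sits from an
# account's historical mean, measured in standard deviations.
import statistics


def anomaly_score(baseline: list, observed: float) -> float:
    """Standard deviations between the observed value and the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) / stdev if stdev else 0.0


def is_anomalous(baseline: list, observed: float, threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations from normal."""
    return anomaly_score(baseline, observed) > threshold


# e.g. daily privileged-command counts for one account over two weeks
baseline = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 3, 5, 4, 6]
```

A sudden jump to 40 privileged commands would score far above the threshold and trigger review, while ordinary day‑to‑day variation would not.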

Ram Narayanan, Country Manager, Check Point Software Technologies, Middle East

Ram Narayanan, Country Manager, Check Point Software Technologies, Middle East, says organisations are combining AI‑driven detection with exposure management to improve visibility across the entire attack surface. AI autonomously analyses behaviour across networks, cloud workloads, and user environments, detecting anomalies early while exposure management highlights critical weaknesses. Automated containment—such as isolating devices or suspending accounts—reduces dwell time and limits impact. Narayanan warns that AI‑generated attacks increase both speed and precision, enabling automated phishing, rapid vulnerability discovery, and coordinated ransomware campaigns. The risk lies not only in volume but in how quickly exposures are probed. He advocates a prevention‑first approach supported by continuous exposure management, AI‑driven threat intelligence, and automated containment. Securing AI environments, he adds, requires visibility across cloud and network layers, strict access controls, and continuous monitoring to prevent data leakage, model tampering, and unauthorised access.

Biju Unni, VP of Sales at Cloud Box Technologies

Biju Unni, VP of Sales at Cloud Box Technologies, notes that AI‑based threat detection is now deeply embedded in modern SIEM, SOC, and XDR platforms. Security teams can identify attacks early and act immediately by disabling compromised credentials, blocking malicious IPs, or isolating endpoints. He emphasises that today’s teams are trained to be proactive, using AI to anticipate threats rather than simply react. Unni warns that AI‑based attacks are becoming more convincing and harder to detect, with phishing, automated vulnerability discovery, and highly evasive malware increasing risk. Mitigation requires MFA, continuous user awareness training, zero‑trust principles, and real‑time threat intelligence. To secure AI models and training environments, he stresses the importance of encrypting data at rest and in transit, monitoring for dataset poisoning, adopting MLOps security practices, isolating training environments, and establishing governance frameworks to ensure accountability and compliance.
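One concrete way to implement the dataset‑monitoring control Unni mentions is to pin a cryptographic hash of every approved training file and verify it before each run. A hash catches post‑approval tampering, not data that was poisoned before it was approved, so it complements rather than replaces data validation. The sketch below assumes a simple path‑to‑hash manifest; real MLOps pipelines would store this in a signed artifact registry.

```python
# Verify training datasets against a manifest of known-good SHA-256 hashes.
import hashlib
from pathlib import Path


def fingerprint(path: Path) -> str:
    """SHA-256 of a dataset file, streamed so large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_datasets(manifest: dict) -> list:
    """Return the paths whose current hash no longer matches the manifest."""
    return [path for path, expected in manifest.items()
            if fingerprint(Path(path)) != expected]
```

Any non‑empty return value should fail the training job before a tampered file reaches the model.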

Azeem Aleem, Global Executive Director, Cyber Resilience Services at CPX

Azeem Aleem, Global Executive Director, Cyber Resilience Services at CPX, explains that AI significantly enhances detection and response by filtering the “white noise” that overwhelms analysts. AI correlates signals faster and maps adversary tactics more effectively, enabling earlier detection and quicker response. This reduces dwell time and strengthens resilience. Aleem warns that AI gives attackers speed, enabling faster and more deceptive lateral movement. Organisations must think like hackers to identify potential attack paths in their own environments. Defenders can leverage AI to detect behavioural patterns and classify adversary activity early, creating balance against emerging AI‑enabled threats. Securing AI models and pipelines, he adds, requires secure‑by‑design principles, strong code checking, governance, controlled access, and protection of training data integrity. Ensuring models behave safely in production and embedding continuous monitoring and human oversight into the AI lifecycle are essential for cyber resilience.

Zakeer Zubair, Director of Solutions Engineering for the Middle East, Türkiye, and Africa at F5

Zakeer Zubair, Director of Solutions Engineering for the Middle East, Türkiye, and Africa at F5, argues that security teams must work closely with other departments to embed security into applications, APIs, systems, and processes from the outset. He emphasises the need for an end‑to‑end lifecycle approach to AI runtime security, including the ability to connect and protect AI agents. As enterprises adopt AI across customer experiences and internal workflows, the risk landscape expands to include adversarial manipulation of models, data leakage, unpredictable user interactions, and compliance challenges. Zubair says these risks can be mitigated with guardrails for AI agents and comprehensive API security. He highlights F5’s AI Guardrails and AI Red Teaming capabilities, which help organisations identify vulnerabilities before they reach production and ensure AI systems remain safe, compliant, and resilient throughout their lifecycle.

Kalle Björn, Senior Director, Systems Engineering for ME at Fortinet

Kalle Björn, Senior Director, Systems Engineering for the Middle East at Fortinet, explains that security teams are moving away from manual threat hunting toward automated discovery powered by AI‑driven behavioural analytics. Fortinet correlates telemetry from endpoints, networks, and cloud environments in real time, spotting abnormal behaviour that traditional rules miss. By automating detection, investigation, and response across the Security Fabric, teams can contain threats in seconds rather than weeks. Björn warns that AI‑generated attacks expand the scale and sophistication of phishing and automation, enabling adversaries to craft realistic lures and evade signature‑based controls. Mitigation requires AI‑powered inspection, Zero Trust principles, and behavioural analytics that identify malicious intent beyond static signatures. Securing AI models and pipelines, he adds, requires visibility and control over training data and infrastructure, layered AI‑driven detection, monitoring, and Zero Trust enforcement across networks, cloud, and API layers.

Yara AlHumaidan, Red Teaming Specialist, META region at Group-IB

Yara AlHumaidan, Red Teaming Specialist, META region at Group-IB, says security teams leverage AI‑driven behavioural analytics, anomaly detection, and automated triage to identify threats in real time. Machine learning correlates telemetry across endpoints and networks, prioritising high‑risk alerts and enabling rapid containment before attackers escalate privileges or exfiltrate data. She warns that AI‑generated attacks enable scalable phishing, deepfakes, automated malware mutation, and rapid reconnaissance. Mitigation requires zero‑trust architectures, strong identity controls, adversarial testing, and employee awareness. From an end‑user perspective, she stresses the importance of data governance and privacy‑by‑design, including encryption, role‑based access control, DLP, and clear data retention policies. Sensitive data should be masked or anonymised, and organisations must ensure models do not retrain on confidential inputs without consent. Secure isolated environments, vendor risk assessments, and audit logging are essential to prevent unintended data leakage.
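The masking step AlHumaidan recommends, scrubbing sensitive identifiers before data reaches a model or a third‑party AI service, can be sketched with a small set of detectors. The two patterns below are illustrative only; production DLP tooling uses far broader detectors and context‑aware classification.

```python
# Replace emails and phone numbers with typed placeholders before the text
# is sent to a model, so confidential identifiers never enter training data.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}


def mask_pii(text: str) -> str:
    """Substitute each matched identifier with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blanket redaction) preserve enough structure for the model to stay useful while keeping the underlying values out of logs and training sets.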

Mohammed AlMoneer, Sr. Regional Director, Türkiye, France, Africa & Middle East at Infoblox

Mohammed AlMoneer, Sr. Regional Director, Türkiye, France, Africa & Middle East at Infoblox, explains that the smartest security teams use AI not just to detect threats but also to make decisions. AI aggregates signals across tools, ranks real business risk, and triggers automated playbooks. Success is measured in minutes of dwell time, analyst hours reclaimed, and incidents contained before they escalate. AlMoneer warns that GenAI gives attackers mass personalisation at zero cost, enabling unique lures, deepfakes, and polymorphic malware that break the “patient zero” model. Defenders must adopt pre‑emptive strategies, including continuous exposure management, predictive threat intelligence, and disciplined playbooks that assume every employee and workload can be individually targeted. Securing AI requires treating it as critical infrastructure, with governance for data and model risk, security testing in MLOps pipelines, runtime monitoring, and clear accountability between CISOs, CDOs, and engineering teams.

Essam Seoud, Head of Enterprise Sales for the Middle East, Türkiye and Africa at Kaspersky

Essam Seoud, Head of Enterprise Sales for the Middle East, Türkiye and Africa at Kaspersky, says AI‑driven systems reduce dwell time by automating triage and initial response steps. Automation playbooks can be triggered instantly, while the system suggests actions analysts can accept or reject. By filtering alert noise and minimising routine work, AI allows experts to focus on complex cases and accelerate containment. Seoud warns that AI‑generated attacks enable realistic phishing, deepfake impersonation, voice cloning, and automated exploitation. As organisations adopt more digital and AI‑driven systems, their attack surface expands, creating interconnected vulnerabilities. Mitigation requires automated detection, human expertise, continuous threat intelligence, and strong employee awareness. Securing AI systems, he adds, requires protecting data and models throughout their lifecycle, enforcing access controls, validating data sources, encrypting sensitive data, and monitoring for tampering or anomalies.

Raoul Van Engelshoven, Managing Director for the Middle East, Kyndryl

Raoul Van Engelshoven, Managing Director for the Middle East, Kyndryl, explains that AI accelerates detection and response by analysing large volumes of telemetry across devices, identities, and applications. Generative AI assists analysts by collecting and sorting data and performing initial incident analysis, reducing manual workload and speeding investigations. As capabilities mature, AI enables proactive threat hunting and improved visibility across hybrid environments. Van Engelshoven warns that AI‑generated attacks increase the scale and sophistication of phishing, deepfakes, and social engineering, lowering the barrier for attackers. Mitigation requires employee education, formal governance, and zero‑trust architecture. Securing AI systems demands strong governance frameworks, privacy controls, cross‑functional oversight, and continuous verification of users and devices.

Haider Pasha, Chief Security Officer at Palo Alto Networks, EMEA

Haider Pasha, Chief Security Officer at Palo Alto Networks, EMEA, underscores that AI‑powered security has become essential as attackers now exfiltrate data within hours rather than days. He explains that the only effective countermeasure is replacing manual correlation with machine‑speed analysis. By unifying telemetry from endpoints, networks, and cloud environments into a single data lake, organisations can detect multi‑stage attacks autonomously and contain them in real time, shifting the SOC from reactive firefighting to proactive defence. Pasha warns that AI‑generated attacks—highly personalised social engineering, context‑aware phishing, and polymorphic malware—are outpacing traditional security controls. He advocates a multi‑layered strategy that secures the entire AI application lifecycle, from autonomous agents to underlying models. For him, true resilience requires a “code‑to‑cloud‑to‑SOC” approach, where Policy‑as‑Code, Infrastructure‑as‑Code, CI/CD scanning, isolated training environments, and strict data lineage work together to protect AI models, pipelines, and production systems from tampering, poisoning, and emerging AI‑driven threats.

Ezzeldin Hussein, Regional Senior Director, Solution Engineering, SentinelOne

Ezzeldin Hussein, Regional Senior Director, Solution Engineering, SentinelOne, says AI‑based detection and response automates remediation in real time, correlates threats, and analyses behaviour across endpoints, identities, and cloud workloads. This unified intelligence enables autonomous containment and proactive investigation at machine speed. Hussein warns that AI‑generated attacks include deepfakes, automated exploits, and hyper‑personalised phishing. Mitigation requires AI‑based defence, behavioural detection, prompt‑layer protection, and continuous identity verification, supported by human analysts. Securing AI models and pipelines requires runtime monitoring, threat detection, strong access controls, data integrity verification, and securing supply chains, APIs, and prompts.

Tidiane Lo, Vice President, Westcon MEA at Westcon-Comstor

Tidiane Lo, Vice President, Westcon MEA at Westcon-Comstor, explains that AI‑driven analytics correlate signals across networks, endpoints, and cloud environments in real time, spotting threats faster and automating response. Distributors help partners deploy multivendor solutions effectively, enabling customers across the region to cut dwell time to minutes. Lo warns that AI‑powered attacks scale phishing, deepfakes, and malware with alarming speed. Mitigation requires AI‑enabled defences, strong identity controls, and continuous training. Securing AI models and pipelines requires visibility, governance, and trusted architectures, ensuring innovation does not come at the expense of security.