Diego Arrabal, Vice President, Eastern Europe, Middle East and Africa, Check Point Software Technologies, explains that AI is reshaping how organisations operate and make decisions, demanding a new cybersecurity model focused on trust, control, integrity and prevention across increasingly complex, AI‑driven environments.
Artificial intelligence is no longer an extension of digital transformation. It is quietly redefining the conditions under which organisations operate, how decisions are made, how risk is assessed and how trust is maintained.
That shift is already visible. What makes this moment different is not only the speed of adoption, but the depth of integration. AI is no longer a layer applied to optimise efficiency. It is becoming embedded into the core of digital systems, influencing workflows, shaping outcomes and, in some cases, operating with a degree of autonomy.
This evolution is fundamentally changing the role of cybersecurity, often in ways that are only beginning to be fully understood.
In the UAE, that shift is playing out at scale. National and sector-level strategies are increasingly designed with AI at their centre, not as an afterthought. The ambition to position the Dubai International Financial Centre as a global hub for AI-driven financial services reflects a broader direction, one that brings together infrastructure, regulation, talent and governance under a single objective: building an AI-native ecosystem.
This matters for cybersecurity because the nature of what needs to be protected is changing.
For decades, security strategies have focused on safeguarding systems, networks and data. Those priorities remain critical, but they are no longer sufficient on their own. As AI becomes embedded into decision-making processes, the question is no longer just whether systems are secure, but whether their outputs can be trusted.
That distinction is subtle, yet significant.
An AI model that produces inaccurate or manipulated outcomes can introduce risk without triggering traditional security alerts. A system behaving exactly as designed can still lead to unintended consequences if the data it relies on is compromised. In this environment, cybersecurity extends beyond protection. It becomes a question of control, integrity and reliability.
At the same time, AI is accelerating both sides of the equation.
On one hand, it is strengthening defence. Machine learning models can analyse vast volumes of data in real time, identify patterns that would otherwise go unnoticed and reduce response times from hours to seconds. In complex, distributed environments, these capabilities are no longer optional; they are becoming essential.
On the other hand, AI is lowering the barrier to entry for attackers.
Threat actors are already using AI to automate reconnaissance, generate convincing phishing campaigns and develop malicious code at speed. What once required specialised expertise can now be executed with far less effort. The result is not only more attacks, but faster ones, compressing the window between breach and impact.
This compression is forcing a rethink of traditional security models.
Approaches that rely heavily on detection and response assume there is time to act. In AI-driven environments, that assumption does not always hold. When systems operate in real time, reacting after the fact becomes less effective. Security needs to move closer to the point of action, with a stronger emphasis on preventing risks before they materialise rather than responding once they have already taken hold.
In practice, this means detection alone is no longer sufficient. By the time an issue is identified, the impact may already be in motion, making recovery more complex and costly.
Addressing this requires resilience to be built into how systems function from the outset.
It also calls for visibility across increasingly complex and distributed environments, spanning cloud platforms, on-premise infrastructure, SaaS applications and edge systems. Organisations need a clearer understanding of how data flows between these environments, how AI models interact with that data and where vulnerabilities may emerge. Most importantly, it requires defined boundaries around how systems are allowed to behave.
Securing AI requires addressing the full lifecycle, from how employees use AI tools and copilots in daily workflows, to how customer-facing AI applications and autonomous agents are protected at runtime, to the GPU infrastructure, training pipelines and inference APIs that power these systems. Each layer introduces distinct risks that traditional security architectures were never designed to address.
Without those controls, scale itself becomes a risk multiplier.
There is also a human dimension that cannot be overlooked. As AI tools become embedded into everyday workflows, employees are interacting with multiple platforms that process and generate sensitive information, often without full visibility into how that data is used or stored. Governance, in this context, is not a constraint on productivity or innovation. It is what allows organisations to scale AI adoption while maintaining control over how information is accessed and shared.
The UAE’s approach suggests a clear understanding that innovation and control must develop in parallel. The focus is not only on accelerating AI adoption, but on building the frameworks that allow it to be deployed responsibly.
That balance will become increasingly important.
AI will continue to expand what organisations are capable of achieving. The real challenge is delivering these capabilities consistently, securely and at scale: not as isolated point solutions, but as part of an integrated approach that works across hybrid environments, prevents threats before damage occurs and secures AI transformation while using AI to strengthen defence.
Check Point’s evolution reflects this direction. From its origins in firewall leadership, the company has expanded into broader security management, cloud, workspace, SASE, threat intelligence and exposure management. More recently, through strategic moves such as Cyberint, Veriti, the Wiz partnership and Lakera, it has strengthened its ability to secure AI systems, from employee usage to autonomous agents to the infrastructure that runs them.
The organisations that recognise this early will not only be better protected. They will be better positioned to lead in an economy where intelligence is embedded into everything.