Human‑Centric Defences Critical as AI Scams Grow More Sophisticated

Javvad Malik, Lead CISO Advisor at KnowBe4, warns that AI‑enabled scams are outpacing users through automation and cultural precision. Stronger habits, human‑centric defences, and a culture of verification are now essential.

Scam tactics seem to evolve faster than public awareness. What are the core factors driving this widening gap between attacker innovation and user readiness?  
The economics of crime currently favour the attackers. There are many services that offer automated attacks, such as Phishing-as-a-Service (PhaaS) and Crime-as-a-Service, which provide ready-to-use templates and scripts at the click of a button.

Victims, on the other hand, face significant cognitive overload, juggling multiple channels such as email, WhatsApp, Teams and SMS in a constant state of urgency.

As a result, criminals are able to take advantage of this fatigue and reduced judgement. Couple this with the ease with which digital trust is exploited: real brands, suppliers and government portals are mimicked every day.

How is AI, especially generative AI, changing the sophistication and scale of phishing attacks targeting organisations in the Middle East?  
There are probably three things AI is doing particularly well in the Middle East region: language agility, cultural context and volume. AI can create better Arabic and more localised English, with phrases that seem perfectly natural to locals. It can also build stronger pretexts by referencing local events, major conferences, Ramadan or Eid timings, visa changes, tax changes and so on. Finally, it can scale extremely well across multiple channels – all orchestrated as part of a single campaign.

We’re seeing hyperpersonalised phishing that mimics tone, writing style, and even internal workflows. What makes these AI-driven attacks so difficult for traditional security tools to detect?  
Because the content looks right and the behaviour appears normal, most traditional security tools will assume it is legitimate. Many of these attacks also build on trust: they use compromised accounts or legitimate SaaS tools to pass reputation checks. Ultimately, these are low-malware, high-persuasion attacks. They don’t need payloads; they just rely on a human to approve, pay or log in.

From KnowBe4’s threat intelligence, what emerging phishing or social engineering trends are most concerning for enterprises in this region over the next 12–18 months?  
Some common trends cropping up across the Middle East include MFA bypass and MFA fatigue attacks, along with the use of non-email channels such as WhatsApp, Teams and Slack to deliver fake IT support messages and malicious OAuth consent links. We’re also seeing an increase in executive deepfakes, in both voice and video, used to convince employees to make payments or take other actions.

Many organisations still rely heavily on legacy awareness training. Why is this no longer enough in an era where AI can generate convincing scams in seconds?  
Many times, knowledge is not the issue; the issue is conditions and controls. Training can teach people to spot scams, but modern lures often look legitimate. Awareness and training still matter, but they need to be timely, relevant and adaptive, alongside providing other tools to help people identify and report anything suspicious. To support this, a culture of security needs to be created that empowers employees to make sound security choices.

What role does continuous, behaviour-driven security awareness play in helping employees recognise and resist AI-powered phishing attempts?  
This is critical to helping employees build useful habits. Short, frequent nudges tied to real threats, and simulations that reflect your reality, can turn a disconnected piece of content into something useful and meaningful.

The more we can make security awareness and reporting a natural part of people’s workflows, the more likely it is that the right behaviours will kick in when a real attack occurs.

How should CISOs rethink their defence strategies to counter AI-enabled social engineering, beyond just deploying more technology?  
Beyond adopting more technology, CISOs should rethink their strategies to put the human at the centre and build everything around them. Think of ways to reduce the number of attacks that reach a user’s inbox or Slack channels, deploy relevant training, build a culture that empowers people to become part of your security workforce, and finally have safety nets in place to protect the organisation when a mistake does occur. If one user clicking on one link can cause an organisation to shut down, then the fault does not lie with the user.

If you could give Middle East enterprises one urgent recommendation to strengthen resilience against fast-evolving scam tactics, what would it be?  
Build a supportive culture where it’s OK to question anything. Even if it’s an email from the CEO, staff should be able to question it and verify its legitimacy. Until we empower people to do so, it doesn’t matter what tools we deploy.

For this, ask yourself three things:

  • Is this communication expected?
  • Does it trigger an emotional response (fear, greed, anger, etc.)?
  • Is there a tight timeline or secrecy attached to it?

If the answer to any of these is yes, then double-check.