Attackers vs Defenders: Who Will Come Out On Top In The AI Battle

As artificial intelligence technology advances, Steve Foster, Head of Solutions Engineering, MEA at Netskope, discusses the impact of AI on cybersecurity and who will ultimately win the battle: attackers or defenders.

As artificial intelligence (AI) advances, there is a lot of chatter on LinkedIn and other online media about the advantages it may bring to threat actors or to security defence teams. In reality, while AI could be a powerful tool for cybercriminals, allowing them to automate attacks and evade detection more effectively, it could equally be a powerful tool for security professionals, allowing them to detect and respond to threats more quickly and efficiently.

These polarising predictions raise an obvious question: might the impact of AI advances in cybersecurity simply balance out? Could every advance made by the bad guys be met with equal progress from the good guys, all supported by the same tools? Of course, this balancing act only works as long as everyone keeps up with the competition; as cybercriminals become more adept at using AI, security professionals will have to ensure they are using equally advanced tools and techniques to defend against these attacks.

So, in the style of a boxing match, let's take a look at who is in the red corner and who is in the blue. Exactly how might cybercriminals and security professionals make use of AI? And who will win?

In the red corner: The Cybercriminals

  1. Using AI to identify targets, scanning the internet for vulnerable systems.
  2. Programming AI bots to mimic human behaviour in order to evade detection by security systems.
  3. Generating highly targeted phishing emails with AI, perhaps trained on multiple data sets acquired on the Dark Web, so that they include credible details that lure the target and build trust. With the public becoming more used to interacting with AI chatbots for customer service, impersonating these chatbots could become a useful social engineering tool for malicious actors.
  4. Creating ever more sophisticated malware: using AI to find exploitable patterns in security systems and build malware specifically designed to evade detection, or designing AI-powered evolution into malware so that malicious programs adapt over time, making them more difficult to detect and remove.

In the blue corner: The Security Professionals

  1. Analysing vast amounts of data from multiple sources to identify and track potential threats. Threat intelligence systems can also learn from past incidents, allowing them to adapt and improve over time. Much of this AI-driven intelligence gathering is expected to be done by security vendors and made available to customers and community members.
  2. Identifying risky behavioural patterns and offering just-in-time training to the workforce before an incident occurs, guiding employees to make better decisions for data protection and system security.
  3. Triaging security incidents and prioritising them by risk level, using AI recommendations to focus effort on the most critical issues (see the scoring sketch after this list).
  4. Detecting patterns that indicate a potential security incident, then automatically triggering a response and alerting security teams, an approach NATO has recently demonstrated (see the anomaly-detection sketch after this list).
  5. Automating incident investigation, helping to identify the root cause of an incident and notify relevant parties.
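To make the defenders' side more concrete, here is a minimal Python sketch of the kind of anomaly detection described in points 1 and 4. Everything in it is an illustrative assumption: the session features, the alert wording and the choice of scikit-learn's IsolationForest stand in for whatever models a real security platform would run at far greater scale.

```python
# A minimal sketch of AI-assisted anomaly detection on security telemetry.
# Features and thresholds are illustrative assumptions, not a description
# of any specific vendor's product.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins_per_hour, bytes_uploaded_mb,
# distinct_destinations, failed_auth_count]
baseline = np.array([
    [3, 12.0, 4, 0],
    [5, 20.0, 6, 1],
    [4, 15.0, 5, 0],
    [2, 8.0, 3, 0],
    [6, 25.0, 7, 1],
])

# Learn a baseline of "normal" activity; contamination is an assumed tuning knob.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

def check_session(features):
    """Flag a session for the security team if it looks anomalous."""
    score = model.decision_function([features])[0]
    if model.predict([features])[0] == -1:
        print(f"ALERT: anomalous session (score={score:.3f}) -> notify SOC")
    else:
        print(f"OK: session within normal range (score={score:.3f})")

# A burst of uploads to many destinations with repeated auth failures
check_session([40, 900.0, 60, 12])
```

In practice a vendor would train on far richer telemetry, but the workflow is the same: learn a baseline of normal behaviour, score new activity against it and alert the security operations centre on outliers.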
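Similarly, the triage idea in point 3 can be reduced to a toy example. The fields and weights below are invented for illustration; a production system would learn its scoring from incident history rather than hard-coding it.

```python
# A toy sketch of AI-style triage: rank open incidents by a risk score so
# analysts see the most critical first. Fields and weights are illustrative
# assumptions, not a real product's scoring model.
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    severity: int      # 1 (low) .. 5 (critical), from the detection engine
    asset_value: int   # 1 .. 5, business importance of the affected system
    confidence: float  # 0 .. 1, model confidence that this is a true positive

def risk_score(inc: Incident) -> float:
    # Weighted blend: severity and asset value dominate, scaled by confidence.
    return (0.6 * inc.severity + 0.4 * inc.asset_value) * inc.confidence

queue = [
    Incident("Phishing click, finance laptop", severity=3, asset_value=4, confidence=0.9),
    Incident("Port scan from internet", severity=2, asset_value=2, confidence=0.7),
    Incident("Possible ransomware on file server", severity=5, asset_value=5, confidence=0.6),
]

# Highest-risk incidents first, so analysts spend their time where it matters.
for inc in sorted(queue, key=risk_score, reverse=True):
    print(f"{risk_score(inc):5.2f}  {inc.name}")
```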

So, the big question remains: who will win this title bout? Unfortunately, it's too early in the lifecycle for us to be sure. AI has the potential to revolutionise cybersecurity, but it won't remove the need for a clear architecture and strategy. Machines aren't taking over human jobs just yet. Over the coming months and years, it will be important to understand that AI is not a cure-all or a standalone solution, but a complementary tool to be used alongside other security measures. Just like a human security team, AI requires continuous monitoring, evaluation and tuning to ensure it performs as expected and to address any bias or inaccuracies in the data. And of course, there are plenty of ethical considerations to handle too.