AI: a critical component in information security

AI has become a critical component in information security as it can swiftly evaluate millions of events and identify a variety of risks. Sohrob Kazerounian, senior data scientist at Vectra AI, elaborates to Security MEA on AI in cybersecurity.

How can AI distinguish between “good” and “bad” behavior and act accordingly?
The process of training an artificial intelligence to distinguish between “good” and “bad” behavior is, interestingly, not totally dissimilar from training a human intelligence — what normal people might refer to as a child — to do the same.

The process starts with a set of examples labeled “good” and “bad”. Both child and AI begin by seeing an example and making a random guess at the type of behavior they are observing. When they make a mistake, they get feedback about the error and update their internal models so that they are less likely to make similar mistakes on future examples. What is critical, however, and what makes training somewhat of an art for both child and AI, is that the examples used in training must be representative of the types of examples they are likely to see in real-world scenarios. If we select our training examples carelessly, we may send child and AI into the world with biased, potentially catastrophic models of the behaviors they were meant to learn to identify. Because the behaviors are complex enough that we cannot simply enumerate a set of rules to distinguish good from bad, we ideally strive to select training examples from which learning can extract a set of invariant features: features that allow the identification of good and bad behavior in the wild, even in cases the child or AI has never observed before.
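To make the supervised version of this loop concrete, here is a minimal Python sketch. The two features, the labels and the learning rate are all invented for illustration: a perceptron guesses a label for each example, receives feedback when it is wrong, and nudges its internal weights accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: each row describes a behavior with two features
# (say, bytes transferred and failed-login count; purely illustrative),
# labeled 0 ("good") or 1 ("bad"). Real labels would come from analysts.
X = rng.normal(loc=[[0.0, 0.0]] * 50 + [[3.0, 3.0]] * 50, scale=1.0)
y = np.array([0] * 50 + [1] * 50)

# A perceptron: start from random weights (random guesses), then nudge
# the weights whenever a prediction is wrong (feedback about the error).
w = rng.normal(size=2)
b = 0.0
for epoch in range(20):
    for xi, yi in zip(X, y):
        guess = int(w @ xi + b > 0)
        error = yi - guess           # 0 if correct, +1/-1 if mistaken
        w += 0.1 * error * xi        # update the internal model
        b += 0.1 * error

# A previously unseen example: the model generalizes only if the
# training examples were representative of behavior in the wild.
print(bool(w @ np.array([2.5, 2.8]) + b > 0))  # expected: True ("bad")
```

A real system would use far richer features and models, but the guess-feedback-update loop is the same.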

In another set of learning scenarios, AI systems may not have access to any feedback from a teacher whatsoever; that is, they are never given a set of examples with explicit labels indicating good or bad behavior. In these cases, the AI system might autonomously compare what it is currently observing against the full history of what it has seen, in order to judge how normal the new observation is. Oftentimes, when something stands out as abnormal, it may not necessarily indicate “bad” behavior, but we might ask the AI system to flag it as unusual and subsequently draw on expert knowledge from, for example, security professionals to determine the actual “badness” of the observation.
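A minimal sketch of that unsupervised setup, again with invented numbers: the model summarizes an unlabeled history of some metric and flags new observations that sit far from everything seen before, leaving the judgment of actual badness to an analyst.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled history of some observed metric (say, DNS queries per minute
# from one host; illustrative only). No "good"/"bad" labels exist here.
history = rng.normal(loc=100.0, scale=10.0, size=5000)
mean, std = history.mean(), history.std()

def flag_if_unusual(observation: float, threshold: float = 4.0) -> bool:
    """Flag observations far from everything seen so far.

    Unusual does not mean malicious: flagged events are handed to a
    security analyst, who supplies the judgment the model lacks.
    """
    return abs(observation - mean) / std > threshold

print(flag_if_unusual(103.0))  # typical observation  -> False
print(flag_if_unusual(400.0))  # abnormal observation -> True, escalate
```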

Can AI handle cyber threats with zero human intervention? Elaborate.
A proper answer to the question requires a deeper examination of what we mean by “handle” and by “zero human intervention”. On the latter, one could argue that at some level, a human has helped to specify, design, build and train the AI system in question. While the AI may subsequently behave autonomously, all of its actions are in some way a reflection of the design choices made by its creators. The training data used, the particulars of the model that went into the system and, most importantly, the objective function that guides the AI system’s learning process all fundamentally reflect choices on the part of the person deploying the AI.

Nevertheless, when it comes to dealing with cyber threats, we might further clarify what we mean by “handle”. It is certainly the case that AI systems can autonomously monitor and detect cyber threats in various types of data (whether network traffic, cloud logs, endpoint data, etc.). “Handling” them, however, is at the moment a far more tenuous claim. While AI systems can monitor things far beyond the scope of human capability (e.g., an AI system can ingest massive amounts of encrypted network traffic as it flows in and out of a network and detect underlying malicious behaviors), their ability to respond to those detections is far more limited than a human operator’s. AI systems are simply not yet intelligent enough to reason about the options they might have in responding to an ongoing threat, much less to assess the full spectrum of technological and financial consequences of any such action. It is likely to be some time before AI systems rival humans in true reasoning capability and, consequently, some time before they respond to cyber threats as intelligently.
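In practice this tends to translate into a human-in-the-loop design. Here is a hypothetical sketch of that division of labor (the names and the escalation threshold are assumptions, not any particular product's API): the model detects and escalates, while response decisions stay with an analyst.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Detection:
    host: str
    behavior: str
    confidence: float

# Detections go into a triage queue for a human analyst. The model never
# selects a response (isolate the host, disable an account, ...) on its
# own, since it cannot weigh the consequences of those actions.
triage_queue: "Queue[Detection]" = Queue()

def on_model_detection(detection: Detection) -> None:
    """Autonomous detection, human-mediated response."""
    if detection.confidence >= 0.5:  # assumed escalation threshold
        triage_queue.put(detection)

on_model_detection(Detection("host-17", "suspected C2 beaconing", 0.92))
print(triage_queue.get())  # an analyst picks this up and decides what to do
```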

Why does AI security matter?
Both the use of AI in security and the security of AI are increasingly critical areas in our world today. Firstly, the sheer amount of our personal, social and economic lives exposed to potential attackers is greater than ever before in human history. Whatever we might think of the pre-internet era, at the very least, someone who wanted to rob you of your things had to get off the couch. Because of the quantity of information we have exposed to adversaries, and the impossibility of protecting it in any manual fashion, the use of AI in security is not simply recommended; it is an absolute necessity.

Interestingly, a more recent trend is that AI systems increasingly mediate how we interact with the world. From autonomous cars to automated AI agents answering phone calls, the very technologies that made it possible to detect cyber threats in massive amounts of encrypted traffic are themselves becoming attack vectors. As such, it is likelier than ever that attackers will find utility in attacking such systems. It may be tempting to pay ransom to someone who has encrypted all the files on your computer and refuses to give back access. It will only be that much more tempting to pay the ransom when an attacker has taken control of the AI system driving your autonomous vehicle.

What are the potential disadvantages of AI in security?
The disadvantages of AI systems in security are not fundamentally different from the disadvantages of humans in security. AI systems can sometimes be opaque, and the more complex they are, the harder it can be to understand why they make the decisions that they do. That said, anyone who has dealt with security professionals likely knows that humans can be quite opaque as well, and may not be able to explain their intuitions any more sensibly than an AI system.

What ultimately counts is realizing that AI in security is a tool. As with any tool, you should do your best to understand how it works, what its limitations are and, ultimately, how to maximize its utility to you and your organization.