The winter of AI?

Raffael Marty, Vice President of Research and Intelligence at Forcepoint, says cybersecurity AI in the purest sense is nonexistent and predicts it will not develop in 2019.

In addition to the myriad constantly evolving threats in today’s landscape, organizations are hampered by an ongoing skills shortage: analysts predict 3.5 million unfilled cybersecurity jobs by 2021. In an attempt to fill the void, organizations have turned to the promise of big data, artificial intelligence (AI), and machine learning.

And why not? In other industries, these technologies hold enormous potential. In healthcare, AI opens the door to more accurate diagnoses and less invasive procedures. In marketing, AI enables a better understanding of customer buying trends and improved decision making. In transportation, autonomous vehicles promise a big leap in consumer convenience and safety; revenue from automotive AI is expected to grow from $404 million in 2016 to $14 billion by 2025.

The buzz around cybersecurity AI is palpable. Over the past two years, the promise of machine learning and AI has enthralled marketers and media alike, with many falling victim to misconceptions about features and muddy product differentiation. In some cases, AI start-ups conceal just how much human intervention their product offerings involve. In others, the incentive to offer machine learning-based products is simply too compelling to ignore, if for no other reason than to check a box for an intrigued customer base.

Today, cybersecurity AI in the purest sense is nonexistent, and we predict it will not develop in 2019. While AI is about reproducing cognition, today’s solutions are more accurately described as machine learning, which requires humans to supply new training datasets and expert knowledge. These systems do increase analyst efficiency, but they still depend on human inputs, and high-quality inputs at that: if a machine is fed poor data, its results will be equally poor. Machines also need significant user feedback to fine-tune their monitoring; without it, analysts cannot extrapolate new conclusions from the machine’s output.

On the other hand, machine learning provides clear advantages in outlier detection, much to the benefit of security analytics and SOC operations. Unlike humans, machines can handle billions of security events in a single day, building a clear picture of a system’s “baseline” of normal activity and flagging anything unusual for human review. Analysts can then pinpoint threats sooner through correlation, pattern matching, and anomaly detection. While it may take a SOC analyst several hours to triage a single security alert, a machine can do it in seconds and keep working long after business hours.
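To make that baseline-and-flag workflow concrete, here is a minimal sketch using scikit-learn’s IsolationForest; the event features (transfer size, login hour) and the numbers are illustrative assumptions, not details from the report.

```python
# Minimal sketch of baselining security events and flagging outliers
# for human review, using scikit-learn's IsolationForest.
# Feature choices and numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" activity: [transfer size in KB, login hour]
baseline = np.column_stack([
    rng.normal(500, 100, 10_000),  # typical transfer sizes
    rng.normal(13, 2, 10_000),     # logins cluster around midday
])

# Learn what "normal" looks like from historical events.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Two new events: one ordinary, one large transfer at 3 a.m.
events = np.array([[520.0, 14.0], [9_000.0, 3.0]])
for event, verdict in zip(events, model.predict(events)):
    if verdict == -1:  # -1 = outlier, +1 = inlier
        print(f"flag for analyst review: {event}")
```

The point is not the particular model but the division of labor: the machine scores a huge stream of events against the learned baseline, and only the outliers reach a human.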

However, organizations are relying too heavily on these technologies without understanding the risks involved. Algorithms can miss attacks if the training data has not been thoroughly scrubbed of anomalous data points and of the bias introduced by the environment in which it was collected. In addition, some algorithms are so complex that analysts cannot determine what is driving a specific set of anomalies.
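A small, hypothetical illustration of that first risk: if attack-sized events are left in the training data, the learned notion of “normal” absorbs them, and similar attacks later pass unflagged. The simple percentile baseline below stands in for whatever model is being trained.

```python
# Sketch of the unscrubbed-training-data risk: attack-sized events left
# in the baseline shift the learned threshold until similar attacks pass
# as normal. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
benign = rng.normal(500, 100, 10_000)   # benign transfer sizes (KB)
attacks = rng.normal(5_000, 200, 500)   # exfiltration-sized transfers

def cutoff(training_data):
    # Flag anything above the 99th percentile of the training data.
    return np.percentile(training_data, 99)

clean = cutoff(benign)                             # ~730 KB
dirty = cutoff(np.concatenate([benign, attacks]))  # pulled above 5,000 KB

probe = 5_000.0  # an attack-like event seen after training
print(f"scrubbed baseline:   flagged={probe > clean}")
print(f"unscrubbed baseline: flagged={probe > dirty}")
```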

Aside from the technology itself, investment is another troublesome area for cybersecurity AI. Venture capitalists seeding AI firms expect a timely return on investment, but the AI bubble has many experts worried. Michael Wooldridge, head of Computer Science at the University of Oxford, has expressed concern that “charlatans and snake-oil salesmen” exaggerate AI’s progress to date. Researchers at Stanford University launched the AI Index, an open, not-for-profit project that tracks activity in AI; its 2017 report states that even AI experts have a hard time understanding and tracking progress across the field.

A slowdown in funding for AI research is imminent, reminiscent of the “AI Winter” of 1969, when Congress cut funding as results lagged behind lofty expectations. Attacker tactics, however, are not bound by investment cycles, allowing AI to continue advancing as a hacker’s tool for spotlighting security gaps and stealing valuable data.

The gold standard in hacking efficiency, weaponized AI offers attackers unparalleled insight into what, when, and where to strike. In one experiment, AI-generated phishing tweets achieved a substantially better conversion rate than those written by humans. Artificial attackers are formidable opponents, and the arms race around AI and machine learning will continue to build.

(Courtesy: 2019 Forcepoint Cybersecurity Predictions Report)