Growing fears of AI use by criminals

Cybersecurity expert Eng. Samer Omar, CEO of VirtuPort, said that in an increasingly digitized world, traditional cyber defense methods are no longer adequate to counter current cyber threats. He explained, "The increased likelihood of artificial intelligence being used by adversaries has pushed companies to continue to implement methods of detection and deception in an effort to provide counter-intelligence."


In a recent article, Martin Giles argues that AI for cybersecurity is a hot new thing, and a dangerous gamble: machine learning and artificial intelligence can help guard against cyberattacks, but hackers can foil security algorithms by targeting the data they train on and the warning flags they look for.

The cybersecurity expert added that there has been a steady increase in sales of cybersecurity solutions that leverage machine learning and artificial intelligence, enabling them to instantly detect malicious behavior on the network, respond quickly to incidents and reduce the impact of a breach.

He stressed that artificial intelligence will remain a key capability in the field of cybersecurity for years to come, especially with the increased adoption of IoT, cloud computing, digital transformation and Industry 4.0.

Cybersecurity companies rely on machine-learning algorithms to analyze large volumes of data and learn what to monitor on networks, systems and applications. These algorithms must handle different use cases and scenarios that today typically require human intervention in the form of upgrades, patches or configuration changes; the promise of AI is that such corrective action would be taken automatically, based on decisions made by the technology.
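The idea can be illustrated with a minimal sketch: a model learns a statistical baseline of "normal" behavior from historical data, then flags deviations and takes a corrective action without a human in the loop. All names and data below are hypothetical, not any vendor's product or API.

```python
# Illustrative sketch of automated detection and response.
# The baseline, thresholds and "block" action are assumptions for the example.
from statistics import mean, stdev

def train_baseline(samples):
    """Learn normal request-rate statistics from historical data."""
    return mean(samples), stdev(samples)

def automated_response(rate, baseline, threshold=3.0):
    """Return a corrective action when a rate deviates too far from normal."""
    mu, sigma = baseline
    if abs(rate - mu) > threshold * sigma:
        return "block"   # action taken automatically, no human intervention
    return "allow"

# Hypothetical per-minute request counts observed from one host
history = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98]
baseline = train_baseline(history)

print(automated_response(101, baseline))  # within baseline
print(automated_response(500, baseline))  # far outside baseline
```

The example keeps the decision logic trivial on purpose; real products replace the z-score test with learned models, but the detect-then-act loop is the same.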

There’s a danger that cybersecurity companies will overlook ways in which machine-learning algorithms could create a false sense of security. A British study published in February warned that artificial intelligence could be used to scale up cyber-attacks, cause auto accidents or turn commercial drones into threatening weapons.

The study warned that rapid progress in artificial intelligence increases the likelihood of misuse by rogue states, criminals and perpetrators of individual attacks. In Symantec's 2018 cybersecurity predictions, "Cybercriminals will use AI and ML to conduct attacks" was ranked #2.

Many products being rolled out involve "supervised learning," which requires firms to choose and label the data sets that algorithms are trained on, for instance by tagging code that's malware and code that is clean. One risk is that hackers who gain access to a security firm's systems could corrupt the data by switching labels, so that some malware examples are tagged as clean code. The attackers don't even need to tamper with the data: they could instead work out which features of code a model uses to flag malware, then remove those features from their own malicious code so the algorithm doesn't catch it.
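Both attacks described above can be shown against a deliberately tiny toy model. The "classifier" here is just a keyword matcher trained on labeled token lists; the tokens, labels and sample data are all invented for illustration, not drawn from any real malware corpus.

```python
# Toy demonstration of (1) label-flipping poisoning and (2) feature-removal
# evasion against a trivial supervised "malware classifier".
from collections import Counter

def train(samples):
    """Learn which tokens appear more often in malware than in clean code."""
    mal, clean = Counter(), Counter()
    for tokens, label in samples:
        (mal if label == "malware" else clean).update(tokens)
    return {t for t in mal if mal[t] > clean.get(t, 0)}

def classify(tokens, bad_tokens):
    return "malware" if any(t in bad_tokens for t in tokens) else "clean"

# Hypothetical labeled training set
data = [
    (["open", "read"], "clean"),
    (["open", "write"], "clean"),
    (["exec_shell", "read"], "malware"),
    (["exec_shell", "write"], "malware"),
]
model = train(data)
print(classify(["exec_shell"], model))          # caught by the clean model

# Attack 1: switch the malware labels to "clean" before training.
poisoned = [(tokens, "clean") for tokens, _ in data]
print(classify(["exec_shell"], train(poisoned)))  # poisoned model misses it

# Attack 2: no tampering with data; the attacker simply avoids the
# flagged token in their own code, so the honest model never matches.
print(classify(["open", "write_raw"], model))
```

Real systems use far richer features than single tokens, but the failure modes scale up: poisoned labels teach the model the wrong boundary, and knowledge of the features lets an attacker step around it.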