The AI Cybersecurity Arms Race Will Continue in 2021

The good news first:

In the face of overwhelming hacking attempts, phishing emails, a skills shortage, and an ever-expanding attack surface, cybersecurity companies and experts are increasingly turning to AI solutions to defend networks and devices.

Advanced machine learning algorithms are deployed to identify phishing emails, malware attacks, and the kind of suspicious, out-of-the-ordinary network behavior that could signal a cyberattack.
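
These products vary widely, but a minimal sketch of the underlying idea, unsupervised anomaly detection over traffic features, can be illustrated with scikit-learn's IsolationForest. The feature set and numbers below are invented for illustration and do not reflect any particular vendor's model:

```python
# Minimal sketch: flagging out-of-the-ordinary network behavior with an
# unsupervised anomaly detector. Feature choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: [bytes_sent, bytes_received, connections_per_min]
normal = rng.normal(loc=[5_000, 20_000, 12], scale=[1_500, 6_000, 4], size=(1_000, 3))

# A few anomalous flows, e.g. large exfiltration-like transfers
anomalies = np.array([[450_000, 3_000, 2], [300_000, 1_000, 1]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns +1 for inliers and -1 for outliers
for flow in anomalies:
    label = model.predict(flow.reshape(1, -1))[0]
    print(flow, "-> suspicious" if label == -1 else "-> normal")
```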

The bad news is that hackers have access to the same technology.

Just as defenders leverage AI and machine learning to scan and analyze massive amounts of data to identify a phishing attack, hackers and other threat actors can train models of their own on the enormous amount of data at their disposal, especially from previous data breaches and leaks.

While deepfakes tend to garner more attention as the preferred AI-fueled method for hackers and threat actors, the potential attacks have a much broader reach.

As a proof of concept in 2017, researchers at the Stevens Institute of Technology used data from two large-scale data breaches in which millions of passwords had been compromised. By analyzing tens of millions of passwords from a compromised gaming site, their neural network was able to artificially generate hundreds of millions of candidate passwords based on the patterns it identified. When applied to a set of 43 million compromised LinkedIn passwords, it correctly guessed 27 percent of them.

Although this was only an experiment, more powerful programs exist. One program discovered in February 2020 reportedly had the capacity to analyze more than a billion compromised login and password credentials and generate new variations. This represents an evolutionary step beyond credential stuffing (an attack in which passwords stolen from one service are tried against a target's accounts elsewhere): rather than merely replaying stolen credentials, AI can identify the patterns within them and correctly guess passwords that were never leaked at all.
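
The Stevens researchers used a generative adversarial network, but the core idea, learning the statistical patterns of leaked passwords and sampling new candidates from them, can be illustrated with a much simpler model. Below is a toy character-level Markov chain trained on a fabricated leaked list; both the sample passwords and the model are illustrative stand-ins for the far more capable neural approach:

```python
# Toy illustration of pattern-based password guessing: a character-level
# Markov chain trained on a (fabricated) leaked-password list. Real systems
# use neural networks, but the principle is the same.
import random
from collections import defaultdict

leaked = ["password1", "sunshine12", "dragon99", "password123", "sunshine1"]

START, END = "^", "$"
transitions = defaultdict(list)
for pw in leaked:
    chars = [START] + list(pw) + [END]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)  # record each observed character transition

def generate(max_len=16):
    """Sample one candidate password from the learned transitions."""
    out, state = [], START
    while len(out) < max_len:
        state = random.choice(transitions[state])
        if state == END:
            break
        out.append(state)
    return "".join(out)

random.seed(1)
candidates = {generate() for _ in range(1000)}
print(len(candidates), "distinct candidates, e.g.:", sorted(candidates)[:5])
```

Even this crude model produces plausible never-leaked variants; scaled up to billions of real credentials and a neural generator, the same principle yields the attack described above.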

IBM ran a similar experiment with DeepLocker, creating a form of the WannaCry ransomware that could evade detection by sophisticated antivirus and malware-detection tools and ultimately deliver its payload upon recognizing the face of a laptop user through a hijacked camera. This combined two AI-enabled methods of deploying malware: one to recognize the patterns of security software and avoid detection, and the other to scan available images of faces online in order to recognize a specific target.
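
According to IBM's public description, DeepLocker conceals its trigger by encrypting the payload with a key derived from the target attribute itself (such as a face-recognition output), so an analyst inspecting the code sees only ciphertext, never a readable "if this face, then attack" check. The toy sketch below illustrates only that key-derivation concept with a harmless string payload; it is a deliberate simplification, not IBM's code:

```python
# Conceptual sketch of DeepLocker-style trigger concealment: the payload is
# encrypted under a key derived from the trigger attribute, so static analysis
# finds no readable target check. Harmless toy payload; not IBM's code.
import hashlib

def derive_key(attribute: bytes) -> bytes:
    # In DeepLocker the attribute would be a face-recognition result;
    # here it is just a stand-in byte string (an invented placeholder).
    return hashlib.sha256(attribute).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_attribute = b"target-face-embedding"  # stand-in trigger attribute
ciphertext = xor_bytes(b"benign demo payload", derive_key(secret_attribute))

# An analyst sees only `ciphertext`; the payload "unlocks" only when the
# live environment reproduces the exact trigger attribute.
for observed in (b"some-other-face", secret_attribute):
    plaintext = xor_bytes(ciphertext, derive_key(observed))
    print(observed, "->", plaintext if plaintext == b"benign demo payload" else "garbage")
```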

“[T]hese AI tools are publicly available, as are the malware techniques being employed — so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals. In fact, we would not be surprised if this type of attack were already being deployed,” said DeepLocker’s creators.

While it’s uncertain exactly how many threat actors or hackers are actively using AI and machine learning, governments and technology firms alike are taking the threat seriously and actively building defenses. The likely result is an ongoing arms race between cybersecurity firms and hacking groups, one that is all but certain to escalate for years to come.