CyberScout

Will Deepfakes Be a Cyber Threat in 2021?

[Image: deepfake illustration. Source: Getty Images]

Deepfakes have gotten a lot of attention in recent years, but their deployment by cybercriminals or hackers has still been relatively limited. We can all help keep it that way by familiarizing ourselves with threats before they become realities. As with any potential cybercrime, deterrence here will be aided by an awareness of what deepfakes are, how they work, and what they can and can’t do.

The term “deepfake” combines the “deep learning” of artificial intelligence with that watchword of the 2010s: “fake.”

A deepfake can be a digital image, video, or audio file; any digital media asset created or altered with the assistance of artificial intelligence qualifies.

A few examples of deepfakes: a video of Facebook CEO Mark Zuckerberg admitting to various misdeeds, including enabling genocide; an audio clip mimicking popular podcast host Joe Rogan; and, perhaps most startling, software that enables real-time deepfakes of well-known figures, including Steve Jobs, Eminem, Albert Einstein, and the Mona Lisa, on video conferencing platforms.

While doctored videos or photos are sometimes labeled deepfakes, true deepfaked files are typically created using algorithms that build composites of existing footage, effectively “learning” to identify faces and voices and combining them to create new content. A website called “This Person Does Not Exist” demonstrates the potential of this technology by presenting eerily lifelike photos of fictional people, each generated on demand by a neural network trained on thousands of photos of real people.
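Real deepfake systems rely on deep neural networks (generative adversarial networks, in the case of sites like ThisPersonDoesNotExist.com), but the core idea of learning statistical features from many example faces and recombining them into a new one can be illustrated with a much older technique. The sketch below is a loose, simplified analogy using classic “eigenfaces”-style principal component analysis on tiny synthetic data; the array sizes and variable names are illustrative assumptions, not anything used by actual deepfake software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each 64-pixel vector is a tiny face image; build a training
# set of 200 "faces" as variations around a shared mean face.
mean_face = rng.random(64)
variations = rng.normal(scale=0.1, size=(200, 64))
faces = mean_face + variations

# "Learn" the main modes of variation across the dataset via SVD
# (the linear-algebra heart of the eigenfaces technique).
centered = faces - faces.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)

# Synthesize a brand-new, never-photographed "face" by mixing the top
# learned modes with random weights -- an amalgam of the whole dataset.
weights = rng.normal(size=5)
new_face = faces.mean(axis=0) + weights @ components[:5]

print(new_face.shape)  # (64,) -- a new "face" matching the data's statistics
```

The generated vector matches the statistical character of the training faces without duplicating any single one of them, which is the same property that makes neural-network-generated faces look plausible yet belong to no real person.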

[Image: an artificially generated “person.” Source: ThisPersonDoesNotExist.com]

How Big of a Cybersecurity Threat Are Deepfakes?

Deepfakes have the ability to deceive, which makes them a threat. A 2018 deepfaked video of Barack Obama, synced to an audio track created by comedian Jordan Peele, prompted widespread calls, driven by concerns about potential election interference, for technology companies to filter out such content more actively.

“There is a broad attack surface here — not just military and political but also insurance, law enforcement and commerce,” Matt Turek, a program manager at the Defense Advanced Research Projects Agency, told the Financial Times.

These concerns were apparently validated by a 2019 incident in which deepfaked audio was used to scam a CEO out of $243,000. The unnamed UK executive was fooled into wiring money to someone claiming to be the chief executive of his parent company. According to the victim, the caller had convincingly replicated his employer’s German accent and “melody.”

Despite the above examples, the widespread threat posed by deepfakes has yet to materialize. The technology is still used primarily for viral videos and adult content, not the sort of high-tech cyberespionage that has worried computer scientists, security experts, and politicians alike.

One reason deepfakes haven’t been deployed at their full threat potential has to do with how they are generated: at this point in the technology’s evolution, the deep learning and AI algorithms required to produce a convincing deepfake need enormous amounts of sample content.

Barack Obama and Mark Zuckerberg are among the most famous people on the planet, meaning there are hundreds, if not thousands, of hours of video available for creating a deepfake. Politicians and entertainers spend much of their lives in front of cameras and microphones. The average target of a scam or cyberattack has no such catalog of images and sounds, which limits how much an AI program can “learn.”

Another factor limiting the spread of deepfakes: scammers don’t need them. There are plenty of low-tech ways to fool people. A “fake” deepfake video of Nancy Pelosi from 2019 was viewed millions of times and retweeted by President Trump; it was simply a speech the teetotaling Speaker of the House had given earlier, played back at a slower speed. Likewise, the audio track in a widely distributed deepfake of then-President Obama wasn’t compiled by AI but recorded by a skilled impersonator.

Scammers will often cold-call targets pretending to be relatives, supervisors, co-workers, or tech support, with no need for high-tech solutions. Instilling a sense of urgency, combined with a convincing story, is all a scammer needs to get someone to install malware, assist in the commission of wire fraud, or surrender sensitive information.

That doesn’t mean deepfakes are harmless. The barriers facing scammers who want to create convincing digital frauds will inevitably fall. As deepfakes grow in popularity, we can expect new apps that produce faster, cheaper, and more convincing digital fakes.

The best defense against scams or cyberattacks that exploit deepfake technology is knowledge. It is harder to dupe informed people.