In a world where your online identity links to you directly, the prospect of perfect replication is worrying. But that’s exactly what we face with the advent of deepfake technology.
As the technology becomes cheaper and easier to use, what are the dangers of deepfakes? Furthermore, how can you spot a deepfake versus the real deal?
What Is a Deepfake?
A deepfake is the name given to media where a person in the video or image is replaced with someone else’s likeness. The term is a portmanteau of “deep learning” and “fake” and uses machine learning algorithms and artificial intelligence to create realistic-yet-synthetic media.
At its most basic, you might find a face superimposed onto another model. At its rapidly developing worst, deepfake technology stitches unsuspecting victims into fake pornographic videos, fake news, hoaxes, and more.
You can read more about the origins of the technology in our deepfake explainer.
What Are the Dangers of Deepfakes?
Fake images have always existed. Figuring out what is fake and what is not is a common part of life, especially after the rise of digitized media. But the issues deepfake technology creates are different, bringing unparalleled accuracy to fake images and fake videos.
One of the first deepfake videos to hit a wider audience was Jordan Peele impersonating Barack Obama in a video discussing the very issue at hand:
The video appears crude, with a strange voice and grainy artifacts on the cloned face. Nonetheless, it illustrates deepfake technology.
Or have you ever wondered what it would have been like if Will Smith had played Neo in The Matrix instead of Keanu Reeves (I mean, who hasn't?)? Wonder no more:
Neither of these videos is malicious, and each took hundreds of hours of machine learning to compile. But the same technology is available to anyone with enough time to learn it and the computing power to go with it. Initially, the barrier to using deepfake technology was quite high. But as the technology improves and the barrier to entry drops significantly, people are finding negative and harmful uses for deepfakes.
Before we delve into the dark side of deepfakes, here’s Jim Carrey replacing Jack Nicholson in The Shining:
1. Fake Adult Material Featuring Celebrities
One of the major threats from deepfake technology is synthetic adult material, or deepfake porn as it is known. There are tens of thousands of fake adult videos featuring the faces of prominent female celebrities, such as Emma Watson, Natalie Portman, and Taylor Swift.
All use deepfake machine learning algorithms to stitch the celebrity's face onto an adult actress's body, and all attract tens of millions of views across numerous adult content websites.
Yet none of these sites do anything about celebrity deepfakes.
“Until there is a strong reason for them to try to take them down and to filter them, nothing is going to happen,” says Giorgio Patrini, CEO and chief scientist at Sensity, a deepfake detection and analysis firm. “People will still be free to upload this type of material without any consequences to these websites that are viewed by hundreds of millions of people.”
The videos are exploitative and far from victimless, whatever some deepfake creators allege.
2. Fake Adult Material Featuring Regular People
What's worse than synthetic porn featuring celebrities? That's right: fake adult material featuring unsuspecting women. A Sensity study uncovered a deepfake bot on the social messaging app Telegram that had created over 100,000 deepfake nude images. Many of the images are stolen from social media accounts, featuring friends, girlfriends, wives, mothers, and so on.
The bot is a major advancement in deepfake technology, as the image uploader doesn’t need existing knowledge of deepfakes, machine learning, or AI. It is an automated process that requires a single image. Furthermore, the Telegram bot appears to only work with women’s images, and premium subscriptions (more images, removed watermark) are ridiculously cheap.
Like the celebrity deepfakes, the Telegram bot's deepfake images are exploitative, abusive, and immoral. They could easily find their way to the inbox of a husband, partner, family member, colleague, or boss, destroying lives in the process. The potential for blackmail and other forms of extortion is very high, and it ramps up the threat from existing issues, such as revenge porn.
Posting the deepfakes on Telegram creates another issue, too. Telegram is a privacy-focused messaging service that doesn’t interfere with its users too much. It does have a policy of removing porn bots and other bots relating to adult material but has done nothing in this case.
3. Hoax Material
You’ve seen Jordan Peele playing Obama. In that video, he is warning of the dangers of deepfakes. One of the major worries regarding deepfake technology is that someone will create and publish a video so realistic it leads to a tragedy of some form.
At the most extreme end of the scale, people say deepfake video content could trigger a war. But there are other major consequences, too. For example, a deepfake video featuring a major corporation or bank CEO making a damaging statement could trigger a stock market crash. Again, it is extreme. But real people can check and verify a video, whereas global markets react instantly to news, and automated selloffs do happen.
The other thing to consider is volume. As deepfake content becomes increasingly cheap to create, the possibility arises of huge amounts of deepfake content of the same person, all delivering the same fake message in different tones, places, styles, and more.
4. Denying Real Material
As an extension of hoax material, you must consider that deepfakes will become incredibly realistic. So much so that people will begin to question whether a video is real or not, regardless of the content.
If someone commits a crime and the only evidence is video, what is to stop them saying, “It’s a deepfake, it’s false evidence”? Conversely, what about planting deepfake video evidence for someone to find?
5. Fake Thought Leaders and Social Contacts
There have already been several instances involving deepfake content posing as thought leaders. Profiles on LinkedIn and Twitter detail high-ranking roles in strategic organizations, yet these people do not exist and are likely generated using deepfake technology.
That said, this isn’t a deepfake-specific issue. Since the dawn of time, governments, spying networks, and corporations have used fake profiles and personas to gather information, push agendas, and manipulate.
6. Phishing Scams, Social Engineering, and Other Scams
Social engineering is already an issue when it comes to security. People want to trust other people. It is in our nature. But that trust can lead to security breaches, data theft, and more. Social engineering often requires personal contact, whether over the phone, via video call, and so on.
Suppose someone could use deepfake technology to mimic a director to gain access to security codes or other sensitive information. In that case, it could lead to a deluge of deepfake scams.
How to Spot and Detect Deepfakes
With deepfakes increasing in quality, figuring out how to spot a deepfake is important. In the early days, there were some simple tells: blurry images, video corruption and artifacts, and other imperfections. However, these telltale issues are decreasing while the cost of using the technology falls rapidly.
There is no perfect way to detect deepfake content, but here are four handy tips:
- Details. As good as deepfake technology is becoming, there are still things it struggles with, particularly fine details within videos: hair movement, eye movement, cheek structure and movement during speech, and unnatural facial expressions. Eye movement is a big tell. Although deepfakes can now blink effectively (in the early days, this was a major tell), eye movement is still an issue.
- Emotion. Tying into detail is emotion. If someone is making a strong statement, their face will display a range of emotions as they deliver the details. Deepfakes cannot deliver the same depth of emotion as a real person.
- Inconsistency. Video quality is at an all-time high. The smartphone in your pocket can record and transmit in 4K. If a political leader is making a statement, it is in front of a room full of top-tier recording equipment. Therefore, poor recording quality, both visual and audible, is a notable inconsistency.
- Source. Is the video appearing on a verified platform? Social media platforms use verification to ensure globally recognizable people are not mimicked. Sure, there are issues with the systems. But checking where a particularly egregious video is streaming from or being hosted will help you figure out if it's real or not. You could also try performing a reverse image search to reveal other locations where the image is found on the internet.
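The reverse image search mentioned above rests on near-duplicate matching: images that look alike produce similar "fingerprints". Here is a minimal sketch of that idea using a simple average hash in plain Python. This is an illustration of the general technique, not any search engine's actual algorithm, and it assumes the image has already been downscaled to a small grayscale grid:

```python
def average_hash(pixels):
    """Compute a simple average hash from a 2D grayscale pixel grid.

    `pixels` is assumed to already be downscaled (e.g. to 8x8);
    real implementations resize and grayscale the image first.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's mean.
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming_distance(hash1, hash2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(hash1, hash2))
```

A reused or lightly manipulated frame tends to hash close to its source image, which is why reverse image search can surface the original material a deepfake was built from.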
Tools for Spotting and Preventing Deepfakes
You're not alone in the fight to spot deepfakes. Several major tech companies are developing tools for deepfake detection, while other platforms are taking steps to block deepfakes permanently.
For example, Microsoft's deepfake detection tool, the Microsoft Video Authenticator, will analyze a photo or video within seconds, informing the user of its authenticity. At the same time, Adobe enables you to digitally sign content to protect it from manipulation.
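Adobe's signing system is its own, but the underlying idea of digital content signing can be sketched with Python's standard library: bind the content bytes to a keyed signature so that any tampering is detectable. The key and function names below are illustrative assumptions, not Adobe's actual API:

```python
import hashlib
import hmac

# Hypothetical signing key held privately by the content publisher.
SECRET_KEY = b"publisher-signing-key"

def sign_content(data: bytes) -> str:
    """Produce an HMAC-SHA256 tag binding the content to the key holder."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_content(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time.

    Any change to the signed bytes yields a different tag, so
    verification fails on manipulated copies.
    """
    return hmac.compare_digest(sign_content(data), tag)
```

Even a single altered pixel changes the underlying bytes, so a deepfaked copy of signed footage would fail verification against the publisher's original tag.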
Platforms like Facebook and Twitter have already banned malicious deepfakes (deepfakes like Will Smith in The Matrix are still fair game), while Google is working on a text-to-speech analysis tool for countering fake audio snippets.
If you want to brush up on your fake media detection skills, check out our list of fake detection tests for spotting and learning.
Deepfakes Are Coming—And They’re Getting Better
The truth of the matter is that since deepfakes hit the mainstream in 2018, their primary use has been to abuse women. Whether that is creating fake porn using a celebrity's face or stripping the clothes from someone on social media, it all focuses on exploiting, manipulating, and degrading women around the world.
There is no doubt that a surge of deepfakes lies on the horizon. The rise of such technology poses a danger to the public, yet there is little recourse to stop its march forward.