Since 2017, deepfakes have circulated on the web as an uncanny form of multimedia. And with celebrities swapping faces or former presidents evaluating video games, there’s no shortage of examples out there.
But, as with anything in cyberspace, deepfakes also pose a threat not just to cybersecurity, but to the very foundations of trust and democracy. And, if left unchecked, recent advancements and acceleration in AI technology could even have us doubting our very eyes, especially as the world heads into what’ll no doubt be a politically charged new year.
What are deepfakes?
Deepfakes are a type of synthetic media that has been wholly created or partially edited with artificial intelligence and machine learning. Often, they show a person doing or saying something they never actually did or said. But deepfakes aren’t limited to people; they can be made from or about almost anything. And they’re particularly concerning because they’re often highly realistic, highly believable multimedia.
For the purposes of this article, we’ll focus on tactics and examples involving people.
Recent advancements in AI have made deepfakes easier to develop, with many deep learning tools openly available on the market today. This widespread availability and ease of use make the issue even more pervasive, but according to the US Department of Homeland Security, it’s not the technology that’s dangerous; it’s people’s “natural inclination to believe what they see.”
Let’s take a step back. What is deep learning?
Deep learning is a machine learning (ML) technique built on neural networks with many stacked layers. As training data passes through each successive layer, the model extracts progressively more abstract features, which is what lets it “think” at deeper and deeper levels and pull out information it couldn’t before. This allows systems to automatically cluster otherwise unstructured data and make predictions with incredible accuracy.
The more robust and complete that training, the more effective the model becomes, and the more realistic the deepfakes it generates will be.
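To make the “deeper and deeper levels” idea concrete, here’s a minimal, illustrative sketch of a multi-layer network in Python with PyTorch. The layer sizes and the random input are arbitrary placeholders, not a real deepfake model; the point is simply that each stacked layer re-represents the data at a higher level of abstraction.

```python
import torch
import torch.nn as nn

# A toy "deep" network: each layer re-represents its input at a higher level
# of abstraction, from raw values to features to a final prediction.
model = nn.Sequential(
    nn.Linear(784, 256),  # layer 1: raw inputs -> low-level features
    nn.ReLU(),
    nn.Linear(256, 64),   # layer 2: low-level -> higher-level features
    nn.ReLU(),
    nn.Linear(64, 10),    # layer 3: features -> scores for 10 classes
)

x = torch.randn(32, 784)  # a batch of 32 random stand-in "images"
scores = model(x)         # one forward pass through every layer
print(scores.shape)       # torch.Size([32, 10])
```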
What forms do deepfakes commonly take?
Deepfakes take the form of synthesized or altered text, images, audio recordings, or videos, which can be used in combination with one another to conduct sophisticated social engineering attacks and build misinformation campaigns.
Some tactics include face swapping, lip synching, and full puppeteering of a video’s subjects.
Face swapping
Face swapping pre-dates AI/ML technology; people were using Photoshop to alter images as early as the 1990s. With AI, it’s even simpler, as someone can train deep neural networks to encode the face of Person A onto Person B.
You’ve likely seen some more benign examples of these online, like celebrities transforming into one another.
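For a sense of how that Person A/Person B encoding works under the hood, here’s a heavily simplified sketch of the classic deepfake autoencoder recipe: one shared encoder learns identity-agnostic features, while a separate decoder is trained per person. All dimensions and the random “frame” below are placeholders; production face-swap models use convolutional networks on aligned face crops.

```python
import torch
import torch.nn as nn

# Toy version of the deepfake autoencoder recipe. Sizes are placeholders.
LATENT = 128
PIXELS = 64 * 64 * 3  # a flattened 64x64 RGB face crop

# One encoder shared by both identities, plus one decoder per person.
shared_encoder = nn.Sequential(nn.Linear(PIXELS, 512), nn.ReLU(),
                               nn.Linear(512, LATENT))
decoder_a = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                          nn.Linear(512, PIXELS))  # reconstructs Person A
decoder_b = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                          nn.Linear(512, PIXELS))  # reconstructs Person B

# During training (not shown), encoder+decoder_a reconstruct A's faces and
# encoder+decoder_b reconstruct B's, so the shared encoder is pushed to keep
# only identity-agnostic features: pose, expression, lighting.

# The swap itself: encode a frame of Person A, then decode it as Person B.
frame_of_a = torch.rand(1, PIXELS)     # stand-in for a real video frame
features = shared_encoder(frame_of_a)  # A's pose and expression
swapped = decoder_b(features)          # rendered with B's face
```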
Lip synching
Lip synching takes the audio from one file and maps it onto the mouth of another video’s subject, making it look like the video’s subject said something they never did.
In this situation, the creator of the deepfake can literally put words in someone else’s mouth using recurrent neural networks (RNNs).
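Conceptually, the RNN’s job is to map a stream of audio features to a matching stream of mouth shapes, one per video frame. The sketch below illustrates that mapping with placeholder feature sizes; real lip-synching pipelines also render and composite the predicted mouth shapes back onto the target video.

```python
import torch
import torch.nn as nn

# Placeholder sizes throughout; this only shows the audio-to-mouth mapping.
AUDIO_FEATS = 28   # e.g., spectral features extracted per audio frame
MOUTH_POINTS = 20  # (x, y) landmarks around the lips -> 40 outputs

rnn = nn.LSTM(input_size=AUDIO_FEATS, hidden_size=128, batch_first=True)
to_landmarks = nn.Linear(128, MOUTH_POINTS * 2)

audio = torch.randn(1, 100, AUDIO_FEATS)    # 100 frames of replacement speech
hidden_states, _ = rnn(audio)               # temporal context for each frame
mouth_shapes = to_landmarks(hidden_states)  # one predicted mouth pose per frame
print(mouth_shapes.shape)                   # torch.Size([1, 100, 40])
```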
Puppeteering
The puppet technique makes an individual appear to move in ways they never did, whether that’s a facial movement or a repositioning of the entire body. In this type of deepfake, the user would leverage what’s known as Generative Adversarial Networks (GANs). A GAN pits two ML networks against each other: the generator and the adversary.
The generator learns the desired characteristics from training data and creates new examples based on what it learned. The adversarial network looks for flaws, rejecting anything it deems fake. The two networks then go back and forth, creating and refining content until it passes as “real” according to the adversarial network’s training.
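Here’s what that generator-versus-adversary loop looks like as a minimal, runnable sketch, with random data standing in for real training images. It isn’t a production GAN, but it shows the two-step dance described above: the adversary learns to separate real from fake, and the generator learns to fool it.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch with placeholder sizes: the generator invents samples
# from random noise, and the adversary (a.k.a. discriminator) tries to tell
# them apart from real data. Each network's mistakes train the other.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))
adversary = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                          nn.Linear(256, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
a_opt = torch.optim.Adam(adversary.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.rand(32, 784)             # stand-in for a batch of real images
    fake = generator(torch.randn(32, 64))  # the generator's latest attempt

    # Step 1: train the adversary to label real samples 1 and fakes 0.
    a_loss = (loss_fn(adversary(real), torch.ones(32, 1)) +
              loss_fn(adversary(fake.detach()), torch.zeros(32, 1)))
    a_opt.zero_grad()
    a_loss.backward()
    a_opt.step()

    # Step 2: train the generator to make the adversary call its fakes real.
    g_loss = loss_fn(adversary(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```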
Deepfake examples
As you can guess, there are a lot of potential applications for deepfake technology, some harmless, and others, not so much.
Benign deepfakes
Deepfake technology can be used to create stunning visual effects and simulations. Hollywood capitalizes on this technique to bring deceased actors back to our screens (think Paul Walker in the Fast & Furious franchise or Peter Cushing in Rogue One: A Star Wars Story). Hollywood also uses it to de-age actors, like Disney did with Harrison Ford in the latest Indiana Jones installment.
Another relatively harmless example is the “Balenciaga Pope” image that circulated the web in March 2023. The graphic took Twitter by storm when it first surfaced, and many on the platform “took it at face value,” but it was entirely AI-generated using Midjourney.
While just a bit of fun for one Reddit user, the image still raises important questions about what hyper-realistic fabrications could mean for truth in today’s (mis)information age.
Twitter’s raucous reaction to the photo also confirms the DHS quote shared earlier in this blog post. It’s not necessarily about how real the deepfake appears to be; it’s about the fact that people tend to simply believe what they see. And as these examples get more realistic, not to mention easier to make, you can see some of the many ways they could impact political campaigns and business operations.
Deepfake dangers
The very nature of social media already makes it hard to discern fact from fiction. But couple that with deepfake technology, and determining what’s real gets a whole lot harder. And when an adversary combines deepfakes with data stolen during a cyberattack, the result is a perfect storm for misinformation.
In fact, deepfake fraud doubled from 2022 to the first quarter of 2023 alone, yet 71% of users admit to having no knowledge of deepfakes, an alarming statistic considering that many of the world’s countries are headed into an important, not to mention charged, election year.
Political campaign impacts
“With the widespread adoption of generative AI, we are likely to see AI supercharging election interference in 2024. From the creation of convincing deepfakes to an increase of targeted misinformation, the concept of trust, identity, and democracy itself will be under the microscope.”
– Shivajee Samdarshi, Chief Product Officer at Venafi
As the US heads into 2024’s presidential election, many top publications, as well as Venafi’s own experts, have expressed concern about AI being used in what’s predicted to be a coming “tsunami of misinformation.” CNN raised the issue before 2020’s election, too, but accelerated advancements in AI have many observers particularly concerned this year.
Why the worry? One key use of deepfakes is controlling the narrative. One candidate, for instance, could use deepfakes of the other to heighten societal tensions, damage reputations, incite a politician’s followers, or undermine trust in the entire election process. And if you can’t trust the information in an election, democracy, by nature, can’t function.
Political deepfakes in the wild
Inside and outside the US, deepfakes have already impacted politics in the last year.
- Ukraine and Russia. In one fabricated video, Volodymyr Zelenskyy, Ukraine’s president, appears to tell his country to surrender to Russia. On the other side of that coin, fabricated footage showed Putin appearing to declare martial law and weep after a Ukrainian incursion into Russia.
- US political ammunition. In June 2023, Ron DeSantis’ presidential campaign released a video that included computer-generated images of Trump embracing Dr. Anthony Fauci, part of an ongoing back-and-forth between the two candidates over their responses to COVID-19. A month earlier, an altered video of House Speaker Nancy Pelosi appearing to stammer through a news conference also made the rounds on social media.
Business ramifications
“Organizations and their employees may be vulnerable to deepfake tradecraft and techniques, which may include fake online accounts used in social engineering attempts, fraudulent text and voice messages used to avoid technical defenses, faked videos used to spread misinformation, and other techniques.”
– Contextualizing Deepfake Threats to Organizations, NSA, FBI, CISA
Business professionals, especially executives and leadership teams, are increasingly at risk of deepfake-related attacks. In a business setting, deepfakes are commonly used to impersonate members of these teams, often for financial gain or to access critical systems and data.
An adversary, for example, can mount a sophisticated social engineering campaign using several types of synthetic media at once, such as a phishing email, a fabricated voicemail, or even a manipulated video. These combination attacks are hard to detect, even for the most discerning professionals. It’s one thing to receive a typo-riddled phishing email that’s purportedly from your finance leader; it’s another thing entirely to receive a related voicemail, or even a video call, that looks and sounds just like her.
In fact, one threat actor group made off with $35 million in a 2021 attack targeting a United Arab Emirates branch manager with emails designed to look like they came from a US-based lawyer about the acquisition of another company. In addition to the phishing emails, the adversary also used synthetic audio of the lawyer to make the ploy more believable, as it sounded exactly like someone the branch manager was already familiar with.
Other deepfake threats for businesses
The Department of Homeland Security has detailed other potential use cases for deepfakes in a business context, including falsified content for job interviews, customer impersonation, and even corporate sabotage—including but not limited to besmirching a product, a team, or an entire brand.
Increasing reliance on detection and authentication
As deepfakes continue to become a more pervasive issue, social media scrollers must remain skeptical of the media they consume—and be constantly vigilant when spending time online. It’s like when you’re keeping your eyes peeled for those phishing emails: always err on the side of caution, rather than taking everything at face value.
To help with this, big tech companies like Meta, Google, and Microsoft have implemented new requirements for labeling AI-generated content as such. But X (formerly Twitter) stripped back its verification systems last year, leaving public officials susceptible to impersonators. Elon Musk has also removed the teams that fought misinformation and restored the accounts of some previously banned extremists, leaving some to believe the platform will become a breeding ground for even greater fabrications in 2024.
And therein lies the problem. Lies, “fake news,” and sensationalism have become “something of a business model” in recent years, and basic verification systems are no longer sufficient.
Some experts believe that sharing originating sources could help. Others say that cryptographically marking AI-generated media, in a way similar to NFTs, by including digital signatures and leveraging other watermarking practices, will be crucial. However, the US government has yet to provide exact specifications or guidelines for the watermarking requirements mentioned in Biden’s 2023 Executive Order on AI.
This reliance on digital signatures also underscores the importance of secure, reliable cryptographic key management, as your keys play a crucial role in creating and verifying a digital signature.
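As an illustration of the mechanism (not of any particular media-provenance standard), here’s a minimal Python sketch using the cryptography library and Ed25519 keys. The media bytes below are a stand-in; real provenance schemes, such as C2PA content credentials, embed signed metadata alongside the content rather than signing raw files directly.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher signs the exact bytes of a media file with a private key.
private_key = Ed25519PrivateKey.generate()
media_bytes = b"raw bytes of a video file"  # placeholder for real media content
signature = private_key.sign(media_bytes)

# Anyone holding the matching public key can confirm the file is unaltered;
# changing even a single byte after signing invalidates the signature.
public_key = private_key.public_key()
try:
    public_key.verify(signature, media_bytes)
    print("Media is authentic and unmodified.")
except InvalidSignature:
    print("Media was altered after signing, or signed with a different key.")
```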
Expect more regulation in this area going forward
Regulators are taking the issue of deepfakes seriously, criminalizing both the non-consensual use of a person’s likeness and the use of data from deceased individuals, and placing a newfound focus on privacy and transparency.
One such example of transparency is the #TrustedInfo2024 campaign that Minnesota is spearheading to help promote election officials as a trusted source of information. The Minnesota Secretary of State, Steve Simon, plans to meet with county and city officials to build and update a “Fact and Fiction” information page, especially as false claims and content, including deepfakes, begin to emerge.
“We hope for the best but plan for the worst,” says Simon.
Continue to educate yourself on deepfakes and developing scams
As the Pentagon, DARPA, and other organizations continue to work on detection and authentication efforts, getting (and staying) ahead of deepfake scams is a bit like getting accustomed to burgeoning generative AI tools yourself.
To learn how to detect AI-manipulated and AI-generated content, it helps to see existing examples (like those celebrity ones mentioned before), so you can learn what to look for when any suspicious media crosses your news feed.
All that being said, be sure to check out the DHS’s guide to deepfakes to learn what to look for when assessing the legitimacy of video, audio, and text-based synthetic media.
To maintain trust, the principles of identity have never been more important
If a foundation of trusted identity isn’t in place, trust and democracy as we know them today may crumble.
In fact, CNN states that some people “already question the facts around events that unquestionably happened, like the Holocaust, the moon landing, and 9/11, despite video proof. If deepfakes make people believe they can’t trust video, the problems of misinformation and conspiracy theories could get worse. While experts told CNN that deepfake technology is not yet sophisticated enough to fake large-scale historical events or conflicts, they worry that the doubt sown by a single convincing deepfake could alter our trust in audio and video for good.”
That all sounds rather bleak, but by continuing to advance detection and authentication capabilities and educating ourselves on the dangers of deepfakes, we can all help to keep the web a safe, trusted, and reliable place to consume information. And trusted identity lies at the heart of the issue.
To learn more about how you can bolster trust in machine identities and manage them to your advantage, check out this page: https://venafi.com/machine-identity-basics/what-is-machine-identity-management/