Deepfakes and Disinformation
Deepfakes are synthetic media created using digital techniques to convincingly replace the likeness of one person with another. According to iProov’s March 2023 survey, 71% of global respondents don’t know what a deepfake is, and 43% admit they wouldn’t be able to spot one.
In today’s world, anyone with the ability to create deepfakes can disseminate misinformation and influence the masses to behave in ways that advance the fakers’ personal agendas.
Most often, the “victims” of deepfakes are well-known individuals, influential politicians, and leaders of nations.
We have all heard of the type of fraud where someone poses as a relative or friend, claiming to be in an urgent situation and asking for a large sum of money. Most of us have learned to assume the account has been compromised and not to trust such messages.
Now, imagine that your relative sends you a video asking for help. In everyday life, deepfakers could create personalized video clips, showing a relative pleading for a significant amount of money to help them out of an emergency. They could send these videos to unsuspecting victims, deceiving the innocent without arousing suspicion.
Types of Disinformation
The war in Ukraine keeps showcasing how wartime becomes fertile ground for the spread of disinformation: manufactured and misleading news, fabricated content, and social media bots, the whole range of fakes from manipulated images to advanced deepfake videos. We have all heard about the fake video of President Zelenskiy, created by Russian operatives, in which he urged Ukrainian forces to surrender, a message aimed at demoralizing Ukraine’s military and spreading confusion in the international community.
Disinformation manifests in numerous forms. Let’s explore the different types of fakes with concrete examples to better understand the breadth and depth of disinformation campaigns.
- Fabricated Content – often simply called “fake news” – completely false information created to deceive. Since 2014, there has been a significant increase in disinformation spread by Russia to justify the invasion of Ukrainian territories. The campaigns included fake stories about the Ukrainian army committing atrocities against its own citizens, denial of the presence of Russian troops in Crimea, and false accusations of genocide against Russian speakers.
- Manipulated Content – spreading information by altering facts, images, and context. It is widely used in the political arena to sway electoral outcomes. Consider the video of Speaker Nancy Pelosi that circulated in 2019: it was edited to slow down her speech, making her appear intoxicated. The result was public mistrust, even though the video was later debunked.
- Imposter Content – fake accounts or websites pretending to be real organizations or people to share false information.
Imposter content is deeply intertwined with cybercrime, serving as a key vehicle for fraudulent activity in the business world. Its main goals are to deceive, manipulate, steal sensitive information, and reap financial gain. Phishing emails are the most popular, and unfortunately effective, tactic: according to IBM, phishing ranked among the top attack vectors in cybercrime, appearing in 16% of incidents.
We saw a real-world example of this tactic in 2020, when cybercriminals launched a phishing campaign impersonating the World Health Organization amid the COVID-19 pandemic. The emails solicited donations for a fake COVID-19 Response Fund, directing recipients to fraudulent portals designed to steal personal financial information.
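A first line of defense against imposter emails is simply checking whether the sender’s domain belongs to the organization it claims to represent. The sketch below illustrates the idea with hypothetical addresses (the domains are invented for this example, not real WHO infrastructure); real mail filtering relies on far stronger signals such as SPF, DKIM, and DMARC.

```python
# Minimal sketch: flag emails whose sender domain is not on an allowlist.
# The domains below are hypothetical examples for illustration only.
TRUSTED_DOMAINS = {"who.int"}

def is_suspicious(sender: str) -> bool:
    """Return True if the sender's domain is not a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].strip().lower()
    return domain not in TRUSTED_DOMAINS

print(is_suspicious("donate@covid19-response-fund.com"))  # lookalike -> True
print(is_suspicious("info@who.int"))                      # trusted  -> False
```

A check this naive is easy to evade, which is exactly why imposter campaigns pair lookalike domains with urgent, emotionally charged requests.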
- Misleading Headlines – information that is not false but presented in a way that misleads, distorts, or deceives. Such headlines often read like clickbait, aiming to grab our attention. The danger lies in the fact that many people read only the headline and immediately draw conclusions without engaging with the full article.
- Use of Social Media Bots – automated accounts programmed to perform tasks such as posting content, liking, sharing, and following other accounts on social media platforms. Bots are often used to spread propaganda, push one side of a conflict, and manipulate public opinion.
During the Russian invasion of Ukraine, trolls hired by the Internet Research Agency (IRA) turned to TikTok, spreading false stories about the war to make people doubt or question what was happening.
Twitter has reported removing at least 75,000 suspected fake accounts linked to online Russian bots for spreading disinformation about Ukraine.
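Platforms and researchers often pre-filter likely bots with simple behavioral heuristics before applying heavier analysis. The sketch below scores an account by posting rate and follower/following balance; the thresholds are illustrative assumptions for this article, not values used by Twitter or any other platform.

```python
def bot_score(posts_per_day: float, followers: int, following: int) -> float:
    """Crude bot-likelihood score in [0, 1]; thresholds are illustrative."""
    score = 0.0
    if posts_per_day > 50:                              # humans rarely sustain this rate
        score += 0.5
    if following > 0 and followers / following < 0.01:  # mass-follows, few followers
        score += 0.3
    if followers == 0:                                  # throwaway account
        score += 0.2
    return min(score, 1.0)

print(bot_score(posts_per_day=120, followers=3, following=5000))  # scores high
print(bot_score(posts_per_day=2, followers=300, following=280))   # scores 0.0
```

Real bot-detection systems combine dozens of such signals with content analysis and network structure; a single heuristic like this only narrows the field.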
How deepfakes are created – Generative Adversarial Networks
The main concept of the technology is facial recognition. If you’ve ever “tried on” masks on TikTok or Instagram, you are already familiar with features that replace facial features or apply filters to alter your facial appearance.
Deepfakes are similar but far more realistic. Fake videos are created with deep learning, a machine learning technique built on neural networks. The dominant approach is the Generative Adversarial Network (GAN), in which two networks are trained against each other: a generator produces fake images while a discriminator tries to distinguish them from real ones, and the competition steadily pushes the generator’s output toward photorealism. Deepfake technology employs such deep learning algorithms to simulate the actions and mannerisms of a person.
A computer program analyzes numerous photos of an individual, gathers data, and reproduces the image. Specialists then overlay this created photo or digital “mask” onto a video. Additionally, the program can synchronize voice, gestures, and movements. As a result, viewers observe a character that closely resembles a real person.
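To make the adversarial idea concrete, here is a minimal toy sketch (an illustration only, not a production deepfake pipeline): a one-parameter “generator” shifts random noise and learns to mimic samples drawn from a Gaussian with mean 4, while a tiny logistic “discriminator” tries to tell real samples from generated ones. The gradient updates are derived by hand; real systems use deep convolutional networks over images, but the training loop has the same shape.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Real data ~ N(4, 1). Generator: x = z + theta, with noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c).
theta, w, c = 0.0, 0.0, 0.0
lr, batch = 0.05, 64

for _ in range(2000):
    real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(batch)]

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_r = [sigmoid(w * x + c) for x in real]
    d_f = [sigmoid(w * x + c) for x in fake]
    w += lr * (sum((1 - d) * x for d, x in zip(d_r, real)) -
               sum(d * x for d, x in zip(d_f, fake))) / batch
    c += lr * (sum(1 - d for d in d_r) - sum(d_f)) / batch

    # Generator step: ascend log D(fake) (non-saturating GAN loss).
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(batch)]
    d_f = [sigmoid(w * x + c) for x in fake]
    theta += lr * sum((1 - d) * w for d in d_f) / batch

print(theta)  # the generator's shift drifts toward 4, the real data's mean
```

Once the generator matches the real distribution, the discriminator can no longer separate the two and its advantage collapses; this equilibrium is exactly what GAN training aims for, only at image scale.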
What about voice reproduction?
This is where RVC comes into play. RVC stands for “Retrieval-based Voice Conversion,” a technology that uses a deep generative neural network to transform a narrator’s voice into another voice. It builds on the VITS model (Variational Inference with adversarial learning for end-to-end Text-to-Speech), an advanced end-to-end speech synthesis system, enabling the creation of voices based on any original voice in any language.
To start, a dataset needs to be prepared to serve as the training material for the model. The more audio recordings it contains across different emotions and tones, the more realistic the deepfake will be, as the AI learns to reproduce the voice at an emotional level. If, for example, a man wants to change his voice to a female one, or a woman to a male one, pitch transformation is applied to make the result as convincing as possible.
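Systems like RVC perform pitch transformation with neural networks, but the basic idea can be shown with plain resampling: squeezing the same waveform into fewer samples raises its pitch. The standard-library sketch below (a naive illustration, not how RVC works internally) doubles the pitch of a synthetic 220 Hz tone by dropping every other sample, which also halves the duration, a side effect neural pitch shifters avoid, and then estimates the dominant frequency by counting zero crossings.

```python
import math

SR = 16000  # sample rate in Hz

def tone(freq, seconds):
    """Pure sine tone sampled at SR."""
    n = int(SR * seconds)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def naive_pitch_up(samples):
    """Double the pitch by dropping every other sample (also halves duration)."""
    return samples[::2]

def estimate_freq(samples):
    """Estimate the dominant frequency from the zero-crossing count."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if a < 0 <= b or b < 0 <= a)
    duration = len(samples) / SR
    return crossings / (2 * duration)

voice = tone(220, 1.0)
shifted = naive_pitch_up(voice)
print(estimate_freq(voice), estimate_freq(shifted))  # roughly 220 and 440
```

Neural voice converters achieve the same pitch change while preserving duration, timbre, and the emotional contour learned from the training dataset.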
Recent Case studies of deepfake use
Here are recent case studies that showcase applications of deepfakes, ranging from geopolitical disruption to personal impersonation and the spread of disinformation.
Political Manipulation:
- In October 2023, the UK’s opposition Labour Party leader, Keir Starmer, was targeted by a deepfake campaign. A clip spread across the Internet in which Starmer appeared to endorse policies contrary to his and the Labour Party’s public positions, causing confusion among the public.
- In January 2024, a deepfake video of Indonesia’s former dictator Suharto, who died in 2008, went viral; in it, Suharto urges Indonesians to vote. The video has been viewed at least 4.5 million times on social media and has provoked controversy over the use of AI in the election process.
- A June 2022 deepfake of Vitaliy Klitschko, the Mayor of Kyiv, was used in video calls with the mayors of Berlin, Madrid, and Vienna. The calls were quite realistic, as the mayors later commented, raising concern about the security of digital communications and the potential for misinformation.
For an overview of recent political deepfakes, you can check the Political Deepfakes Incidents Database – this resource tracks the spread of digitally altered audio, images, and videos as generative AI technologies become more common in the political scene.
Financial Frauds and Business Cases:
- February 2024 – a finance worker was tricked into sending 200 million Hong Kong dollars (about $25.6 million) during a video meeting in which the other participants, including the company’s supposed Chief Financial Officer, later turned out to be deepfakes. The incident was reported by Hong Kong police.
- In 2019, the CEO of a UK energy company was tricked into transferring €220,000 to a Hungarian supplier. He believed he was following urgent instructions from the CEO of the firm’s German parent company, instructions that referred to confidential agreements.
- Deepfakes also pose a threat to businesses and the business sector as a whole. During online job interviews, companies risk dealing with deepfaked candidates. The FBI has highlighted a growing trend, driven by the shift to remote work, of criminals using deepfakes to pose as job applicants at American companies; perpetrators have stolen the identification data of U.S. citizens to gain access to company systems.
Deepfakes for misogyny
The deceptive technology of deepfakes is also used for misogyny. Most current legal and ethical issues surrounding deepfakes involve the non-consensual dissemination of intimate images: specifically, the use of deepfake technology to create fake pornographic content involving women.
To create such content, only the face of any woman is needed along with corresponding video footage. Many well-known actresses, influencers, and streamers have encountered this issue. For example, on January 30, 2023, Twitch streamer Atrioc apologized in a video after being caught during a stream with an open tab in his browser displaying a website featuring non-consensual pornographic content of female streamers created using deepfake technology.
One of the streamers featured in those disturbing videos responded to the incident on Twitter, writing: “Being ‘nude’ against your will SHOULD NOT be part of my JOB.”
How to recognize a deepfake?
Here you can find some recommendations from the Europol Innovation Lab:
- Blinking can help you determine whether it’s a real person or a deepfake: deepfakes tend to blink less often, and sometimes unnaturally;
- The face and body often give away a fake: look for discrepancies between body and face proportions, or between facial expressions and body movements or poses;
- The video lasts only a few seconds, as a quality fake requires several hours of work and algorithm training;
- The sound of the video does not match the image, especially lip movements, or there may be no sound at all;
- There may be blurriness inside the mouth: deepfake generation often fails to render the tongue, teeth, and the interior of the mouth convincingly.
Ultimately, as with all threats associated with the digital world, human judgment is the critical link we most need to strengthen. It is essential for people to be wary of suspicious content and adept at spotting deepfakes.
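The blink cue in particular can be checked programmatically. A common building block is the eye aspect ratio (EAR) from Soukupová and Čech’s blink-detection work: the ratio of vertical to horizontal eye-landmark distances, which collapses toward zero when the eye closes, so tracking it frame by frame exposes blink frequency. The sketch below computes EAR from six landmarks; the coordinates here are synthetic examples, since a real pipeline would obtain them from a face-landmark detector such as dlib or MediaPipe.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    p1/p4 are the horizontal eye corners; (p2, p6) and (p3, p5)
    are the top/bottom vertical landmark pairs."""
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

# Synthetic landmarks for illustration (a real detector would supply these).
open_eye   = eye_aspect_ratio((0, 0), (1, 1.0), (3, 1.0), (4, 0), (3, -1.0), (1, -1.0))
closed_eye = eye_aspect_ratio((0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1))
print(open_eye, closed_eye)  # 0.5 for the open eye, roughly 0.05 for the closed one
```

A per-frame EAR below roughly 0.2 is commonly treated as a closed eye; an unusually low or irregular blink rate across a clip is one of the warning signs listed above.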
We can no longer trust our eyes as we once did; that is a truth we cannot deny. Deepfakes and the spread of disinformation have profoundly changed our digital landscape, threatening the very foundations of public discourse and undermining trust in the media. The complexities of modern warfare and cybersecurity add yet another layer of urgency.
Preventing the malicious use of deepfakes requires a multifaceted approach, including the development of robust detection tools, public awareness campaigns, and legal frameworks that address the potential criminal implications of synthetic media. It is crucial for individuals and organizations to remain vigilant, verify information, and adopt security measures to protect against the evolving threat landscape associated with deepfake technology.