
Generative AI and deepfakes. How artificial intelligence tools will sow disinformation

As AI technology advances, so does its potential for misuse.

As the technology behind generative artificial intelligence (AI) continues to advance, so too does the potential for its misuse. One particularly concerning application of this technology is the creation of deepfakes, which are increasingly being used to spread disinformation online. 

With deepfakes becoming not only easier and cheaper to produce but also more realistic and harder to identify as fake, the potential for them to be used for malicious purposes is growing rapidly.

In recent months, several deepfakes have been created using generative AI. Some were made for fun; others were created for more malicious purposes.

Fake news online is already a huge issue, one that has led to serious concerns about the authenticity of digital media and its impact on public discourse and democracy. With generative AI this trend will only worsen as new AI tools continue to develop and are made available to anyone.

This brings concerns about deepfakes to the forefront of public discussion and raises serious questions: What is the impact of AI, deepfakes and disinformation, and what is the significance of deepening commercialisation in AI and deepfake technology?

We already know that AI will impact PR significantly, largely as a net benefit, and the list of AI tools for PR keeps growing. But what about the tools that can be used to spread disinformation?

In this article, we will explore how generative AI technology is fuelling the spread of deepfakes, the harm this does to public discourse, and the potential consequences of this trend for our society.

First, it’s best to clarify what generative AI and deepfakes are. 


What is generative AI? 

Generative AI is a subset of artificial intelligence, in which a machine is capable of creating new data or content, such as images, sound files, and even digital art. This kind of AI is referred to as “generative” because it can generate new data that is unique and original, as opposed to simply processing or analyzing existing data.

Generative AI systems are designed to learn from patterns and data sets, enabling them to make predictions and create new content that is similar to what they have learned.

This approach can be compared to the way humans learn and create, as it enables machines to work with creative uncertainty and come up with something new. Applications of generative AI include creating unique artworks, producing realistic images, and writing text and articles.

The most popular families of generative AI models are:

  • Transformer-based models — AI such as GPT, trained on vast amounts of text gathered from the internet, that creates written content from articles to press releases to whitepapers

  • Generative Adversarial Networks (GANs) — AI that creates images and other media by pitting two neural networks against each other: a generator that produces candidates and a discriminator that tries to tell them apart from real examples

  • Diffusion models — AI such as Midjourney, DALL-E and Stable Diffusion that creates images from text prompts by learning to turn random noise into coherent pictures

Image-generation models pose the most risk when it comes to generating disinformation with deepfakes, because they can create highly realistic images that are difficult to identify as AI-generated.
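The adversarial idea behind GANs — a generator learning to fool a discriminator, which in turn learns to spot fakes — can be illustrated with a deliberately tiny example. The sketch below is a toy one-dimensional "GAN" in pure Python: the generator learns to produce numbers that resemble samples from a target distribution, while the discriminator learns to tell real samples from generated ones. All numbers and parameter choices here are illustrative; real image GANs use deep neural networks, not two-parameter models.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator: x = w*z + b, with noise z ~ N(0, 1).
# It should learn to mimic the "real" data, drawn from N(4, 0.5).
w, b = 0.1, 0.0
# Discriminator: D(x) = sigmoid(u*x + v), the probability that x is real.
u, v = 0.1, 0.0

lr = 0.01
for step in range(5000):
    z = random.gauss(0, 1)
    x_real = random.gauss(4, 0.5)
    x_fake = w * z + b

    # Discriminator update: push D(x_real) up and D(x_fake) down
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real = sigmoid(u * x_real + v)
    d_fake = sigmoid(u * x_fake + v)
    u += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    v += lr * ((1 - d_real) - d_fake)

    # Generator update: push D(x_fake) up, i.e. fool the discriminator
    # (gradient ascent on log D(fake)).
    d_fake = sigmoid(u * x_fake + v)
    grad = (1 - d_fake) * u
    w += lr * grad * z
    b += lr * grad

fake_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(f"generator samples now have mean ~ {fake_mean:.2f} (target: 4)")
```

After training, the generator's output mean has drifted from 0 towards the real data's mean of 4, despite the generator never seeing real samples directly; it only receives the discriminator's feedback. That indirect pressure is what lets real GANs learn to produce convincing imagery.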

AI tools used to generate deepfakes 

Midjourney is an AI image generator developed to produce high-quality images. It uses neural networks to create realistic images of objects, people, and even landscapes.

DALL-E from OpenAI can generate unique images from textual inputs. Its name is a blend of the surrealist artist Salvador Dalí and Pixar's robot WALL-E. DALL-E is trained on a large dataset of images paired with text and can generate a wide range of images, from realistic to abstract, based on textual prompts.

Stable Diffusion is an open-source model developed to generate realistic images from text prompts. The "diffusion" in its name refers to how it works: the model starts from random noise and learns to gradually refine it, step by step, until a coherent image matching the prompt emerges.

What are deepfakes? 

Deepfakes are a form of digital forgery that use artificial intelligence and machine learning to generate realistic images, videos, or audio recordings that appear to be authentic but are actually fake.

These manipulated media files are created by superimposing one person’s face onto another’s body or by altering the voice, facial expressions, and body movements of a person in a video.

With advances in deep learning algorithms, it has become easier to create deepfakes, which can be used to spread misinformation or propaganda, or to defame someone. Deepfakes can be created using open-source software or customised tools, and they spread easily due to the viral nature of social media.

Examples of deepfakes created by generative AI

In recent months we've seen several deepfake examples created by generative AI go viral on social media. The imagery created by this technology is so realistic it has fooled millions of people around the world. Some recent deepfake examples are listed below.

The pope in a Balenciaga-styled puffa jacket

Fake. The Pope is not repping Balenciaga

In March 2023, a photo of Pope Francis looking 'dripped out' in a white puffer jacket went viral on social media. The 86-year-old pontiff looked as though he had been given a custom-made puffer jacket by Balenciaga. The image was shared far and wide and covered by numerous publications. But there was just one problem: the image was a deepfake created in Midjourney.

A leaked photo of Julian Assange looking unwell in prison

Fake. This is not a real picture of Julian Assange

Again in March 2023, an apparently leaked photo of Wikileaks founder Julian Assange was shared far and wide on social media. People who believed the photo was genuine posted their outrage, but a German newspaper interviewed the person who created the image, who claimed he did it to protest how Assange has been treated. Critics pointed out that creating fake news was not an appropriate way to do so. Another deepfake example that transcended online to offline.

Trump getting arrested

Fake. This is an AI-generated image

Again in March 2023 (which, looking back, was a big month for deepfake examples), AI-generated images of Donald Trump being arrested circulated online. This particular deepfake was created not by just one individual but by many. It didn't fool as many people as the other two examples, but some were duped and shared the images on social media believing they were real.

The Pentagon bombed

Fake. Even this bad deepfake wiped $500 billion of US stocks

In May 2023, an AI-generated deepfake image of an explosion at the Pentagon went viral on Twitter and caused US markets to plummet. The S&P 500 stock index fell 30 points in minutes, wiping $500 billion off its market cap.

After the image was confirmed to be fake, the markets rebounded, but the incident showed the impact deepfakes can have. Verified accounts on Twitter didn't help the situation either, as many of them shared the image as if it were real and were rightfully criticised for it.

DeSantis campaign shares deepfakes of Trump

Fake. The three images of Trump hugging Fauci are deepfakes

Experts identified AI-generated deepfakes in an attack ad against rival Donald Trump released by the campaign backing Ron DeSantis for the 2024 Republican presidential nomination.

On June 5th, the "DeSantis War Room" Twitter account shared a video highlighting Trump's embrace of Anthony Fauci, the former White House chief medical advisor and a key figure in the US response to COVID-19. Fauci has garnered significant opposition in right-wing politics, and the intention of the attack ad was to strengthen DeSantis' support base by portraying Trump and Fauci as close collaborators.

Cheating politicians

Fake. This is not Joe Biden and Ron DeSantis cheating

An artist named Justin T. Brown created AI-generated images of politicians cheating on their spouses to highlight the potential dangers of AI. Brown's intention was to initiate a conversation about the misuse of AI technology.

He shared the images on the Midjourney subreddit, but soon after, he was banned from the platform. Brown expressed conflicting feelings about the ban, acknowledging the need for accountability but questioning the effectiveness of regulating content.

How generative AI deepfakes will impact politics

The proliferation of deepfakes in politics is a given at this point. As we navigate this new terrain, it is crucial to understand the multifaceted ways in which AI-generated deepfakes will reshape the political landscape and online discourse.

From the creation of hyper-realistic deepfakes to the generation of AI-driven propaganda, these technological advancements present both challenges and opportunities.

The implications are far-reaching, influencing everything from the authenticity of political discourse to the integrity of democratic processes.

A recent article by the Financial Times discusses the growing threat of AI-powered disinformation and its potential to disrupt democratic processes, particularly elections.

The piece highlights an incident in Slovakia where a deepfake audio influenced public opinion right before an election. This example underscores the increasing sophistication and accessibility of AI tools in creating realistic, hard-to-detect fake media.

The article also touches on the challenges social media platforms face in moderating such content, amidst a backdrop of political polarisation and diminishing public trust in institutions. It raises concerns about the upcoming global elections in 2024, where half of the world’s adult population is expected to vote.

Here, we explore the key aspects of how AI is set to impact politics, drawing insights from comprehensive analyses and expert opinions.

  1. Propagation of deepfakes: The ease of spreading false information about political figures and events using deepfake technology could potentially mislead the public and influence election outcomes.

  2. AI-generated text for propaganda: Beyond visual content, AI’s capability to generate persuasive and believable text can be exploited for propaganda, enabling the rapid spread of text-based disinformation and tailored misinformation campaigns.

  3. Scale and efficiency of disinformation: AI technologies allow for the generation and dissemination of propaganda at an unprecedented scale and efficiency. This could lead to an overload of information channels, making it difficult for the public to discern credible information.

  4. Personalized propaganda and phishing attacks: AI can be used for highly targeted propaganda efforts, including personalized phishing attacks aimed at influencing or manipulating individuals based on their online behaviour and preferences.

  5. Rapid technological advancements: The swift development of AI and conversational models means that the tools for creating and spreading disinformation are becoming more advanced and accessible, increasing the potential for misuse in political contexts.

  6. Challenges in regulation and detection: The use of AI in politics presents significant regulatory challenges. Developing norms and policies to manage AI-generated deepfakes, as well as techniques to detect such content, is becoming increasingly vital.

  7. Impact on democratic processes: The infusion of AI in political communication can undermine democratic processes and public trust in information, especially if AI-generated content floods information channels and skews public discourse.

  8. Need for societal resilience: The rise of AI-powered propaganda highlights the importance of enhancing societal resilience through improved media literacy and the development of tools to help discern and contextualize AI-generated information.

  9. Strategic communication adaptation: Political figures, parties, and governmental bodies will need to adapt their communication strategies to swiftly address and debunk misinformation generated by AI, focusing on transparency and authenticity.

  10. Industry and ethical responses: The diverse responses from AI and tech companies in regulating political content highlight a need for an industry-wide ethical stance on the use of AI in politics.

These points illustrate a complex landscape where AI’s role in politics is multifaceted, necessitating a coordinated approach from governments, industry, and civil society to address the challenges posed by these emerging technologies.

And, of course, in politics people will often share deepfakes even when they know they are fake.

Where deepfake technology is headed

To give you an idea of the incredible creativity in deepfakes, this TED discussion with AI developer Tom Graham provides an overview of existing deepfake technology and where it's heading.

Tom’s company, Metaphysic, gained popularity with the release of a fake Tom Cruise video that received billions of views on TikTok and Instagram. They specialise in creating artificially generated content that looks and feels like reality by using real-world data and training neural nets. This is more accurate than VFX or CGI and helps create content that appears natural.

One of their examples maps the voice of a woman singing in Spanish onto Aloe Blacc's face, making it look and sound as if he is singing in Spanish. This technology could eventually allow anyone to speak any language naturally, and creating such content will only become easier over time.

Metaphysic can also process live video in real time, which is at the cutting edge of AI technology. They demonstrated this by swapping faces with the interviewer, Chris Anderson, in a live video, even replicating the voice. The technology can be applied to anyone, as demonstrated with Sunny Bates in the audience.

How to combat AI-generated deepfakes 

Efforts are being made to develop technologies to detect and prevent deepfakes, but their effectiveness remains limited as the technology evolves rapidly. 

One way to combat AI-generated deepfakes is through the development of advanced detection technologies, which analyse patterns in audio and video data to identify signs of manipulation. Another approach is digital watermarking, which embeds verifiable information in media content so its authenticity can be checked later.
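One idea underpinning authenticity verification is cryptographically binding a publisher to the exact bytes of a media file, so that any later alteration is detectable. The sketch below illustrates that principle in Python with a shared-secret signature; the key and file bytes are purely illustrative. Real content-provenance systems (such as the C2PA standard) are far more elaborate, embedding signed metadata in the file itself and using public-key cryptography rather than a shared secret.

```python
import hmac
import hashlib

# Illustrative signing key -- a real system would use proper key
# management and public-key signatures, not a hard-coded secret.
SECRET_KEY = b"publisher-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a hex signature binding the publisher to these exact bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Return True only if the bytes are unchanged since signing."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...stand-in for raw image bytes..."
tag = sign_media(original)

print(verify_media(original, tag))                # True: untouched
print(verify_media(original + b"tampered", tag))  # False: bytes were altered
```

The limitation is the flip side of the deepfake problem: a signature can prove a file is unaltered since signing, but it cannot prove the content was genuine in the first place, which is why provenance schemes focus on trusted capture and publication pipelines.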

Google has recently launched a new tool called 'About This Image' to help people spot fake AI images on the internet. The tool provides additional context alongside pictures, including details of when the image first appeared on Google and any related news stories. This new feature will help people distinguish hyper-realistic AI-generated pictures from real ones, including those generated using tools such as Midjourney, Stable Diffusion, and DALL-E.

The tool is intended to surface news stories about fake images that have been subsequently debunked and is designed to tackle warnings that new AI tools could become a wellspring for misinformation and propaganda.

Perhaps the best way to combat AI-generated deepfakes is to educate the public about their potential harms. Be vigilant when consuming media: verify its source and context, and apply critical thinking when interpreting its contents. With a multifaceted approach, we can deter the spread of AI-generated deepfakes and limit the harm they cause.

The future of generative AI and deepfakes

The future of AI and deepfakes is a controversial topic. The genie is out of the bottle and the technology is only going to get more realistic. Soon it won’t be just deepfake images but deepfake videos too. Voice cloning technology has already made significant progress, and there is no doubt that it will advance further in the coming years.

This raises serious concerns about the potential misuse of deepfake technology, from political propaganda to personal vendettas. Deepfakes have already been used to create fake pornographic videos, causing harm to the individuals involved.

While there are efforts to develop countermeasures to detect and prevent the spread of deepfakes, it will be a constant battle between the creators and those who aim to stop them.

As the technology advances, the lines between reality and fake will become increasingly blurred, making it more critical than ever to develop measures to identify and combat the spread of deepfakes.