The role of deepfakes in the year of democracy, disinformation, and distrust

Bad actors armed with AI tools to create deepfakes are coming for businesses

TechRadar

News By Philipp Pointner published 1 October 2024


AI-generated misinformation and disinformation are set to be the biggest short-term global risks of the year, according to the World Economic Forum. With half of the global population participating in elections this year, misinformation in the form of deepfakes poses a particular danger to democracy. Ahead of the UK General Election, candidates were warned that AI-generated misinformation would circulate, with deepfake video, audio and images being used to troll opponents and fake endorsements.

In recent years, low-cost audio deepfake technology has become widely available and far more convincing. Some AI tools can generate realistic imitations of a person’s voice using only a few minutes of audio, which is easily obtained from public figures, allowing scammers to create manipulated recordings of almost anyone.

But how real has this threat proven to be? Is the deepfake danger overhyped, or is it flying under the radar?


Deepfakes and disinformation

Deepfakes have long raised concerns across social media, politics, and the public sector. But with advances in technology making AI-generated voices and images more lifelike than ever, bad actors armed with AI tools to create deepfakes are coming for businesses.

In one recent example targeting advertising group WPP, hackers used a combination of deepfake video and voice cloning in an attempt to trick company executives into believing they were discussing a business venture with peers, with the ultimate goal of extracting money and sensitive information. While unsuccessful, the sophisticated cyberattack shows the vulnerability of high-profile individuals whose details are easily available online.

This echoes the fear that the sheer volume of AI-generated content could make it challenging for consumers to distinguish between authentic and manipulated information. According to Jumio research, 60% of consumers admit they have encountered a deepfake within the past year, and 72% worry on a daily basis about being fooled by a deepfake into handing over sensitive information or money. This demands a transparent discourse to confront the challenge and empower businesses and their end-users with the tools to discern and report deepfakes.

Fighting AI with AI

Education about how to detect a deepfake is not enough on its own, and IT departments are scrambling to put better policies and systems in place to prevent deepfakes. This is because fraudsters are now using a variety of sophisticated techniques, such as deepfake faces, face morphing and face swapping, to impersonate employees and customers, making it very difficult to spot that the person isn't who you think they are.
