It’s Time to Stop Debunking AI-Generated Lies and Start Identifying Truth

Perfect storm: rapid advancements in the realm of synthetic media have made it much cheaper and faster to create realistic fake content

Image: TensorSpark / Adobe Stock (Generated with AI)


As the UK hosts a two-day AI security summit, the authors reflect on how the technology aids fake news and narratives – particularly in the run-up to 2024, a crucial year for elections in many Western democracies.

In July 2017, researchers at the University of Washington used AI to make a convincing video of former President Barack Obama giving a speech that he never gave. At the time it seemed novel, but perhaps nothing more consequential than a hacker’s parlour trick. Sadly, it heralded rapid advancements in the realm of synthetic media that few could have predicted. AI experts now estimate that nearly 90% of all online media content may be synthetically generated by 2026. For the first time in the history of digital media, realistic fake content is now cheaper and faster to create than reality, and the consequences for national security as well as civil society are both alarming and hard to fathom.

The real impact that fake content can have is staggering. In May 2023, investor confidence was shaken amid social media-fuelled reports of a potential terrorist attack near the Pentagon, driven by an AI-generated image of an explosion, and the US stock market briefly dipped. In that case, the image was easy to debunk, and investor confidence rapidly returned. Repeat the event with a more sophisticated set of tools, however, such as a fake presidential speech and a coordinated influence campaign to spread the lie across many social media platforms, and the results could have been far more dramatic than a stock dip. Indeed, synthetic hoaxes are now seen as an important driver of international events. Prior to the Russian reinvasion of Ukraine in late February 2022, the US revealed a Russian plot to spread deepfake content (media created or manipulated synthetically with the help of AI) as a pretext for the invasion.

The case of Russia can also be used to illustrate the threat to civil society: that people can believe anything or, caught in the miasma of competing narratives online, simply choose to opt out and believe nothing at all. As journalist Peter Pomerantsev points out in his excellent book Nothing is True and Everything is Possible, authoritarian governments such as Russia increase their power when their citizens are confused and disoriented. In the West, a lack of confidence that anything can be true is a problem for a great many reasons, not least because trust in government is at historic lows at the same time as governments are moving their public-facing communications online, and especially to social media. Consider a public safety scenario in which a governor issues an emergency evacuation order in advance of a powerful hurricane, or a public health official gives a warning about a quickly spreading pandemic. Could these important warnings be identified by a majority of people as belonging to the 10% of truth remaining on the internet, or would they be dismissed by citizens in danger as fake news, a political hoax, or even a prank? What can be done? Rooting out fake news and combatting automated propaganda is an important contribution to societal resilience, but we must look ahead to the next challenges as well.

The current solutions to address mis- and disinformation are not up to the task. We can’t count the number of times we have advised students, policymakers and the general public to combat mis- and disinformation on the internet by thinking critically, being sceptical and not reflexively reposting content without fact-checking. That recipe is now incomplete. It is clear that the scale of the problem requires technological solutions too, and organisations around the world are investing in ways to quickly identify fake media. However, as technology continues to progress, this problem will soon be reversed, and the hunt for fake media will need to be replaced with verification of truth. In other words, instead of trying to weed out what is fake, we will need to identify ways to validate a truth among lies. This would involve a radical reframing of both the problem and potential solutions.

Currently, social media platforms (and users themselves) are scrambling to tag and label inauthentic content. Soon this will be akin to using an umbrella to block individual raindrops during a monsoon. TikTok, for instance – like most social media companies – has policies requiring the labelling of synthetic media, but a recent report from misinformation monitor NewsGuard found the implementation of TikTok’s policy wanting. Likewise, fact-checking organisations are already struggling to keep up with the amount of disinformation online. By 2026, their backlog of digital nonsense will keep them busy debunking falsehoods far into the future. Turning the status quo equation on its head means recognising that instead of fake news polluting a stream of otherwise legitimate content, the stream will soon be the lies, and the truth will need to be plucked out.

It is worth noting some antecedents. In the early 2000s, tools such as Photoshop allowed individuals to edit photos more quickly, and social media made it easier to reach a wide audience. In 2008, Iran digitally altered a photograph of rocket launchers to conceal one that – rather embarrassingly – failed to fire, with the intent of making itself appear more powerful and capable than it really was. Still, Photoshop was not scalable and could not create fake media from scratch. It had to start with a truth. In the past few years, though, critical advances in generative AI (computer algorithms capable of generating new media from text prompts) have increased the threat of what has been called an information apocalypse. As with all technological advancements, these developments have been rapidly democratised over time. Now anyone can produce their own high-quality disinformation with algorithms that are already freely available online. Through programs such as FaceSwap, it is straightforward to convincingly place one person’s face on another’s body. There is no putting this genie back in the bottle, and no amount of ethical-use manifestos published by developers is going to trammel such technology.

The AI genie continues to amaze, and regulators (much less university professors) simply can’t keep up. Before November 2022, when ChatGPT was released, the idea of a computer writing a college-level essay in seconds would have been seen as science fiction. This was a revolutionary step up from tools that could, at best, fix grammar and punctuation. At the same time, software that could create images from text, such as DALL-E and Midjourney, became available to the public. These image generation tools could, with a simple prompt that required no technical knowledge, create 1,000 hyper-realistic photos before a human could develop one. At first, critics of the technology pointed out inaccuracies in the deepfake content, hoping perhaps in vain that the rationality of the human brain was still superior to the computer. In March 2023, the Washington Post published an article providing readers with tips on how to identify deepfakes. One of the examples was to ‘look at the hands’, since early generative AI tools struggled with making realistic human hands. That same month, however, the same newspaper published another article titled ‘AI can draw hands now’. Trying to identify deepfakes by looking for visual mistakes is a losing strategy. According to a report published by the NSA, FBI and CISA, attempts to detect deepfakes post-production are a cat-and-mouse game in which the manipulator has the upper hand. Nor is detection the whole problem: confirmation bias means that people need little convincing to see what they already want to believe, which is why ‘cheap fakes’ are just as dangerous as deepfakes. The pair are a toxic brew.

According to DeepMedia, a company contracted by the US Department of Defense to help detect synthetic media, the number of deepfakes tripled in 2023 compared with 2022. How do people know what to believe and trust? If a deepfake is just as realistic as a photo taken by a professional camera, how do organisations prove authenticity? For each photo taken by a journalist, thousands of equally realistic fakes could be made and distributed. This article aims to highlight that very recent technological advances are leading to a perfect content storm, in which lies are far cheaper to produce than truths, but just as convincing. The spread of deepfakes is creating an environment of mistrust. A July 2023 report published by members of Purdue University’s Department of Political Science argued that an increase in the amount of fake content makes it easier for someone to challenge the validity of something that is actually true. They called this the Liar’s Dividend. As media becomes saturated with manipulated images and videos, it becomes harder to identify what is trustworthy. Being able to prove that something is fake loses its value when most of the content is synthetic already. The greater and more critical challenge is validating what is true.

The problem of labelling media content as trustworthy is complicated. As deepfakes become increasingly sophisticated, it will become nearly impossible for individuals – even those trained to look for peculiarities – to distinguish real from fake. As a result, organisations will need to lean more heavily on technical solutions to label and verify media. Why, though, is it also difficult for computers to tell the difference between a photo taken by a camera and a deepfake created by AI? All digital media is, at a technical level, just a file on a computer. Composed of 1s and 0s, this file is displayed on a screen to a person. The computer has no notion of fake or real. This problem has many similarities with the art world and the challenge of proving that a painting was made by a famous artist and not a copycat. For every real Picasso, there may be 1,000 replicas. Museums and galleries do not waste their limited resources trying to prove the inauthenticity of the copies, though; they focus on validating and maintaining the truth through a concept called provenance. Provenance is the recorded origin and ownership history of a piece of art, which gives viewers trust and confidence in its authenticity. Even if the methodologies are different for the digital world, it may prove a useful model for seeking and identifying authenticity instead of forever debunking fakes.

The cyber security field already uses capabilities such as encryption and hashing to verify passwords and protect digital communications, but these need to be applied to media in a way that is easily understood and trusted by content consumers with limited technical backgrounds. Organisations such as the Content Authenticity Initiative (CAI) are working to use cryptographic asset hashing to serve as digital provenance online. This project aims to provide a tamper-proof way to validate the origin of images and videos, even as they are shared across social media and news platforms. The CAI aims to meet the technical standards developed by the Coalition for Content Provenance and Authenticity (C2PA), released in 2022. While these efforts are heading in the right direction, they are not foolproof, and they depend heavily on an increased socio-technical understanding of digital media. Additionally, allowing organisations to manage the trustworthiness of media comes with its own concerns. Totalitarian governments will no doubt develop their own ‘Content Authenticity Initiatives’ to self-validate what they want to be believed.
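To make the idea of digital provenance more concrete, the sketch below illustrates the underlying principle in Python. It is not the CAI’s or C2PA’s actual implementation – real provenance systems embed signed manifests backed by certificates and asymmetric cryptography in the media file itself – and the key, record format and HMAC stand-in for a proper digital signature are assumptions made purely for illustration. The point is simply that a hash binds a record to the exact pixels captured, and a signature binds that record to whoever vouched for it, so any later manipulation fails verification.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the publisher (a real provenance system
# would use an asymmetric key pair and a certificate chain, not a shared secret).
SIGNING_KEY = b"example-newsroom-secret"

def create_provenance_record(media_bytes: bytes, source: str) -> dict:
    """Hash the media at the point of capture and sign the resulting record."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "source": source,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check the pixels are unchanged and the record was signed by the key holder."""
    if hashlib.sha256(media_bytes).hexdigest() != record["sha256"]:
        return False  # content no longer matches what was originally recorded
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

# Usage: a photo signed at capture verifies; a manipulated copy does not.
photo = b"...raw image bytes..."
record = create_provenance_record(photo, source="Staff photographer, May 2023")
print(verify_provenance(photo, record))                # True
print(verify_provenance(photo + b"tampered", record))  # False
```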

Deepfakes are still a young technology. While they have not single-handedly disrupted an election as some might have feared, their use is increasing, and the technology is advancing rapidly. Most deepfakes are currently images or altered videos, but the ability to create whole new scenes from a prompt is already here. With the 2024 US presidential election approaching, deepfakes and other ‘fake news’ will likely be on the minds of both candidates and voters. Former Google CEO and Alphabet executive chairman Eric Schmidt has warned that mis- and disinformation, through the use of AI in particular, could lead to chaos in the 2024 election. The solution is both technical, by shifting from identifying deepfakes to validating truths, and societal, through technical education and media literacy. For decades, people were taught to trust their senses. Now, seeing and hearing can no longer be believing.

This analysis is solely that of the authors. It does not represent the position of the US Government, Department of Defense, US Army, United States Military Academy, RUSI or any other institution.



WRITTEN BY

Professor David Gioe FRHistS


Alexander Molnar


