AI in the Year of Elections: A Force to be Reckoned With?


Flooding the airwaves: concerns have risen that AI-driven misinformation campaigns could undermine confidence in democratic systems. Image: VideoFlow / Adobe Stock


Despite dire warnings, AI is unlikely to radically alter elections in 2024. But to ensure the benefits outweigh the risks, we need to take a holistic approach which strengthens societal resilience and enhances trust in democratic institutions and processes.

2024 is a ‘record year for elections’, with over 70 taking place around the world. Citizens will be casting their votes in the US, India and Indonesia, most likely in the UK, and for the European Parliament.

Novel campaign techniques are at the centre of election reportage and, given the hype around AI and the wider adoption of generative AI tools like ChatGPT, debates about AI will loom especially large over this year’s elections. Generative AI is of course already being used – both legitimately by political parties and campaigners, and illicitly by domestic and foreign actors wishing to disrupt or influence elections.

By no means is AI useful only to those seeking to exert malign influence. On the contrary, its adoption can also support free and democratic elections and help secure them against interference. Natural Language Processing (NLP), for example, can be used to create better and more accessible data on campaign financing. This has the potential to improve transparency and, ultimately, trust between voters, candidates and democratic institutions.

AI tools can also be used by electoral regulators, who often do an incredibly hard job on a shoestring budget and with limited time. NLP has the potential to allow near real-time analysis of election spending and campaigning, so that wrongdoing can be identified during an election rather than months after it. AI can also enhance the cyber security measures that keep campaign and electoral networks, emails and data protected. For campaigners, AI is a cost-effective tool for creating content, responding to individual voter requests, drafting speeches and managing campaigns.
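To illustrate the kind of analysis this enables, the sketch below uses NLP to triage spending records for human review. It is a minimal, illustrative example rather than any regulator’s actual tooling: the record fields, watch-list terms and threshold are all invented, and it assumes the open-source spaCy library with its small English model installed.

```python
# Minimal sketch: triaging campaign spending records with NLP.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical records; a real pipeline would ingest a regulator's filings feed.
records = [
    {"payee": "Acme Digital Ltd", "amount": 48000.0,
     "description": "Targeted social media adverts in marginal constituencies"},
    {"payee": "J. Smith", "amount": 120.0,
     "description": "Train tickets for campaign volunteers"},
]

WATCH_TERMS = {"advert", "target", "targeted", "data"}  # invented watch-list
AMOUNT_THRESHOLD = 10_000.0                             # invented threshold

def review_reasons(record: dict) -> tuple[list[str], list[tuple[str, str]]]:
    """Return reasons a record may merit review, plus any named entities."""
    reasons = []
    if record["amount"] >= AMOUNT_THRESHOLD:
        reasons.append("large payment")
    doc = nlp(record["description"])
    if {token.lemma_.lower() for token in doc} & WATCH_TERMS:
        reasons.append("matches watch-list terms")
    # Named entities (ORG, GPE, etc.) help link payees across filings.
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    return reasons, entities

for record in records:
    reasons, entities = review_reasons(record)
    if reasons:
        print(record["payee"], "->", reasons, entities)
```

The point of the sketch is the workflow, not the model: flagged records go to a human caseworker during the campaign, rather than surfacing in a spending return months later.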

But public perception rests primarily on the risks AI is thought to pose to elections. Concrete examples of its use, such as in Turkey’s elections in 2023, election deepfakes in Slovakia, or the US Republican Party’s entirely AI-generated election video, remain scarcely discussed. Instead, it is the fear of AI’s role in the spread of disinformation that makes the headlines, with a majority of US citizens, for instance, thinking that AI will ‘increase 2024 election misinformation’.

AI: Force Multiplier or Game Changer?

Foreign interference through information control is by no means a new phenomenon. Recent history – especially during the Cold War – is littered with examples of active meddling in electoral politics. Even domestically, the use of misinformation was described by Hunter S Thompson in his coverage of the 1972 presidential election as ‘one of the oldest and most effective tricks in politics’.


So, to what extent does the proliferation of AI represent a qualitative and quantitative departure from established practices? Are troll farms and disinformation campaigns enabled by AI technologies force multipliers, or even game changers, for election interference? Are sophisticated deepfakes even needed to spread disinformation when videos from online games are already being perceived as news? And what can we learn from previous instances of election interference in order to adopt AI in a manner that provides solutions to these challenges?

It is incredibly hard to measure the effects of disinformation campaigns during elections. But the most recent (and most convincing) work suggests that fake news is more likely to reinforce existing partisan beliefs than to radically change voting behaviour. In other words, if we are already inclined to think that a false statement is true, we may well believe it when it appears in our social media feeds.

If misinformation is not new, and it is not clear that it has much effect on election outcomes, why should we be worried? And why should we be any more worried about the use of AI than about other malign electoral practices? The main concern lies in the speed and scale that AI tools provide: they allow a single actor to do an incredible amount at the stroke of a key.

The fear, then, is that misinformation campaigns can effectively flood the marketplace of ideas. This kind of activity could weaken trust in the accuracy and legitimacy of information and undermine confidence in the whole system. While elections are only one element of democracy, they are fundamental to its survival. If the veracity of the campaign ecosystem is not trusted by large swathes of the populace, the democratic project could begin to crumble at an alarming pace.

How to Tackle the Issue

Policymakers in jurisdictions like the UK, the US and the EU are not oblivious to the problem. But the challenge for lawmakers is immense. Legislative processes are slow and often trail technological advances. While the EU has reached agreement on its draft AI Act, the UK continues to follow a voluntary approach of guiding principles for AI regulation. It is too early to say which approach will prove more successful, but it is clear that policymakers on their own will not be able to contain the risks AI poses to the election process.


Instead, engagement by the whole of society is needed, including close collaboration with the private sector to enable information exchange between sectors and foster shared approaches that can be implemented in a timely fashion. The power of citizen education campaigns should also not be underestimated: a large-scale study conducted in India and the US showed that digital media literacy interventions can increase awareness of fake news.

It is therefore important that technology companies enact their pledge to watermark AI-generated content and share information on how AI technologies work in a transparent and accessible manner. Such information exchange is key to educating the public – essential not just to inform people about the risks of AI and its impact on elections, but also to raise awareness of the costs of non-adoption and the benefits AI can bring, and to explore how AI systems can earn the wider public’s trust.
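By way of illustration only – this is not the C2PA standard or any vendor’s actual watermarking scheme – the sketch below shows the basic idea behind verifiable provenance labels: a signed ‘AI-generated’ tag that a downstream platform can check has not been stripped or altered. The field names and key handling are invented, and it uses only the Python standard library.

```python
# Toy illustration of content provenance labelling. Real schemes (such as
# C2PA) are far richer; this only shows the verify-a-label idea.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-not-for-production"  # hypothetical signing key

def label_content(text: str, generator: str) -> dict:
    """Attach a signed 'AI-generated' provenance label to content."""
    payload = {"content_sha256": hashlib.sha256(text.encode()).hexdigest(),
               "generator": generator}
    message = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return payload

def verify_label(text: str, label: dict) -> bool:
    """Check that the label matches the content and was not forged."""
    claimed = {k: v for k, v in label.items() if k != "signature"}
    if claimed.get("content_sha256") != hashlib.sha256(text.encode()).hexdigest():
        return False  # content was altered after labelling
    message = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["signature"])

article = "This paragraph was drafted by a generative model."
label = label_content(article, generator="example-model-v1")
print(verify_label(article, label))        # True: label intact
print(verify_label(article + "!", label))  # False: content changed
```

Even this toy version makes the policy point concrete: a watermark is only useful if the rest of the ecosystem – platforms, newsrooms, regulators – can and does verify it.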

It is unlikely that AI will be as debilitating or damaging as the prevailing narratives of the 2024 electoral cycle might suggest. That, of course, does not mean that the threats – if ignored – are not genuine. But AI can provide as many solutions as it does problems, especially if we confront these concerns with a holistic approach grounded in evidence and education.

The views expressed in this Commentary are the authors’, and do not represent those of RUSI or any other institution.

Have an idea for a Commentary you’d like to write for us? Send a short pitch to commentaries@rusi.org and we’ll get back to you if it fits into our research interests. Full guidelines for contributors can be found here.


WRITTEN BY

Sam Power


Dr Pia Hüsch

Research Fellow

Cyber


Emma De Angelis

Director, Special Projects

Publications


