It’s the Algorithm, Stupid: Influence in the Age of Generative AI
The rise of Chinese-made DeepSeek and the rapid spread of AI chatbots raise pressing questions about their implications for democracy and security. Given their superhuman ability to learn and influence, there is an urgent need to strengthen user literacy.
Generative AI tools based on Large Language Models (LLMs) are becoming embedded in many aspects of daily life, from assisting professionals with legal research to helping shoppers plan their weekly groceries and letting curious users learn about Socrates from a chatbot.
Recent events have highlighted emerging risks to security and democratic stability. The use of ChatGPT to assist in planning an attack on President Donald Trump’s Las Vegas hotel demonstrates the grey areas in which AI can be exploited to catalyse real-world harm. Similarly, concerns over DeepSeek, a Chinese AI system that self-censors in real time and stores user data beyond democratic oversight, underscore wider implications for national security.
The hallmark of this technology is its low cost to users and its ability to converse fluently, rapidly and confidently, much like a human. It can, for instance, persuasively argue that AI poses no threat to humanity. As well as answering questions, these systems can interrogate documents, write code, generate video, and carry out basic online tasks.
Allowing the technology’s creative applications to develop freely and making it widely available has been key to attracting staggering levels of investment in the sector. Today, AI-powered tools have hundreds of millions of weekly users. However, this widespread adoption is also where the risks lie.
Alongside cyber security vulnerabilities intrinsic to LLM technology, concerns have been raised about its potential to flood democracies with disinformation and propaganda, and to provide a sophisticated aid to extremist recruitment. From the standpoint of communications practice, the risks are more complex.
When data is stored in foreign jurisdictions, hostile actors may be able to extract valuable population insights from users' interactions with AI, as individuals often unwittingly reveal their attitudes, preferences and behaviours. In this sense, AI users everywhere are unknowingly training an unsleeping, ever-evolving algorithm.
The Double-Edged Nature of Generative AI
While much attention is being paid to the technological and economic race, the long-term societal impact of AI’s persuasive power remains largely overlooked. Recent research demonstrates that generative AI excels at moral reasoning, narrative creativity and psychological profiling, and can even anticipate emotional states – all automatically. The potential impact of these superhuman qualities is not widely understood.
National security concerns have focused on scenarios like AI exposing sensitive chemical, biological, radiological, and nuclear information or spiralling out of control. Europol has warned that LLMs could scale up and accelerate cyber attacks. However, the erosion of trust and the potential for large-scale behavioural manipulation may be equally disruptive. Leaked documents revealed that AI researchers highlighted how their technology could lower the cost of disinformation campaigns.
The strength of AI lies in its adaptability to user demands and its capacity to adopt their preferred personae. Generative AI companies make their application programming interfaces (APIs) widely available so that AI can be incorporated into all sorts of software, search engines and social media services, allowing these systems to be fine-tuned for specific purposes. With few restrictions, and trained on text spanning thousands of years of recorded history, AI can synthesise – or distort – a startling array of ideas, facts and narratives at unprecedented speed.
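To illustrate how little effort such integration requires, the minimal sketch below uses OpenAI's published Python interface to give a general-purpose model a specific persona through a single 'system' instruction. The model name, persona and wording are illustrative assumptions, not any vendor's recommended configuration.

```python
# Minimal sketch: assigning a persona to a general-purpose model via an API.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model name and persona below are purely illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # The system message defines the persona the model will adopt.
        {"role": "system",
         "content": "You are a patient classics tutor who explains Socrates to beginners."},
        {"role": "user",
         "content": "Who was Socrates, and why does he still matter?"},
    ],
)

print(response.choices[0].message.content)
```

The same handful of lines, with a different persona and a different audience, is all that separates a study aid from a tailored persuasion tool.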
AI is constantly improving. Almost a decade ago, DeepMind’s AlphaGo shocked the world by defeating human champions and demonstrating inexplicable strategic creativity. Today’s generative AI matches or outperforms humans in many creative disciplines. Its ability to recreate reality, without the biases of conscience or ego, presents both a remarkable capability and a potential threat.
When disinformation watchdog NewsGuard presented ChatGPT with narratives already proven to be false, the AI doubled down, eloquently defending 80% of the debunked claims. Though there were examples of effective moderation, the researchers noted that the results resembled content pushed by the worst conspiracy influencers or Chinese and Russian government bots.
The Rise of Mass, Personalised Influence
One of the most profound changes brought by generative AI is its ability to anticipate user intentions.
Researchers at Cambridge University have found that, with just a small number of chatbot interactions, AI can accurately predict a user’s future intentions. This shift from an ‘attention economy’, in which algorithms sought to capture users' engagement, to what they call an ‘intention economy’, where AI can anticipate and exploit desires and decisions before they are even consciously formed, represents a fundamental transformation.
What does this mean for disinformation? A series of experimental studies has demonstrated the potential of LLMs to automate, and therefore scale up, personalised persuasion. Personalised messages generated by ChatGPT in the areas of marketing, political ideology and personality traits proved more influential than non-personalised messages. AI-generated content has been found to be particularly persuasive because it can read and respond to individual psychological traits. Applied widely, the technology will see consumers and citizens have their decisions second-guessed and nudged, with an alarming potential to deepen distrust.
Generative AI is far from perfect, but the risk of automated, mass manipulation is real. AI-driven chatbots and their API-integrated systems can seamlessly analyse audiences, generate targeted content, and distribute it efficiently – collapsing a series of tasks previously shared out between communications firms and marketing professionals. With the right expertise and application to social media, for instance, these capabilities could be exploited for radicalisation efforts.
AI developers are building in safeguards so that obviously harmful prompts go unanswered. However, one recent study demonstrated that chatbots remain vulnerable to manipulation.
Using a technique known as ‘jailbreaking’, researchers were able to bypass moderation controls in half of the cases tested, allowing chatbots to help the researchers play out scenarios mirroring likely violent extremist activities, including creating divisive content or misinformation, and even providing guidance on recruitment and attack strategies. Even a chatbot’s moderation control preventing it from analysing individual social media feeds can be overcome with a little persistence.
Militant groups, such as Islamic State, have reportedly explored AI's potential for their own purposes. However, the true disruptive power of AI may not come from state actors or extremist groups, but rather from the sheer volume of artificially generated narratives, amplifying confusion and further eroding trust in reliable sources of information.
The popularity of companion chatbots illustrates another dimension of AI’s persuasive power: its apparent ability to empathise with human users.
In a study comparing human emotional awareness to that of ChatGPT, the AI system outperformed human participants. Because generative AI can adapt to an individual’s emotional and psychological state, researchers found a ‘dangerous potential to further enmesh people in their own idiosyncratic worlds’.
This has implications for recruitment into violence. Following an inquiry into the Southport attack, UK Prime Minister Keir Starmer recently warned that ‘loners and misfits’ face growing risks of radicalisation through an array of content encountered online.
Similarly, LLMs can outperform humans in tests of moral reasoning. AI-generated messages can appear virtuous and trustworthy, provided users are unaware that the source is artificial. Consequently, in a democracy, those with malign intent are vulnerable to exposure if their use of such technology can be demonstrated, though this will be difficult in practice. Certainly, there was ideological pushback among supporters of an Islamic State affiliate when a sympathetic media channel adopted AI-generated newsreaders.
This is why the marketing and communications industry is not simply promoting itself when it talks down generative AI: in many cases, technology needs a human face to be believed. Many AI companies are looking to strengthen user empowerment and autonomy through technical design. But responses need to go further to maintain human control of the technology, with a focus on educating users so they can harness the benefits while avoiding the pitfalls.
Strengthening Democratic Resilience
Lessons from the rise and misuse of social media suggest that societies must act swiftly. While today’s AI does not yet possess general reasoning capabilities, the pace of its evolution indicates that societies must adapt now.
Discussions often focus on existential threats posed by AI, but the more immediate, practical challenges of generative AI are frequently overlooked. The safeguards and guardrails being designed into LLMs are decisive, but they are little understood and barely discussed outside a limited circle of experts. Without more democratic discussion, generative AI’s emerging role in shaping public discourse will remain opaque. Few people understand how these systems operate, and this must change.
The best aspects of social media lie in its ability to redistribute democratic power. Generative AI is more complex still, and defending democracies from its misuse will require sustained discussion and attention. Some governments have begun this work. In an era of disruption, societies and individuals equipped with knowledge and resources will be better positioned to navigate the coming waves of informational and economic transformation.
Simply exposing disinformation or identifying dangerous influencers will not be enough. To maintain democratic control, users must be empowered as citizens. AI literacy must extend beyond technical proficiency to equipping individuals, as part of society, to navigate the complexities of AI-generated influence and uphold democratic values.
Looking back, regulating social media could have been handled far better. Failing to address AI’s influence today could be a far graver mistake.
© Matt Freear, 2025, published by RUSI with permission of the author
The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.
WRITTEN BY
Matt Freear
Associate Fellow