The Need for a Strategic Approach to Disinformation and AI-Driven Threats
With the 2024 NATO Washington summit now concluded, the UK must address the significant threat posed by AI and disinformation to global security.
Recent changes in governments across NATO countries – with the potential for more in the near future – have unfolded as several countries commit to increased levels of defence spending. UK Prime Minister Keir Starmer went further, declaring his government’s intention to reach a 2.5% of GDP defence spending commitment and announcing plans to publish a strategic defence review in 2025.
The shift in support for defence spending is driven in part by concerns that a potential second Trump presidency would lead to a US retreat from NATO countries that fail to meet the 2% of GDP defence spending threshold. Additionally, the current threat of kinetic warfare on the European continent – unprecedented since the Second World War – has compelled many European countries to enhance their military capabilities and bolster their defence institutions more broadly. This includes investing in modern, technology-enabled capabilities, boosting weapons production, and launching extensive armed forces recruitment programmes.
While NATO members work to support large-scale procurement programmes, threats in the information domain must be neither overlooked nor underestimated. This necessitates continuous efforts to counter the information wars that authoritarian states wage in the online space. In the absence of appropriate data and AI governance standards, the development of AI has exacerbated the creation and spread of disinformation, requiring NATO countries to seek ways to counter these threats while building societies that are resilient to disinformation and capable of critical thinking.
The AI Disinformation Threat
On 10 July 2024, NATO unveiled its updated AI strategy, designed to expedite the safe and responsible integration of AI technologies within the alliance. Notably, the revised strategy highlights the dangers of AI-enabled disinformation and information operations, asserts in strong terms that this issue is now being addressed, and underlines the urgency of the challenge for the entire North Atlantic community.
While the document raises general awareness of the potential malicious use of AI for disinformation, it makes no reference to the ways in which AI could benefit the North Atlantic community from the standpoint of strategic communication, or to the potential use of AI to combat disinformation.
The Current Threat of Disinformation and its Impact on Democracy
The risk that a lack of trust and faith in democratic institutions will worsen without established AI and data standards is significant. This mistrust means that the public may not embrace the opportunities that AI offers, potentially stifling a new era of innovation that is critical to the West's progress. Responsible AI development could drive forward and expedite the process of establishing new Western standards distinct from authoritarian efforts. This would help clear the significant ‘grey area’ which currently characterises the cyber domain and the use and application of AI in relatively ungoverned transboundary space. It would also help middle economies preserve and project the democratic principles that have shaped the transactional rules and standards of the West to date. Embedding new ‘rules of the game’ in a multilateral world where adherence to a rules-based order is waning could support stability by reinforcing transparency and accountability, particularly for middle economies that risk becoming caught between the geopolitical rivalries of leading economies.
For diverse societies and middle economies like the UK, embracing AI and establishing robust standards for its use and application is crucial. By leveraging its diversity to develop inclusive AI systems that benefit all segments of society, the UK can ensure that innovation in AI reflects a broad range of perspectives and needs, fostering a more equitable and prosperous future. This approach would not only enhance domestic cohesion but also position countries like the UK as leaders on the global stage in responsible AI development.
On the other hand, ignoring the need for AI standards could exacerbate societal divisions. Russia's ‘Firehose of Falsehood’ strategy, which relies on the mass dissemination of false information, poses a significant threat. Without proper AI governance, the current AI ecosystem is likely to only amplify this strategy, automating and accelerating the production of fake news. This risks deepening societal fragmentation and polarisation, trapping individuals in echo chambers and locking them in closed communities, while undermining government efforts to create strength and unity. It is therefore critical for countries like the UK to proactively develop and implement AI standards to safeguard their democratic institutions and social cohesion.
What Can be Done?
Given the dual-use nature of transformative technological innovations, the relative sparsity of the cross-disciplinary knowledge and skillsets required to develop more resilient policies and legislation for a data-driven and digitalised world, and the lack of societal awareness concerning disinformation, a multi-pillared approach is necessary to manage AI’s development and impacts. A three-pillared approach to managing disinformation – involving technology, regulation and education – could help build national resilience by restoring faith in national institutions. While a degree of momentum has developed in each of these areas, more comprehensive progress across all three pillars is critical.
Technological Intervention
The technological landscape offers a variety of solutions to the challenge of disinformation and propaganda. These interventions leverage advanced algorithms, machine learning and AI to detect, flag and mitigate misleading content. Among the main tasks for researchers addressing these challenges is the use of state-of-the-art AI to assess both the authenticity of information and the way it behaves online. Techniques such as Natural Language Processing (NLP), machine learning classifiers, graph-based methods and anomaly detection have all been employed. NLP methods such as sentiment analysis and entity recognition help identify suspicious text patterns, while machine learning classifiers categorise information as true or false by learning from labelled datasets. Graph-based techniques analyse how information spreads across social networks to detect disinformation campaigns, and anomaly detection algorithms flag deviations in data patterns that are indicative of falsehoods.
Ongoing efforts also focus on developing tools to distinguish AI-generated content from human-generated text, with models such as ‘Grover’ demonstrating high accuracy. Browser extensions and applications such as ‘NewsGuard’ and ‘Public Editor’ provide real-time credibility alerts, helping users identify reliable sources of information. AI-powered chat moderators and bots, such as Stanford's ModBot, are deployed on social media to remove harmful content in real time, maintaining the integrity of online discussion. The quality of training datasets is crucial, as biased datasets can propagate inaccuracies; initiatives such as Media Bias and Fact Check curate reliable datasets for training AI models. Moreover, assessment methodologies, including counterfactual fairness and algorithmic auditing, help ensure AI models' reliability and fairness by evaluating their performance across diverse contexts.
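To make the classifier-based approach described above more concrete, the sketch below trains a simple text classifier on a handful of labelled examples and scores a new claim. It is a minimal illustration only: the inline examples, the labels and the model choices (TF-IDF features with logistic regression) are assumptions for demonstration, not a description of any deployed counter-disinformation system.

```python
# Minimal sketch of a machine-learning classifier for flagging likely
# disinformation, as described in the text. The tiny labelled dataset and
# the TF-IDF + logistic regression pipeline are illustrative assumptions;
# operational systems train on large, curated corpora.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = likely disinformation, 0 = credible.
texts = [
    "Secret lab confirms miracle cure suppressed by governments",
    "Official statistics office publishes quarterly inflation figures",
    "Anonymous post claims election servers were switched overnight",
    "Health agency releases peer-reviewed vaccine trial results",
]
labels = [1, 0, 1, 0]

# TF-IDF features feed a logistic regression classifier, mirroring the
# 'learning from labelled datasets' step described above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new, unseen claim; the output is a probability that it is disinformation.
claim = ["Leaked memo proves journalists invented the casualty figures"]
print(model.predict_proba(claim)[0][1])
```

In practice, such classifiers are only as reliable as the curated datasets they learn from, which is why dataset quality and algorithmic auditing feature so prominently in the efforts described above.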
Regulatory Measures
Existing legislative frameworks address disinformation by enforcing transparency, accountability and ethical standards, emphasising the requirement for AI systems to respect democratic processes and protect fundamental rights. However, these frameworks require further evolution to tackle issues such as liability and accountability for the malicious use of AI. The EU's AI Act, for instance, categorises AI systems by risk level and mandates transparency for generative AI outputs. Similarly, the Council of Europe’s AI Treaty focuses on ethical and human rights dimensions, while the UN’s AI Advisory Body advocates for global standards to mitigate AI-driven disinformation. In the US and UK, national advisory bodies recommend robust measures against AI misuse and stress the importance of transparency and accountability. Despite these efforts, existing policies often lack enforceable penalties and concrete preventive measures, highlighting the need for stronger enforcement mechanisms, independent oversight, mandatory transparency, and public awareness campaigns.
Educational Efforts
One of the most effective ways to combat disinformation is through education, starting at a young age. Integrating media literacy and critical thinking courses into school curricula empowers students to recognise and resist disinformation by evaluating source credibility, understanding disinformation techniques, and developing scepticism towards unsupported information. Technology can support this effort with interactive apps that use gamification to teach about disinformation. Additionally, student clubs focused on combating disinformation can provide platforms for discussing current events, analysing news sources and sharing strategies. Teaching students how to create disinformation, in order to understand its techniques and motivations, could further foster vigilance. The overall aim would be to promote cautious engagement with online content, helping students grow into informed adults capable of navigating the complex information landscape and thereby contributing to a more discerning society. Beyond younger age groups, higher education must also evolve curricula to reflect the realities of a profoundly transformed, technology-enabled world, and to teach international security models that encompass the cyber and space domains alongside land, sea and air.
Conclusion
In the post-NATO Summit landscape, characterised by changes in national security policies and renewed support for collective security and enhanced defence spending, the impact of disinformation as a leading national security threat must not be underestimated. To ensure that middle economies like the UK become resilient against disinformation’s polarising effects, this article proposes a three-pillared approach that could help capitalise on innovation opportunities, bolster the strength of national institutions, and protect the democratic pillars on which the resilience of middle economies depends. The approach recognises the need for new technologies to combat disinformation, and for democratic and ethical standards to be upheld through regulatory frameworks that are fit for purpose in a data-driven and digitalised world. In addition, promoting awareness of disinformation at all levels of education would help remove the ‘mystique’ associated with the phenomenon, rather than leaving decision-making on disinformation primarily to the domain of intelligence bodies.
The views expressed in this Commentary are the authors’, and do not represent those of RUSI or any other institution.
WRITTEN BY
Professor Ann M. Fitz-Gerald
Senior Associate Fellow
Halyna Padalko