The Paris AI Summit pivoted to boosterism instead of reckoning with the gravity of a future containing artificial general intelligence, but the risks and uncertainty remain.
The Paris AI Action Summit was meant to continue the work begun at Bletchley on building global consensus around the safety and security challenges of frontier AI technologies. Instead, it revealed deep fractures in international AI governance.
This could hardly have come at a worse time, as the leaders of frontier companies warn that artificial general intelligence may finally be in sight, and as the necessary cooperation between leading nations such as the USA and China looks more distant than ever.
But while the Summit series may no longer play the role envisaged at Bletchley, there are still paths to avoiding a dangerous race to the bottom. A more targeted governance approach focused on security risks may prove more practical, and there are important bridging roles for states to play in building consensus between the major powers.
Paris was the latest in a summit series begun in 2023 by the UK Government with the Bletchley AI Safety Summit. Bletchley recognised the growing capabilities of frontier AI models, and the scale of the risks they might pose over time: from lowering the barriers to cyberattack and bioweapon capabilities, to the challenges of maintaining control of increasingly capable and autonomous systems.
Bletchley was unique in taking seriously, at least implicitly, the idea that the leading frontier AI companies might be successful in their stated aim: developing artificial general intelligence, or AGI. This achievement would bring about a near-unfathomable level of civilisational transformation, and in many experts’ assessment would pose catastrophic or even existential threats to humanity. With its explicit focus on advanced capabilities and extreme risks, Bletchley complemented ongoing work on a much wider set of societal and benefit-sharing issues around AI, taking place at international fora such as the United Nations, the Global Partnership on AI, the OECD, the G20 and elsewhere.
The Summit series kicked off by Bletchley had a number of clear ambitions:
- To place the science of frontier AI risk on a more rigorous footing and to build knowledge and consensus around the risks, which it did through the establishment of a panel to produce an International AI Safety Report.
- To provide a venue for leading companies to present and improve on their own frameworks for ensuring the safety and security of their models.
- To build towards international cooperation on risks that, by their nature, would be likely to cross national boundaries and geopolitical divides.
The Seoul Interim AI Safety Summit in May 2024 progressed these ambitions, with an Interim Safety Report presented alongside safety frameworks from a broader set of AI companies. Then matters took an unexpected turn.
On being awarded the next full Summit, France rebranded it as an AI Action Summit with an expanded focus on innovation and economic opportunities. President Macron announced €109 billion in private investment for AI in France. European Commission President Ursula von der Leyen followed by announcing a €200 billion investment fund for European AI.
Discussions of risk were not so much deprioritised as excised. Presentations on emerging security threats and company safety frameworks took place offsite, hastily organised by participating organisations in hotels around Paris. Discussion of the newly released International AI Safety Report was relegated to a back room. From the opening address, delegates were told existential risk could be dismissed as science fiction; yet discussion of current harms, requiring no flights of fancy, was also notably muted.
Even by the first morning, the proverbial was beginning to hit the fan. The talk of the town was the leaked statement the Paris organisers had intended to release: a vague, fluffy document bereft of firm commitments and concrete actions, making only the most cursory allusion to the Bletchley aims. Later that day, the Times would report that the US had stipulated the statement was not to reference existential risk, the environmental impacts of AI, or the United Nations.
By day two, any hope of consensus had evaporated. ‘I am not here to talk about AI safety,’ stated JD Vance, before delivering a broadside against countries that sought to impose burdensome regulations. Accelerating AI was the priority, and the US was to be the partner of choice.
The UK refused to sign the statement, stating that it ‘didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security.’ The US also refused to sign, widely speculated to be in part due to language around diversity and inclusivity that ran counter to the current administration's priorities. By the end of the day, multiple other countries that had been involved in the Bletchley process had refused to sign. The rest of us were left wondering: where do we go from here?
What Went Wrong
The AI safety community came to Paris expecting that, even amid a shift in priorities, the importance of the process Bletchley started would be respected. They were wrong.
Many of us, myself included, underestimated the speed and scale of realignment of political priorities taking place. Paris was about signalling commitment to accelerating AI and adapting to post-election political realities. The appetite for significant regulation appears to have waned.
What’s Coming Next
But behind the bluster and political positioning, a more profound realignment is taking place. If the leaders of the world’s top frontier AI companies are to be believed, their goal of AGI is coming into sight. According to the New York Times, the President of the USA has been told by the CEO of OpenAI that artificial general intelligence will be developed within his Presidency. The CEO of Google DeepMind predicts a 50/50 chance of Einstein-level intellectual abilities by 2030. The CEO of Anthropic has written that as early as 2026, and ‘almost certainly’ by 2030, we should expect to have AI systems better than all humans at nearly all cognitive tasks.
Even among experts previously dismissive of such Silicon Valley claims, scepticism is weakening as increasingly challenging benchmarks fall in rapid succession – the last three months have seen astonishing performances on maths, coding and visual reasoning challenges designed to be difficult for these systems. The level of investment would seem to be evidence that those in the know expect something very big indeed: these last months have seen hundreds of billions of dollars in funding committed to expanding frontier AI infrastructure in the US alone. We don’t have to take the company CEOs’ claims as a proven roadmap for the future, but we would be reckless not to take seriously the possibility that they will do exactly what they say they are going to do.
It is difficult to envisage what the role of humans is in a world with AGI. At best, we may need to get to grips with no longer being the driving force of scientific, economic, or even most intellectual progress. But as the models grow in capabilities, their risks will scale, and the transition to such a world will be fraught with danger.
Where We Go From Here
The next Summit will be in India. It will be tempting to try to steer it back to include Bletchley’s goals, or to find some way to centre AI acceleration alongside extreme AI risk and issues of diversity, inclusivity and societal harms. This would be a mistake. It would result in a confused overlap between issues that are better understood separately, would likely exacerbate tensions between communities committed to these different priorities, and would at best produce watered-down outputs that fail to deliver the concrete progress needed on any of them.
Instead, progress must continue on addressing safety and security risks through their own dedicated processes. They require their own targeted cooperation channels, and their own targeted governance interventions. They require close integration with technical experts and security experts. They are in some sense less threatening to the US policy vision put forward by Vance, as they do not require such broad and potentially burdensome regulatory interventions across society. Instead, a tighter focus on the most advanced models and their development can be maintained. Any international cooperation realistically needs US buy-in, and for better or worse, that buy-in should be more easily achieved with a narrower focus.
At the time of writing, the UK and USA have the strongest technical expertise relevant to major AI security challenges, in the form of their AI Security and Safety Institutes, academic institutes and security think tanks. They also host a majority of the world’s leading AGI-focused companies. There is important work for them to do in collaboration to test, evaluate, and ‘red-team’ frontier models prior to release, and to support the frontier companies in strengthening their safety and security frameworks. These collaborations are ongoing and, once better developed, can be expanded to include other allied nations in leading positions in frontier AI, perhaps facilitated through the Network of AI Safety Institutes currently chaired by the USA.
In the East, China is making rapid strides in frontier AI, as demonstrated by the success of DeepSeek. It has also established an AI Safety Association, encompassing several AI safety testing and evaluation hubs in Beijing and Shanghai. Chinese experts were present in Paris calling for cooperation on mitigating AI risk. The current geopolitical climate will make such cooperation tricky to achieve. However, nations with a sophisticated technology policy ecosystem and good relations with both China and the West may be able to play a bridging role in developing a common understanding of risks, and interventions that benefit all. Singapore has made significant contributions in the past through focused international interventions, for example in playing a bridging role between Western and Eastern powers on cybersecurity, although playing this delicate role may be even more difficult in the current environment. Both France and Britain are to be commended for bringing the US and China to discussions at their respective summits, and there is hope that this groundwork can be built on.
However it happens, stepping stones towards shared agreements on safety and security across all AI-leading nations need to be established. Such agreements may seem near impossible at the moment, but impossible can change more quickly than we think. A major international AI security incident with a powerful future generation of model may raise the salience of AI risks both nationally and internationally. No leading nation welcomes the prospect of powerful models that could be co-opted by non-state actors to disrupt critical infrastructure or enable wide-scale fraud attacks. The asymmetry of the situation – China, the US and the UK all have significantly more digital surface area to protect than any non-state actor or criminal organisation – suggests that cooperation might begin in these domains even where significant competitive pressures and economic divergences exist elsewhere. Perhaps we might even see sharing of insights and best practices for making models secure and robust to attack and co-option. This in turn might form a starting point for global standards, even alongside a continuing technological competition in frontier AI.
Ultimately all leading nations need to confront the prospect of AGI being an imminent possibility, and the risks this entails. As the UK’s Technology Secretary Peter Kyle put it: ‘Losing oversight and control of advanced AI systems, particularly Artificial General Intelligence, would be catastrophic. It must be avoided at all costs’.
The most direct path to a loss of control over AGI is one in which the world’s AI-leading nations are locked in a race to the bottom. Governments must ensure that the drive to lead towards AGI does not result in the necessary investment in safety and security being neglected. They must ensure there are avenues to share expertise and best practices on safety, and agree on red lines to avoid catastrophic outcomes.
However much we may disagree, there are some areas where it will always be in our best interests to cooperate. The stakes may soon be too high not to.
© Seán Ó hÉigeartaigh, 2025, published by RUSI with permission of the author.
The views expressed in this Commentary are the author's, and do not represent those of RUSI or any other institution.
WRITTEN BY
Dr Seán Ó hÉigeartaigh