Countdown to the AI Safety Summit: Six Expert Views on What to Expect
Six experts weigh in on the UK's first AI Safety Summit. Will the summit deliver on the UK government's ambitions, or is it set to disappoint? What is the role of China and civil society? And what can we learn from the defence and healthcare sectors when it comes to AI safety?
The UK government announced in June that it would hold the first global AI Safety Summit this year. Public confidence in the summit seemed to wane as it took the government months to announce the date and location. After much speculation and rumour, more details are now emerging about what the summit will entail. Given the UK government's aim of becoming a global AI superpower, the summit is a critical opportunity for the UK to shape the global AI discourse. Will it deliver on the government's ambitions, or is it set to fail? What good can come from the summit, and what are its potential shortfalls? Experts from different areas of AI policy, innovation and research explore what they expect from the AI Safety Summit and which areas they think it should address.
The Challenges of Regulation
Pia Hüsch
The AI Safety Summit will not be able to avoid the topic of AI regulation. With increased public attention on the opportunities, and especially the risks, of AI, there has also been growing demand for governments to respond to such potential threats.
Enacting AI regulation is not a black-and-white decision; it must account for many factors – legal, technical and economic, to name just a few. As such, decisions on the form and degree of regulation always involve a complex interplay of sensitive factors, such as the national security considerations that will be discussed at the summit.
While the AI Safety Summit will not be able to provide definitive answers to the broad challenges of AI regulation, it is nevertheless an opportunity for the UK to clarify its position on AI regulation and to seek alliances. Alongside diplomatic efforts to improve dialogue and partnerships among all participants, publishing a common commitment or consensus statement from participants would be a good start. Such a statement could set out key next steps or identify where there is agreement on common challenges and priorities. Similarly, the summit could result in the announcement of a multi-stakeholder effort to formalise dialogues on AI safety: for example, through an international institution, which would help to ensure regular dialogue among key stakeholders, or by turning the AI Safety Summit into an annual event. While the establishment of an international institution seems less likely, identifying modes of future cooperation, including on AI regulation, will be a key task of the summit.
China’s Role at the AI Summit
Huw Roberts
The decision to invite China to the UK’s AI Summit has proven controversial. Detractors, including Japan, have argued that the summit should focus on building consensus between ‘like-minded countries’ and that it is too early to involve China in discussions. Supporters consider it essential to have one of the world’s most important AI powers present for making progress towards stronger global AI governance. I am firmly in the latter camp.
AI is a general-purpose technology that can be applied across the whole economy for different tasks. Accordingly, strengthening global AI governance will not involve the creation of a single all-encompassing treaty or international body, but will instead require piecemeal agreements focused on different harms, technologies and sectors. ‘Like-minded countries’ are already progressing this type of governance in forums which exclude China, such as the G7, the Council of Europe and the Global Partnership on AI.
For the so-called 'frontier AI' on which the summit focuses – systems that could severely threaten public safety and global security – the risks of shunning China are too great. Without China's buy-in, any measures designed to strengthen AI safety will prove ineffective. Consider advanced AI systems improving access to information that could pose biosecurity threats, an example often drawn upon by the summit's organisers. If China is not involved in agreeing measures designed to mitigate this risk, then there is scope for Chinese systems to be used to access this information.
The summit provides a rare opportunity to engage with China and draw on its expertise in mitigating risks from cutting-edge AI. Organisers should be cognisant of this as final decisions are made on whether to involve China in central discussions or leave it largely sidelined.
Ethics over Economics
John Tasioulas
Economic growth must be dethroned from its position as the animating objective of AI policy. Ethics, often mistakenly construed as only a series of restrictions on the pursuit of growth or as merely ‘soft’ regulation, should be elevated in its place. This means policy self-consciously driven by the over-arching goal of enabling us, as individuals and communities, to flourish. Ensuring that AI-based technology is developed in a just and beneficial way requires a diversity of measures, including the following three. At the upcoming AI Safety Summit in early November, an urgent priority is the establishment of treaty regimes restricting military and other applications of AI technology that pose a threat to life on a massive scale. Engaging pragmatically with states that are not liberal democracies, notably China, is essential.
We must also prioritise developing AI technology that promotes human welfare, rather than merely cutting costs or facilitating surveillance. Central here is using AI tools to enhance citizen participation in political processes, a matter on which much can be learnt from Taiwan's inspiring experiment in digital democracy. Finally, we need more creative thought about the social obligations of tech corporations, rejecting the Friedmanite dogma that their only responsibility is to maximise profit, yet without lapsing into the opposite error of allowing them to usurp governmental functions.
The Ghost at the Feast: Civil Society is Missing
Stephanie Hare
Civil society is noticeably absent from the AI Safety Summit guest list. While the summit will be preceded by four official pre-summit events with the Royal Society, the British Academy, techUK and The Alan Turing Institute, these cannot replace the expertise and perspectives of labour unions, human rights and civil liberties groups, children's rights organisations, and environmental and climate groups, as well as voices from areas where we know some of the greatest harms of bias and discrimination are already occurring, such as policing, healthcare, education, financial services and the creative industries. Such voices were included in the first US Senate-led AI Insight Forum and are leading some of the most vocal and effective protests. They have contributed to the drafting of the EU AI Act, the landmark legislation which is expected to receive final approval in December.
So far, UK civil society is expected to make do with two pre-summit online events, one on X (formerly known as Twitter) and another on LinkedIn. But they will not have a seat at the table.
This is a mistake and a wasted opportunity. AI is already changing the way that people work and live. To maximise its benefits and minimise its harms, we need a shared understanding of our citizens’ views and experiences, needs and desires, hopes and fears – with community buy-in and co-creation. In short, we need to upgrade the social contract. I hope that it is not too late for the government to use some of its £100 million budget to open the door and pull up a few more chairs around the table.
Time to Share: A Rare Case Where Defence is Ahead?
Patrick Hinton
The exact structure and invitee list of the UK's AI Safety Summit are not yet clear, but the event presents a potentially significant opportunity to broaden understanding of the risks of AI proliferation. The importance of explicable and robust AI is widely recognised within defence circles. Indeed, the UK's Defence Artificial Intelligence Strategy, published in June 2022, embraces the mantra 'Ambitious, safe, responsible', with the importance of people emphasised throughout.
As AI technology is integrated into more processes in both the military and civilian sectors, the relationship between the two, which has often been problematic, becomes even more crucial. The importance of the interplay between the British Ministry of Defence and industry is demonstrated in the Land Industrial Strategy (LIS), published in 2022. The proliferation of AI, which is expressly mentioned in the LIS, presents an opportunity for military forces to share best practice and norms surrounding accountability and organisational structure that civilian partners are still getting to grips with.
Perhaps counterintuitively to the outside observer, safety is paramount within military forces. Risk management is ingrained in the military canon and sits front and centre in decision-making. In the UK, the Defence Safety Authority works to protect personnel and operational effectiveness through regulation, assurance, enforcement and investigation. As the summit approaches, the defence sector should be prepared to show what it has learned about public–private cooperation and how it can contribute to the safe integration of AI into wider society.
Regulating AI: Lessons from the Healthcare Experience
Michael Boniface
The AI Safety Summit is a significant opportunity for collective action to ensure that our global healthcare systems remain safe, secure, effective and accepted by society. Healthcare systems are under pressure across the globe. Developed states have ageing societies which are living longer with more complex needs, while low- and middle-income countries are seeking ways to use digital technologies to address major healthcare system challenges, such as responding to infectious-disease threats and strengthening primary care. AI and the autonomous systems it enables – systems operating with minimal human supervision – offer huge potential to address these challenges, improving outcomes, reducing service costs and supporting medical discovery.
Medicine is exemplary in its approach to safety, and AI is no different in that respect: software algorithms underpinning medical devices and wellbeing services are tested through trials with robust protocols. Success criteria bring together safety, efficacy and user acceptance. The cautious approach taken by regulators such as the US Food and Drug Administration and the UK Medicines and Healthcare products Regulatory Agency highlights why AI adoption in healthcare is proceeding, and is likely to continue to proceed, slowly and with care.
Of course, complex AI systems raise concerns around interpretability, accountability, data quality, data bias and public acceptance. Healthcare professionals must understand how predictions and classifications are made, while patients must be represented in the algorithms used to treat them. In addition, the adaptive nature of AI systems – with algorithms regularly retrained as new data emerges – requires a more agile approach to the governance and certification of medical devices.
Healthcare can offer a model approach to safety. Regulators working in the sector are making great progress in understanding AI safety risks across a spectrum of autonomy and situations. My hope is that the AI Safety Summit will draw lessons from healthcare and other highly regulated industries where human safety is of paramount importance.
The views expressed in this Commentary are the authors’, and do not represent those of RUSI or any other institution.
WRITTEN BY
Dr Pia Hüsch, Research Fellow, Cyber
Huw Roberts, Associate Fellow
John Tasioulas
Stephanie Hare
Major Patrick Hinton, Former Chief of the General Staff's Visiting Fellow, Military Sciences
Michael Boniface