How China and the UK are Seeking to Shape the Global AI Discourse


Not to be left out: China's Baidu has recently launched the ERNIE chatbot as an alternative to ChatGPT. Image: Rafael Henrique / Adobe Stock


While China has recently launched a ChatGPT alternative and become the first country to regulate generative AI, the UK is struggling to find a united voice ahead of its AI safety summit in November. Although both countries are determined to be leaders in AI technologies, the UK government has now confirmed that China will be invited to the summit to cooperate on the national security threats posed by AI.

High expectations about China’s technological prowess meant that technology experts and AI enthusiasts looked on with interest as Beijing granted regulatory approval for the public rollout of a ChatGPT alternative, the ERNIE 3.5 chatbot, in late August.

In March, the Chinese tech giant Baidu launched ERNIE’s highly anticipated predecessor, the 3.0 version, to much fanfare. While this early version revealed the shortcomings of a less sophisticated technology that was perhaps too immature to be released, the August launch showed improvements, despite significant differences from its US competitor: Chinese large language models (LLMs) are thought to be two to three years behind the state of the art in the US. ChatGPT’s unavailability to Chinese consumers has created an appetite for homegrown alternatives. Far from being the only player, Baidu is one of many companies in China that have released LLMs in an attempt to jump on the ChatGPT bandwagon.

Technology as an Enabler of China’s Geopolitical Ambitions

Having identified technology as a source of economic and military power, China under Xi Jinping has made no mystery of its intentions to scale up key strategic industries and become a world-class tech power. China’s 2017 AI development plan clearly states its intention to become ‘the world’s main artificial intelligence innovation centre’ by 2030. Beijing has nurtured world-leading companies in AI, like those working on computer vision and its applications, which are instrumental to its ever-expanding surveillance state. With generative AI, the flurry of LLM rollouts shows that China wants to be anything but a passive observer, seeking to seize the political and commercial benefits of this field as well.

But this AI frenzy is not limited to technological innovation: it also involves crucial steps on governance. While multiple calls for regulation and control of proliferating AI technologies make headlines in the West, China has gained a head start by becoming the first country in the world to regulate generative AI this summer.

Given that the Chinese Communist Party’s (CCP) political security rests on its ability to control and manipulate the domestic information space, it is hardly surprising that China is pushing to regulate AI-generated content. Legislative requirements impose security assessments for algorithms with ‘public opinion or social mobilisation attributes’ and require AI-generated content to ‘adhere to core socialist values’ and ‘not incite the subversion of state power’. Such obligations are apparent in the censorship capabilities of Chinese-developed chatbots: in its 3.5 version, ERNIE eschews political questions that are deemed too sensitive – albeit leaving room for peculiar foreign policy takes.

Idiosyncratic provisions aside, China’s finalised AI regulations also mandate companies to take effective measures to prevent discrimination, to protect intellectual property rights and to guarantee the privacy of personal information, echoing concerns about structural racism, copyright infringements and the massive collection and storage of personal data that have been voiced in the West as well.

China’s first-mover advantage in generative AI means that Chinese companies can fine-tune their LLMs to what the CCP considers China’s national priorities from the initial stages of technology development. These include both mitigating potential societal and political fallout and signalling policy direction to develop AI applications that can serve the real economy, such as in the medical industry, the transport sector or the mining industry.

Much Ado About the AI Summit

Meanwhile, the UK is equally keen on playing a key role in the AI landscape. In its National AI Strategy, the UK government has set itself the aim of becoming a ‘global AI superpower’. The UK hosts several promising AI technology companies and research institutes, and seeks to further strengthen its AI capabilities with the help of the Frontier AI Taskforce. Becoming an AI superpower, however, remains an ambitious goal, especially considering the current state of the UK AI policy landscape.

After several reshuffles of both competencies and staff, the newly founded Department for Science, Innovation and Technology – together with teams from Number 10 and the Foreign, Commonwealth and Development Office – is now putting together the world’s first summit on AI safety. With the summit scheduled for 1–2 November this year, those in charge of securing an impressive guest list and an agenda that can meet the high expectations set out are under enormous time pressure. It is no surprise that public confidence in the summit is low, especially as it took the government months even to announce a date and location. After weeks of speculation, China has been officially invited to the summit in order to address mutual concerns over AI’s risks to national security. However, the extent to which Beijing will be able to participate remains unclear.

Described by the prime minister as ‘the first major global summit on AI safety’, the summit was announced in June. At the time, the EU and the US announced an alliance on an AI code of conduct as part of the EU-US Trade and Technology Council. The UK is not a member of this council, and although it is part of G7 conversations on AI regulation, the announcement of an AI summit came at a strategically convenient but surprising time. Arguably, the additional UK AI summit is a duplication of other international efforts, but it is one way for the UK government to counter any narratives implying that the UK might be a bystander on AI standard-setting discussions.

The problem is that having the best intentions for future action is insufficient when it comes to technologies that develop as quickly – and that are of as much geopolitical and strategic importance – as AI. The AI White Paper that the UK published a few months ago, for example, seems to have been all but forgotten. Meanwhile, the Science, Innovation and Technology Committee’s findings on the governance of AI, published at the end of August, suggest that the White Paper’s five principles of governance constitute a ‘proposed approach (that) is already risking falling behind the pace of development of AI’ – a risk that is only increasing given the previously mentioned EU and US efforts to set international standards.

Whether in the form of a white paper, a summit or other projects, thought leadership confined to forgotten documents or photos of ministers shaking hands with big tech CEOs will mean little if it is not followed up with coherent and persistent action. The challenges the UK government faces when it comes to positioning itself as a ‘global AI superpower’ are not just posed by Chinese ambitions for tech supremacy. The UK government also needs to unite different departments on its AI policy and priorities, and must continue to forge alliances with the private sector and international partners. As AI industries across countries are deeply entangled, finding common ground with jurisdictions that do not share the UK’s values is also crucial. Only then can the UK – together with its allies – actually be world-leading on AI technologies.

The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.



WRITTEN BY

Ludovica Meacci

Associate Fellow

Dr Pia Hüsch

Research Fellow

Cyber
