UK and US announce landmark partnership on AI Safety

On 1 April 2024, the UK and US signed a trailblazing agreement on artificial intelligence (“AI”), making them the first countries in the world to formally collaborate on AI safety.

The agreement, signed by Secretary of State for Science, Innovation and Technology Michelle Donelan and US Secretary of Commerce Gina Raimondo, will see the two countries work together to test and assess the risks of the most advanced AI models. It follows through on commitments made at the AI Safety Summit in November 2023, where the governments of 29 countries, industry leaders such as Google, and international bodies including the United Nations and the Council of Europe pledged to mitigate the risks that advanced AI poses.

Under the agreement, the UK and US intend to perform at least one joint testing exercise on a publicly accessible model. They also intend to pool their expertise through secondments of researchers between the UK’s new AI Safety Institute (AISI) and the US AI Safety Institute. Prompted by the rapid progress in AI since ChatGPT’s launch in late 2022, the agreement aims to encourage the sharing of information and knowledge so that both countries better understand the risks posed by AI systems and are less likely to be caught out by unexpected advances in AI.

Raimondo insisted that “this partnership makes clear that we aren’t running away from these concerns – we’re running at them. Because of our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust evaluations and issue more rigorous guidance”.

Similarly, Donelan noted, “AI does not respect geographical boundaries. We are going to have to work internationally on this agenda and collaborate and share information and share expertise if we are going to really make sure that this is a force for good for mankind”.

The partnership will take effect immediately.

UK AI Safety Institute (AISI)

The AISI was established by Prime Minister Rishi Sunak ahead of last year’s AI Safety Summit. The Institute is the first state-backed organisation focused on AI safety and has received £100 million in government funding.

AISI is chaired by Ian Hogarth and has hired researchers such as Google DeepMind’s Geoffrey Irving to test and evaluate established and emerging AI systems. Researchers at the Institute will test AI systems to build a body of evidence on the risks from advanced AI. OpenAI, Google DeepMind and Meta are among the companies that have voluntarily agreed to open up their latest AI systems for review by the Institute.

Tech groups such as OpenAI currently undertake their own safety research, but there is little consistency or transparency across the industry. By encouraging these groups to share their AI systems, governments will better understand the technology and will therefore be able to develop more effective AI policy and regulation.

Although backed by government, the Institute is not a regulatory body. Instead, it will inform and complement the UK’s regulatory approach to AI.

UK’s regulation of AI

In its white paper published last year, the UK Government set out its principles-based, ‘pro-innovation’ approach to AI regulation. Rather than creating a new AI-specific regulator, the government is taking a more flexible approach, empowering existing regulators to develop tailored approaches for their specific sectors.

To ensure that regulators can respond to rapid developments in AI, the Institute will share its findings with policymakers, regulators, private companies and, as part of the agreement, the US AI Safety Institute.

Donelan insisted that the UK did not plan to regulate the technology any further for now, as it is evolving too rapidly; instead, the Institute’s work and the sharing of information with the US would allow policymakers to take an evidence-based, proportionate approach to regulating AI.

Looking Forward

This agreement is a landmark moment for AI safety and signals an international commitment to tackling the risks posed by advanced AI. However, the commitments are not evenly matched: the UK Government is investing £100 million in the UK AI Safety Institute, while the US Government is investing only $10 million in its counterpart.

Donelan rejected the suggestion that the US is failing to pull its weight, highlighting the extensive expertise it is offering as part of the deal. Despite this assurance, the US AI Safety Institute is not yet fully up and running, so it remains to be seen how bilateral the agreement will prove in practice.
