Light Touch AI Regulation in the UK

Last month, the European Parliament adopted the Artificial Intelligence Act, the first comprehensive law regulating the use of AI. With the UK seeking to take a significantly different approach to regulation, who has got it right?

UK Approach

In a white paper published last year, the UK Government introduced its principle-based, ‘pro-innovation’ approach to regulation.

The UK Government has sought to implement a more flexible approach to regulation, empowering existing regulators, such as the Medicines and Healthcare products Regulatory Agency (MHRA) or the Law Society of Scotland, to devise bespoke approaches for specific sectors.

The non-statutory principles allow cohesion across sectors, while allowing regulators the freedom to use their domain-specific expertise to tailor regulation to the specific context in which the AI is used.

The 5 principles underpinning the UK framework are:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The sector-by-sector approach will be supported by a set of central government functions to assist regulatory co-ordination across sectors. The proposed central risk function will allow the government to identify, assess, prioritise and monitor AI risks that may cut across several sectors or fall through the gaps between regulators.

EU Approach

The EU approach is starkly different, favouring a more static and ‘horizontal’ regulation, creating statutory rules for AI across all sectors and applications. The regulation will be monitored and implemented by a central regulatory body.

The EU AI Act operates on a risk-based approach and defines four levels of risk for AI systems:

  1. Unacceptable Risk (e.g. government-run social scoring)
  2. High Risk (e.g. scoring of exams or automated examination of visa applications)
  3. Limited Risk (e.g. chatbots)
  4. Minimal or No Risk (e.g. spam filters)

The level of regulation will depend on the category an AI system falls under: systems posing an unacceptable risk will be banned; high-risk systems will be subject to intense scrutiny; limited-risk systems to transparency obligations; and minimal or no risk systems to very little formal regulation.

Unlike the UK framework that regulates the use of AI, the EU AI Act regulates the technology itself.

Who got it right?

The UK framework is not statutory and instead relies on the participation of regulators. Some consider this approach too laissez-faire. In contrast, the EU AI Act is a creature of statute, and compliance is a legal requirement. Given the existential risks posed by AI, some commentators support a more hard-line approach, while critics argue that it smacks of over-regulation.

The use of existing regulators in the UK allows regulation to be moulded to fit sector-specific needs, and given the wide-ranging use of AI, many believe that an overarching, blanket approach is not appropriate.

The principle-based approach of UK regulation certainly allows more room for flexibility in an ever-changing world of AI, but it raises very practical issues about boundaries for regulators. Many fields of AI use will involve more than one regulator. Ensuring a cohesive, joined-up approach by regulators who may have differing priorities will be a challenge. Equally, achieving effective regulation without competing and potentially conflicting approaches across industries will take time. Some critics have also expressed concern that UK regulators may lack the technical AI expertise or resources to adequately address emerging risks.

Conversely, the EU approach is more likely to give rise to consistency in approach and compliance but may not be as well equipped to keep up with the fast-paced nature of AI.

Final thoughts

AI is revolutionary and ever-evolving, making regulation incredibly difficult. With its focus on immediate harms, the EU AI Act may risk stifling innovation and the ability to adapt to longer-term harms. The UK framework offers the ability to adapt and evolve alongside AI, but at the risk of overlooking AI's more immediate harms. Achieving a balance in effective regulation of AI may seem like a tall order, but with UK regulators set to publish their AI annual strategic plans by 30 April 2024, the picture may become a little clearer.
