A Practical Regulatory Landscape for Frontier AI Part 1 – Introduction, International Standards and Key Actors

Introduction

In this series of posts I attempt to lay out a system of regulation for frontier AI systems. This is my current best guess at what a practical system of regulation might look like – there are many ways a system like this could arise and I don’t claim to know how to turn a vision like this into a concrete set of steps to get there.

Nonetheless, I think it’s valuable to think about what a comprehensive system of governance might look like. It is possible that much of the regulation suggested here is overreaching and unnecessary – though I think it more likely that it will not go far enough. Laying out a best guess will help to identify these gaps and redundancies, and at the very least improve my own internal model of a positive future.

I plan to write many posts in this series addressing different aspects of regulation. So far I plan to write about:

  • A framework for model evaluations
  • Licensing hardware and compute
  • Internal governance of AI firms
  • A regulatory regime to create accountability for individuals at AI firms
  • A national AI ombudsman to catalogue public concerns

If there is anything that you think is missing from this vision then please let me know! In formulating this model I have taken a lot of inspiration from the regulation of the finance industry, broader concepts in corporate governance, and ideas from fellow participants in the AI Governance course run by BlueDot Impact.


International Standards

The basis of regulation in many industries comes from agreement in principle across a large number of state actors (though, as the Brussels effect suggests, rules originating in a single large jurisdiction such as the EU can also come to shape global markets). These principles will be non-technical and non-prescriptive, but will set a benchmark against which any regime established by regulatory bodies can be compared. Though it is not in the scope of this article to consider exactly what these principles should be, I have included a few illustrative examples below:

  • Firms should ensure that models remain within the control of humans
  • Firms should ensure that their training data does not lead models to discriminate on the basis of protected characteristics
  • Firms should ensure that the capabilities of a model are well understood before it is deployed for commercial or other purposes

Agreement on such principles is vital; it will lay the foundation for any effective governance of powerful machine learning models.


Regulatory Regime Structure

Agreement on broad principles as described above creates an abstract framework for policy – it is the role of governments, industry and independent bodies to take this vision and practically implement it using a set of regulations and standards.

Regulatory Bodies

I believe it is important that nations set up new regulatory bodies for frontier AI. It is difficult to regulate an industry that is so deeply technical, and that has the potential to impact so many people, without specialists dedicated to proactively understanding the field. Similarly, because the standards will need to be high to prevent the proliferation of dangerous models, they will require a lot of human effort to enforce. Though some industries have been effectively regulated through the oversight of publicly traded companies by financial authorities, many of the key players in AI are funded by private investment and so fall outside the scope of such rules.

Regulatory bodies in this landscape will serve multiple functions, setting regulation in line with international principles to ensure that:

  1. AI firms maintain strong internal governance procedures
  2. Regulators maintain accurate information about currently trained and deployed models
  3. Models with potentially dangerous capabilities are used responsibly
  4. The interests of society are integral to decision making processes at every stage of development
  5. Key senior staff members at AI firms are held accountable for the misuse and/or negligent application of their models

The information gained as part of function (2) will be shared between regulatory authorities internationally, similar to protocols agreed for enforcement bodies in other industries.
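To make function (2) a little more concrete, below is a minimal sketch in Python of the kind of record a shared model registry might hold. Every field name here is my own invention for illustration – nothing reflects any existing regulator’s schema, which would be defined by the regulators themselves in line with the agreed principles.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRegistryEntry:
    """One regulator-held record for a trained or deployed model.

    All field names are illustrative, not a proposed standard.
    """
    model_id: str                 # stable identifier assigned at registration
    developer: str                # legal entity responsible for the model
    training_compute_flop: float  # estimated total training compute
    registered_on: date
    deployment_status: str        # e.g. "in training", "internal use", "deployed"
    evaluation_report_ids: list[str] = field(default_factory=list)

# A record like this could be serialised and exchanged between national
# authorities under an information-sharing agreement.
entry = ModelRegistryEntry(
    model_id="reg-0001",
    developer="Example AI Ltd",
    training_compute_flop=1e25,
    registered_on=date(2031, 1, 15),
    deployment_status="deployed",
)
```

The point of a structured record like this is that it can be kept current as a model moves from training to deployment, and cross-referenced against accredited evaluation reports when shared between authorities.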

Other Key Actors

A successful regime will require the engagement of many organisations and individuals. A non-exhaustive set of these are described below.

AI Firms

AI firms themselves will be required to proactively engage with the regulation and governance of AI. In such a fast-moving field, any expertise held by regulators will inevitably be limited. It will therefore be necessary, to some extent, for firms to evaluate the risks created by their proprietary technologies and to engage proactively with regulators in maintaining strong safety standards. A future post will discuss in more detail what practical internal governance for AI firms might look like.

Accredited Evaluators

Trusted evaluators that exist independently of the firms building potentially dangerous models are vital to the success of any regime. Such evaluators will receive accreditation to conduct this work and to reliably convey the results to firms and regulators. This will be explored further in the next post of this series, which discusses model evaluation regimes.

Financial Auditors

The engagement of financial auditors will be necessary to ensure that any AI firm producing potentially dangerous models conducts its business according to sound corporate governance principles. This is important as a way to enforce good accountability mechanisms within firms and to reduce the risk posed by individual bad actors. It will also limit the extent to which financial incentives can conflict with matters of public safety.

Cybersecurity Consultancies/Experts

It is inevitable that models will be developed that are very useful to society, yet can be repurposed for incredibly dangerous applications by bad actors. State-of-the-art cybersecurity procedures will need to be in place to prevent the exfiltration and proliferation of these models.

AI Ombudsman

As public-facing AI models become increasingly integrated into society, it will be useful for nations to set up an independent AI Ombudsman. The ombudsman will interface with the public, providing a route through which concerns about model behaviour can be raised and then passed on to both national and international regulators. This will be discussed in more detail in future posts in the series.

Industry Bodies

The fast-moving nature of the industry means that regulators will need to engage with firms to better understand what risks are on the horizon and how regulation can be practically implemented. Industry bodies will be important for aggregating many differing opinions into actionable feedback.

Benchmarking Institutes

What “safe” AI looks like will evolve over time, and flexible regulation will involve the adoption of moving benchmarks for safety standards. Benchmarking institutes will be necessary to set these standards and to keep them up to date as our understanding of models improves. A toy sketch of what a moving benchmark might mean in practice is given below.
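As a toy illustration – all names and numbers below are invented for the example – a firm’s evaluation result might be checked against whichever version of a standard is current at the time of assessment:

```python
# Hypothetical versioned safety standard: as understanding improves, the
# benchmarking institute publishes stricter thresholds.
SAFETY_STANDARDS = {
    "v1": {"max_harmful_completion_rate": 0.05},
    "v2": {"max_harmful_completion_rate": 0.02},  # tightened after new research
}

def meets_standard(harmful_completion_rate: float, version: str) -> bool:
    """Check a measured evaluation result against a given version of the standard."""
    threshold = SAFETY_STANDARDS[version]["max_harmful_completion_rate"]
    return harmful_completion_rate <= threshold

# A model that passed under v1 may fail once the benchmark moves to v2.
print(meets_standard(0.03, "v1"))  # True
print(meets_standard(0.03, "v2"))  # False
```

The design point is simply that the standard, not the model, is the moving part: re-assessment against the latest version is what keeps regulation in step with improved understanding.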

Academic Institutes

Universities, nonprofits and international organisations are vital for better understanding issues in the field that are of public interest but may not align with the profit-seeking motivations of companies.
