AI Safety Institute

AI Safety Institutes (AISIs) are government-funded bodies that evaluate frontier AI models for dangerous capabilities and develop the technical foundations for AI regulation. The first two were announced by the United States and United Kingdom around the November 2023 AI Safety Summit. By 2024, an international network had formed.

Origins

Both the UK AISI (under the Department for Science, Innovation and Technology) and the US AISI (under NIST, within the Department of Commerce) were announced in late 2023. Their stated mandate is technical: pre-deployment evaluation of frontier models, threat modeling for misuse and loss-of-control scenarios, and informing government policy with engineering-grade evidence.

US–UK Memorandum of Understanding (April 2024)

On 1 April 2024, US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan signed a formal MoU committing the two AISIs to jointly develop advanced AI model testing — a direct follow-on from Bletchley Park commitments.

International Network

At the May 2024 AI Seoul Summit, the UK announced an international network of AI Safety Institutes spanning ~10 countries plus the EU, intended to share information, tooling, and evaluation methodology. New AISIs followed in Japan, Singapore, France, Canada, Kenya, and others. The UK AISI also opened a San Francisco office to be physically close to leading frontier labs.

UK AISI: Systemic AI Safety Fast Grants

In May 2024, DSIT announced £8.5 million in funding for systemic AI safety research, led by Christopher Summerfield and Shahar Avin, in partnership with UK Research and Innovation. The programme focuses on sociotechnical risks beyond the individual-model frame — closer to the systemic safety research direction in Unsolved Problems in ML Safety.

Significance

AISIs represent a structural innovation in AI governance: rather than rely on industry self-regulation or general regulators trying to grow technical depth, governments built dedicated technical bodies. Their evaluation reports increasingly carry weight in pre-deployment decisions at frontier labs. Their existence partly resolves what ai-governance frames as the central challenge of “maintaining technical competence in regulatory bodies.”

Limitations are real: AISIs have evaluation access via voluntary lab agreements, not statutory authority; their findings inform but do not bind deployment decisions; staffing and compute remain modest compared to frontier-lab budgets.

Connection to This Wiki