Centre for Long-Term Resilience (CLTR)
The Centre for Long-Term Resilience is an independent UK think tank whose mission is to transform global resilience to extreme risks, with a primary focus on AI risks, biosecurity, and government risk management. CLTR is the most established UK-based policy organization explicitly focused on catastrophic risk from AI.
Overview
- Founded: ~2020 (described as “set up five years ago” in 2025 sources)
- Status: Independent, non-partisan
- Geographic focus: UK with international engagement
- Funding: Founders Pledge among others
AI Policy Work
Per CLTR’s AI Policy & Research page, the organization:
- Works directly with the UK Government and the wider AI policy community
- Develops and implements best-practice governance recommendations to protect against catastrophic AI risks
- Has contributed to:
  - The Ministry of Defence’s AI Strategy report recognizing AI as an extreme risk
  - The refreshed UK Biosecurity Strategy
  - Extensions to the National Security Risk Assessment time horizon
  - The 2024 report How the UK Government can govern the risk of loss of control
Risk Areas
CLTR addresses risks from AI in three distinct registers:
- Misuse — AI-enabled bioweapons, disinformation, cyber attacks
- Loss of control / unintended behaviors — particularly in national security and critical infrastructure
- Power concentration — socioeconomic consolidation driven by AI
Significance
CLTR functions as the policy-side counterpart to the UK AI Safety Institute (AISI): where AISI conducts technical evaluations, CLTR drives legislative and governance work. Together with the Future of Life Institute (Brussels) and CeSIA (Paris), it forms the policy-think-tank tier of European AI safety institutional capacity.
Connection to This Wiki
- The UK node in the European AI catastrophic-risk policy network.
- Anchor for governance work upstream of the UK AI Safety Summit 2023 and the post-Bletchley AISI ecosystem.
Related Pages
- ai-safety-institute
- ai-safety-summit-2023
- future-of-life-institute
- cesia
- enais
- ai-governance
- ai-safety
- biosecurity
- existential-risk
- summary-ai-xrisk-belgium-europe
- lawzero