The Alignment Project by UK AISI
Mojmir, Benjamin Hilton, Jacob Pfau, Geoffrey Irving, Joseph Bloom, Tomek Korbak, … (+2 more) — 2025-08-01 — UK AI Security Institute — LessWrong
Summary
UK AISI announces a £15 million global fund for AI alignment and control research, presenting a detailed research agenda across 11 disciplinary areas from information theory to interpretability, with emphasis on underrated approaches in the safety community.
Source
- Link: https://lesswrong.com/posts/wKTwdgZDo479EhmJL/the-alignment-project-by-uk-aisi-1
- Listed in the Shallow Review of Technical AI Safety 2025 under 1 agenda:
  - control — Black-box safety (understand and control current model behaviour) / Iterative alignment