AI companies are unlikely to make high-assurance safety cases if timelines are short
Ryan Greenblatt — 2025-01-23 — Anthropic — LessWrong / AI Alignment Forum
Summary
Argues that frontier AI companies are unlikely (<20%) to successfully make high-assurance safety cases for TEDAI (top-expert-dominating AI) systems if timelines are short (within roughly 4 years), citing difficulties with security, detecting scheming, and insufficient AI acceleration of safety work.
Source
- Link: https://lesswrong.com/posts/neTbrpBziAsTH5Bn7/ai-companies-are-unlikely-to-make-high-assurance-safety
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
  - control — Black-box safety (understand and control current model behaviour) / Iterative alignment