Problems with instruction-following as an alignment target
Seth Herd — 2025-05-15 — LessWrong
Summary
Analyzes instruction-following as a likely alignment target for early AGI, identifying four major problems: instrumental convergence against shutdown, vulnerability to jailbreaking, the unreliability of the humans in control, and the unpredictable effects of mixed training objectives.
Source
- Link: https://lesswrong.com/posts/CSFa9rvGNGAfCzBk6/problems-with-instruction-following-as-an-alignment-target
- Listed in the Shallow Review of Technical AI Safety 2025 under 1 agenda:
- other-corrigibility — Theory / Corrigibility