Opportunity Space: Renormalization for AI Safety
Lauren Greenspan, Dmitry Vaintrob, Lucas Teixeira — 2025-03-31 — PIBBSS — LessWrong
Summary
A research agenda proposing to apply renormalization group techniques from physics to develop better interpretability methods for neural networks, including identifying natural scales in models and developing unsupervised feature-extraction techniques to surpass sparse autoencoders (SAEs).
Source
- Link: https://lesswrong.com/posts/wkGmouy7JnTNtWAbc/opportunity-space-renormalization-for-ai-safety
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
  - other-interpretability — White-box safety (i.e., interpretability)