Alignment first, intelligence later
Chris Lakin — 2025-03-30 — Softmax — Substack (Locally Optimal)
Summary
Argues for a ‘teleological’ (purpose-first) approach to AI alignment and introduces Softmax, a new company developing multi-agent RL systems where alignment emerges organically from agent interdependence.
Source
- Link: https://chrislakin.blog/p/alignment-first-intelligence-later
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
  - aligning-what — Multi-agent first