Alignment first, intelligence later

Chris Lakin — 2025-03-30 — Softmax — Substack (Locally Optimal)

Summary

Argues for a ‘teleological’ (purpose-first) approach to AI alignment and introduces Softmax, a new company developing multi-agent RL systems in which alignment emerges organically from the interdependence of agents.

Source