Mild optimisation — SR2025 Agenda Snapshot

One-sentence summary: Avoid Goodharting by getting AI to satisfice rather than maximise.

Theory of Change

If we fail to exactly nail down the preferences of a superintelligent agent, we die to Goodharting → shift the agent from maximising its utility function to satisficing it → we get a nonzero share of the lightcone rather than zero; there is also a moonshot chance that this is the recipe for fully aligned AI.
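Mild optimisation is usually formalised either as satisficing (accept any action whose utility clears a threshold) or as quantilization (Taylor 2016: sample from the top q-quantile of actions under a safe base distribution). Below is a minimal sketch of both next to a hard maximiser, assuming a finite action set and a uniform base distribution; the action set and proxy utility are illustrative placeholders, not anything drawn from this agenda's outputs.

```python
import random

def maximise(actions, utility):
    """Hard maximiser: always picks the argmax of the proxy utility,
    which is exactly where Goodhart errors concentrate."""
    return max(actions, key=utility)

def satisfice(actions, utility, threshold):
    """Satisficer: accept any action whose proxy utility clears the
    threshold, chosen at random instead of pushed to the extreme."""
    good_enough = [a for a in actions if utility(a) >= threshold]
    return random.choice(good_enough) if good_enough else None

def quantilize(actions, utility, q=0.1):
    """q-quantilizer (Taylor 2016): sample uniformly from the top
    q-fraction of actions, here with a uniform base distribution.
    q = 1 recovers the base distribution; q -> 0 recovers the maximiser."""
    ranked = sorted(actions, key=utility, reverse=True)
    return random.choice(ranked[:max(1, int(len(ranked) * q))])

# Toy demo with a deliberately imperfect proxy utility.
actions = range(1000)
proxy = lambda a: a                # stand-in for a misspecified utility
print(maximise(actions, proxy))    # 999: the Goodhart-prone extreme
print(quantilize(actions, proxy))  # a random action from the top decile
```

The point of the sketch: the maximiser's output is fully determined by the proxy's errors at the extreme, while the satisficer and quantilizer trade some expected proxy utility for actions that stay closer to the base distribution.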

Broad Approach

cognitive

Target Case

mixed

Orthodox Problems Addressed

Value is fragile and hard to specify

Funding

Google DeepMind

Estimated FTEs: 10-50

Outputs in 2025

4 items in the review. See the wiki/summaries/ entries with frontmatter agenda: mild-optimisation (generated alongside this file from the same export).

Source

Sources cited

Primary URLs harvested from this page's summary references. Auto-generated by scripts/backfill_citations.py; update by re-running the script, not by hand.