Mild optimisation — SR2025 Agenda Snapshot
One-sentence summary: Avoid Goodharting by getting AI to satisfice rather than maximise.
Theory of Change
If we fail to exactly specify the preferences of a superintelligent agent, we die to Goodharting → shift the agent from maximising its utility function to merely satisficing it → we get a nonzero share of the lightcone instead of zero; as a moonshot, this might even be the recipe for fully aligned AI. (A toy sketch of the maximise-vs-satisfice distinction follows.)
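To make the maximise→satisfice shift concrete, here is a minimal Python sketch, not taken from the reviewed agenda: the toy action space, the `proxy_utility` function, the satisficing `threshold`, and the quantile `q` are all hypothetical illustrations. The quantilizer variant (Taylor 2016) is one published formalisation of mild optimisation.

```python
import random

# Toy discrete action space with a hypothetical proxy utility. In the
# Goodhart story, `proxy_utility` only approximates what we actually want,
# and its argmax can be catastrophically bad on the true objective.
actions = list(range(100))

def proxy_utility(a: int) -> float:
    return a / 100.0  # stand-in for an imperfectly specified objective

def maximise(actions, u):
    """Classic maximiser: picks the proxy-optimal action, inviting Goodharting."""
    return max(actions, key=u)

def satisfice(actions, u, threshold: float = 0.8):
    """Satisficer: returns any action that is 'good enough' under the proxy,
    rather than pushing the proxy to its extreme. Falls back to maximising
    only if nothing clears the (hypothetical) threshold."""
    good_enough = [a for a in actions if u(a) >= threshold]
    return random.choice(good_enough) if good_enough else maximise(actions, u)

def quantilise(actions, u, q: float = 0.1):
    """Quantilizer: sample uniformly from the top q-fraction of actions
    ranked by proxy utility -- a common formalisation of mild optimisation."""
    ranked = sorted(actions, key=u, reverse=True)
    top = ranked[: max(1, int(q * len(ranked)))]
    return random.choice(top)
```

The point of both mild variants is that the selected action never has to sit at the proxy's extreme point, which is where Goodhart failures concentrate.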
Broad Approach
cognitive
Target Case
mixed
Orthodox Problems Addressed
Value is fragile and hard to specify
Funding
Google DeepMind
Estimated FTEs: 10-50
Outputs in 2025
4 items in the review. See the wiki/summaries/ entries with frontmatter `agenda: mild-optimisation` (these were generated alongside this file from the same export).
Source
- Row in `shallow-review-2025/agendas.csv` (name = Mild optimisation) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- assistance-games-assistive-agents
- black-box-make-ai-solve-it
- capability-removal-unlearning
- chain-of-thought-monitoring
- character-training-and-persona-steering
- control
- data-filtering
- data-poisoning-defense
- data-quality-for-alignment
- emergent-misalignment
- harm-reduction-for-open-weights
- hyperstition-studies
- inference-time-in-context-learning
- inference-time-steering
- inoculation-prompting
- iterative-alignment-at-post-train-time
- iterative-alignment-at-pretrain-time
- model-psychopathology
- model-specs-and-constitutions
- model-values-model-preferences
- rl-safety
- safeguards-inference-time-auxiliaries
- synthetic-data-for-alignment
- the-neglected-approaches-approach
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]