AI deception evals — SR2025 Agenda Snapshot
One-sentence summary: research demonstrating that AI models, particularly agentic ones, can learn and execute deceptive behaviors such as alignment faking, manipulation, and sandbagging.
Theory of Change
proactively discover, evaluate, and understand the mechanisms of AI deception (e.g., alignment faking, manipulation, agentic deception) to prevent models from fooling human supervisors and causing harm.
Broad Approach
behavioral / engineering
Target Case
worst-case
Orthodox Problems Addressed
Superintelligence can fool human supervisors, Superintelligence can hack software supervisors
Key People
Cadenza, Fred Heiding, Simon Lermen, Andrew Kao, Myra Cheng, Cinoo Lee, Pranav Khadpe, Satyapriya Krishna, Andy Zou, Rahul Gupta
Funding
Labs, academic institutions (e.g., Harvard, CMU, Barcelona Institute of Science and Technology), NSFC, ML Alignment Theory & Scholars (MATS) Program, FAR AI
Estimated FTEs: 30-80
Critiques
A central criticism is that the evaluation scenarios are “artificial and contrived”. the void and Lessons from a Chimp argue this research is “overattributing human traits” to models.
See Also
situational-awareness-and-self-awareness-evals, steganography-evals, sandbagging-evals, chain-of-thought-monitoring
Outputs in 2025
13 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: ai-deception-evals (these were generated alongside this file from the same export).
Source
- Row in
shallow-review-2025/agendas.csv(name = AI deception evals) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- ai-safety
- chain-of-thought-monitoring
- sandbagging-evals
- situational-awareness-and-self-awareness-evals
- steganography-evals
- agi-metrics
- ai-scheming-evals
- autonomy-evals
- capability-evals
- other-evals
- self-replication-evals
- various-redteams
- wmd-evals-weapons-of-mass-destruction
- lie-and-deception-detectors
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as
[[ai-safety]]