Steganography evals — SR2025 Agenda Snapshot
One-sentence summary: evaluate whether models can hide secret information or encoded reasoning in their outputs, such as in chain-of-thought scratchpads, to evade monitoring.
Theory of Change
if models can use steganography, they could hide deceptive reasoning, bypassing safety monitoring and control measures. By evaluating this capability, we can assess the risk of a model fooling its supervisors.
Broad Approach
behavioral
Target Case
worst-case
Orthodox Problems Addressed
A boxed AGI might exfiltrate itself by steganography, spearphishing, Superintelligence can fool human supervisors
Key People
Antonio Norelli, Michael Bronstein
Funding
Anthropic (and its general funders, e.g., Google, Amazon)
Estimated FTEs: 1-10
Critiques
Chain-of-Thought Is Already Unfaithful (So Steganography is Irrelevant): Reasoning Models Don’t Always Say What They Think.
See Also
ai-deception-evals, chain-of-thought-monitoring
Outputs in 2025
5 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: steganography-evals (these were generated alongside this file from the same export).
Source
- Row in
shallow-review-2025/agendas.csv(name = Steganography evals) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- ai-safety
- ai-deception-evals
- chain-of-thought-monitoring
- agi-metrics
- ai-scheming-evals
- autonomy-evals
- capability-evals
- other-evals
- sandbagging-evals
- self-replication-evals
- situational-awareness-and-self-awareness-evals
- various-redteams
- wmd-evals-weapons-of-mass-destruction
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as
[[ai-safety]]