Steganography evals — SR2025 Agenda Snapshot

One-sentence summary: evaluate whether models can hide secret information or encoded reasoning in their outputs, such as in chain-of-thought scratchpads, to evade monitoring.

Theory of Change

if models can use steganography, they could hide deceptive reasoning, bypassing safety monitoring and control measures. By evaluating this capability, we can assess the risk of a model fooling its supervisors.

Broad Approach

behavioral

Target Case

worst-case

Orthodox Problems Addressed

A boxed AGI might exfiltrate itself by steganography, spearphishing, Superintelligence can fool human supervisors

Key People

Antonio Norelli, Michael Bronstein

Funding

Anthropic (and its general funders, e.g., Google, Amazon)

Estimated FTEs: 1-10

Critiques

Chain-of-Thought Is Already Unfaithful (So Steganography is Irrelevant): Reasoning Models Don’t Always Say What They Think.

See Also

ai-deception-evals, chain-of-thought-monitoring

Outputs in 2025

5 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: steganography-evals (these were generated alongside this file from the same export).

Source

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.