WMD evals (Weapons of Mass Destruction) — SR2025 Agenda Snapshot
One-sentence summary: Evaluate whether AI models possess dangerous knowledge or capabilities related to biological and chemical weapons, such as knowledge bearing on biosecurity risks or hazardous chemical synthesis.
Theory of Change
By benchmarking and tracking AI’s knowledge of biology and chemistry, we can identify when models become capable of accelerating WMD development or misuse, allowing for timely intervention.
Broad Approach
behaviorist science
Target Case
pessimistic
Key People
Lennart Justen, Haochen Zhao, Xiangru Tang, Ziran Yang, Aidan Peppin, Anka Reuel, Stephen Casper
Funding
Open Philanthropy, UK AI Safety Institute (AISI), frontier labs, Scale AI, various academic institutions (Peking University, Yale, etc.), Meta
Estimated FTEs: 10-50
Critiques
See Also
capability-evals, autonomy-evals, various-redteams
Outputs in 2025
6 items in the review. See the wiki/summaries/ entries with frontmatter agenda: wmd-evals-weapons-of-mass-destruction (these were generated alongside this file from the same export).
Source
- Row in `shallow-review-2025/agendas.csv` (name = WMD evals (Weapons of Mass Destruction)) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- autonomy-evals
- capability-evals
- various-redteams
- agi-metrics
- ai-deception-evals
- ai-scheming-evals
- other-evals
- sandbagging-evals
- self-replication-evals
- situational-awareness-and-self-awareness-evals
- steganography-evals
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]