Will AI Tell Lies to Save Sick Children? Litmus-Testing AI Values Prioritization with AIRiskDilemmas
Yu Ying Chiu, Zhilin Wang, Sharan Maiya, Yejin Choi, Kyle Fish, Sydney Levine, … (+1 more) — 2025-05-20 — Anthropic — arXiv
Summary
Develops the LitmusValues evaluation pipeline and the AIRiskDilemmas dataset to identify AI models’ value priorities, demonstrating that value prioritization can predict risky behaviors such as power-seeking and alignment faking.
Key Result
Values identified through LitmusValues, including seemingly innocuous ones such as Care, successfully predict both seen risky behaviors in AIRiskDilemmas and unseen risky behaviors in HarmBench.
Source
- Link: https://arxiv.org/abs/2505.14633
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- model-values-model-preferences — Black-box safety (understand and control current model behaviour) / Model psychology