Data quality for alignment — SR2025 Agenda Snapshot

One-sentence summary: Improves the quality, signal-to-noise ratio, and reliability of human-generated preference and alignment data.

Theory of Change

Alignment quality depends heavily on the quality of the underlying data (e.g., human preference judgments); by improving the signal from annotators and reducing noise and bias, this agenda aims to produce more robustly aligned models.
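
A minimal sketch of the kind of noise reduction this theory of change points at: filtering pairwise preference comparisons by inter-annotator agreement before they reach reward-model training. The data format, field names, and threshold below are illustrative assumptions, not taken from any of the groups listed on this page.

```python
from collections import Counter

def filter_preferences(examples, min_agreement=0.75):
    """Keep only pairwise comparisons where annotators largely agree.

    Each example carries a 'labels' list with one vote ('A' or 'B') per
    annotator. Low-agreement items are dropped as noisy; the rest get a
    majority label plus an agreement weight that downstream reward-model
    training could use. Data format and threshold are hypothetical.
    """
    kept = []
    for ex in examples:
        counts = Counter(ex["labels"])
        label, votes = counts.most_common(1)[0]
        agreement = votes / len(ex["labels"])
        if agreement >= min_agreement:
            kept.append({**ex, "label": label, "weight": agreement})
    return kept

# Three annotators per comparison; the second item (2/3 agreement) is dropped.
data = [
    {"prompt": "p1", "labels": ["A", "A", "A"]},
    {"prompt": "p2", "labels": ["A", "B", "A"]},
]
print(filter_preferences(data))
```

Agreement-based filtering is only one option; weighting by annotator reliability or modeling label noise explicitly are common alternatives with the same intent.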

Broad Approach

engineering

Target Case

average

Orthodox Problems Addressed

Superintelligence can fool human supervisors, Value is fragile and hard to specify

Key People

Maarten Buyl, Kelsey Kraus, Margaret Kroll, Danqing Shi

Funding

Anthropic, Google DeepMind, OpenAI, Meta AI, various academic groups

Estimated FTEs: 20-50

Critiques

A Statistical Case Against Empirical Human-AI Alignment

See Also

synthetic-data-for-alignment, scalable oversight, assistance-games-assistive-agents, model-values-model-preferences

Outputs in 2025

5 items in the review. See the wiki/summaries/ entries with frontmatter agenda: data-quality-for-alignment (generated alongside this file from the same export).

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.