Daniel Ziegler

ML safety researcher at OpenAI. Ziegler focuses on empirical, engineering-driven approaches to AI alignment, particularly reward-learning and rlhf (reinforcement learning from human feedback).

Work

Ziegler shares catherine-olsson’s emphasis on prototyping and experimentation as a research methodology. Rather than purely theoretical alignment work, he advocates for implementing alignment ideas in code, running experiments, and iterating rapidly between theory and practice. This approach surfaces challenges that only become visible during implementation, a form of empirical safety research.
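
To make the engineering flavor concrete: the core of reward learning is small enough to prototype in a few lines. A reward model is trained with a pairwise preference loss so that it scores human-preferred responses above rejected ones. Below is a minimal PyTorch sketch of that loss, using the standard Bradley–Terry pairwise formulation common in RLHF work and made-up reward values; it illustrates the general technique, not necessarily Ziegler's exact setup.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor,
                      rejected_rewards: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the reward model to score the
    # human-preferred response above the rejected one for each comparison.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy batch of 4 comparisons with made-up scalar rewards from a reward model.
chosen = torch.tensor([1.2, 0.3, 2.1, -0.5])
rejected = torch.tensor([0.4, -0.1, 1.9, -1.0])
print(f"preference loss: {reward_model_loss(chosen, rejected).item():.4f}")
```

In a full RLHF pipeline, a reward model trained this way would then score policy samples during reinforcement learning (e.g. with PPO), and prototyping each stage like this is exactly the kind of rapid theory-to-code iteration described above.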

Along with Olsson, Ziegler makes the case that the AI safety field needs ML engineers as much as alignment theorists. Practical safety work — implementing prototypes, running evaluations, testing alignment techniques — requires engineering skills that are more broadly accessible than PhD-level theory.