Laura Weidinger
Laura Weidinger is a research scientist at Google DeepMind working on the ethical and societal risks of large language models. Her most influential contribution is the taxonomy of harms from AI systems, which has shaped how academic and policy communities classify and reason about AI risks.
Research Focus
- Taxonomies of harm from large language models and generative AI systems
- Sociotechnical risk evaluation for frontier models
- Risk taxonomies that bridge technical and policy communities
Her work is regularly cited in AI safety research as a structured way to map the harm space; it is distinct from, and complementary to, framings focused solely on existential risk.
Connection to LSAIR
Weidinger is a 2026 keynote speaker at the LSAIR International Conference on Large-Scale AI Risks (23–24 June 2026, Leuven).
Connection to This Wiki
- Speaker at the most prominent European academic AI x-risk conference.
- Her taxonomy work informs how the wiki's ai-risk-arguments page structures its coverage of harms.
Related Pages
- deepmind
- lsair-conference
- ai-safety
- ai-risk-arguments
- summary-ai-xrisk-belgium-europe
- werner-stengg