Roman Yampolskiy

Roman V. Yampolskiy is a computer scientist at the University of Louisville and one of the founding voices of “AI safety engineering” as a discipline. He coined the term at the 2011 Philosophy and Theory of Artificial Intelligence (PT-AI) conference and has consistently advocated for treating AI safety as a research area in its own right.

Notable Work

  • “Artificial Intelligence Safety Engineering: Why Machine Ethics is a Wrong Approach” (2013) — early formal articulation of the AI safety engineering paradigm
  • “Artificial Intelligence Safety and Cybersecurity: A Timeline of AI Failures” (2016) — empirical documentation of AI system failures
  • AI: Unexplainable, Unpredictable, Uncontrollable (book, 2024) — accessible synthesis of his impossibility-results work on safe ASI
  • Multiple papers on impossibility results for safe artificial superintelligence

Connection to LSAIR

Yampolskiy is a 2026 keynote speaker at the LSAIR International Conference on Large-Scale AI Risks (23–24 June 2026, Leuven) — co-organized by KU Leuven and FLI.

Connection to This Wiki

  • Cited in ai-safety as the originator of “AI safety engineering” (2011).
  • Keynote speaker at LSAIR 2026, anchoring the academic-philosophical x-risk programme in Leuven.