Rob Wiblin
Rob Wiblin is the host of the 80000-hours Podcast, one of the most substantive long-form interview series covering ai-safety, ai-alignment, existential-risk, and effective-altruism. Through in-depth conversations with leading researchers and practitioners, Wiblin has made some of the most technical and consequential ideas in AI safety accessible to a broad audience.
The 80,000 Hours Podcast
The podcast features polished transcripts with inline links on each episode page, making it both an audio and a text resource. Wiblin's interview style is notable for its depth: episodes routinely run two to three hours, allowing guests to develop complex arguments fully rather than reducing them to soundbites.
The AI safety episode collection represents some of the most substantive public conversations about existential risk from artificial intelligence, featuring researchers and leaders from openai, anthropic, deepmind, open-philanthropy, redwood-research, the future-of-humanity-institute, and other key organizations.
Key Interviews
Wiblin’s AI safety interviews span the field’s major themes:
- Core alignment research — Conversations with paul-christiano on iterative-amplification and jan-leike on superalignment.
- AI risk scenarios and forecasting — Two episodes with ajeya-cotra on transformative-ai timelines and deceptive-alignment, and two with holden-karnofsky on concrete safety measures and AI takeover scenarios.
- Safety engineering and governance — buck-shlegeris on ai-control, nick-joseph on responsible-scaling-policy, and Nova DasSarma on information security.
- Constructive critique — ben-garfinkel scrutinizing classic AI risk arguments while supporting expanded safety work.
Significance
Wiblin's podcast has become a primary entry point for technically literate audiences seeking to understand AI safety. Several episodes, particularly the paul-christiano interview, are widely cited as among the best available introductions to AI alignment. The podcast's consistent quality across dozens of AI-focused episodes has made 80000-hours a central node in the AI safety information ecosystem.
Related Pages
- 80000-hours
- effective-altruism
- ai-safety
- ai-alignment
- 80k-podcast-index
- ai-control
- ajeya-cotra
- anthropic
- ben-garfinkel
- buck-shlegeris
- deceptive-alignment
- deepmind
- existential-risk
- future-of-humanity-institute
- holden-karnofsky
- iterative-amplification
- jan-leike
- nick-joseph
- open-philanthropy
- openai
- paul-christiano
- redwood-research
- responsible-scaling-policy
- superalignment
- transformative-ai
- 80k-podcast-paul-christiano