AI in Context — YouTube Channel Videos

This summary covers the AI in Context YouTube channel, a video series launched in July 2025 by 80,000 Hours and hosted by Aric Floyd. The channel provides in-depth discussions of artificial intelligence risks, model behaviors, superintelligence scenarios, and the rapid pace of AI progress. It grew from zero to over 324,000 subscribers by late 2025, indicating significant public appetite for accessible AI safety content.

Channel Overview

  • Channel: youtube.com/@AI_In_Context
  • Host: Aric Floyd (5 years studying AI risks)
  • Producer: 80,000 Hours
  • Tagline: “AI moves fast. Let’s get up to speed.”
  • Growth: 5,000 subscribers within two days of launch; 324,000+ by late 2025
  • Total videos: approximately 19 as of late 2025

The channel’s rapid growth demonstrates a large audience for serious, hype-free discussion of AI risks, a space previously underserved by mainstream media.

Key Videos

1. Welcome to AI In Context

The introductory video, in which Aric Floyd explains his background (5 years studying AI risks), the channel’s mission, and its approach. Key themes: the importance of honest, hype-free conversation about AI, and why the general public, not just researchers and policymakers, should be involved in discussions of AI safety.

2. We’re Not Ready for Superintelligence (The AI 2027 Scenario Explained)

The channel’s breakout video (178,000+ views shortly after release). Breaks down the AI 2027 scenario from the AI Futures Project. Topics covered:

  • Expert forecasts of rapid AI transformation
  • AI agents and their implications for the economy
  • Job displacement from advanced AI
  • R&D acceleration loops (AI improving AI research)
  • Security concerns with advanced AI systems
  • Misalignment risks — how powerful AI systems might not do what we want
  • An interview with Daniel Kokotajlo (a key AI forecaster), along with counterarguments to the scenario

This video connects directly to the content in the library’s Transformative AI folder, particularly the AI 2027 overview.

3. AI Could Be a Tool for Global Control (Plus Other Major AI Risks)

Covers AI risks beyond the standard “rogue AI” narrative:

  • Malicious use: intentional weaponization of AI by bad actors
  • Accidents: unintended harms from AI systems
  • Undermining democracy: AI-powered disinformation and manipulation
  • Cyberattacks on infrastructure: AI-enhanced attacks on critical systems
  • Bioterror barriers lowered: AI making it easier to create biological weapons
  • Power concentration: AI enabling unprecedented concentration of control
  • Large-scale accidents: catastrophic failures from bugs, flaws, or weak safety measures

This video provides a broader risk taxonomy than the typical AI safety focus on alignment alone, connecting to themes in summary-ea-forum-key-posts and summary-ea-ai-books.

Significance for the Library

AI in Context represents the public communication arm of the AI safety movement. While the academic papers (academic-papers-index), books (summary-ea-ai-books), and forum posts (summary-ea-forum-key-posts) target audiences already engaged with EA and AI safety, this channel reaches a general audience. Its rapid subscriber growth suggests the ideas in this library are breaking through to mainstream awareness.

The channel is also notable as an example of 80,000 Hours’ evolving strategy: moving beyond career advice and written content into video production aimed at a mass audience.