Summary: 80,000 Hours Podcast — Nova DasSarma on Information Security and AI
Overview
In this episode of the 80,000 Hours Podcast, Nova DasSarma makes the case that information security is a foundational pillar of AI safety, one that is critically underfunded and underappreciated relative to its importance. The core argument: even if alignment research succeeds and safety-conscious organizations develop responsible AI, none of that matters if adversarial actors can steal model weights, training data, or research insights and deploy them without safety measures.
Information Security as AI Safety Infrastructure
DasSarma reframes information security (infosec) not as a peripheral concern but as a prerequisite for every other safety strategy. The argument follows a clear chain:
- AI alignment research aims to make models safe.
- Responsible scaling policies aim to deploy models carefully.
- AI control techniques aim to manage potentially misaligned models.
- All of these are defeated if model weights are stolen and deployed by actors without any safety commitments.
This makes infosec a “load-bearing” component of the entire AI safety stack. A lab with perfect alignment research but weak security is not safe — it is a source of uncontrolled proliferation.
Threat Model: State-Level Adversaries
DasSarma focuses on the highest-capability threat: nation-state actors targeting AI labs. State-level adversaries bring resources, sophistication, and persistence that far exceed typical cybersecurity threats:
- Intelligence agencies with multi-billion-dollar budgets and decades of experience in technical espionage.
- Persistent access — state actors can maintain covert access to compromised systems for months or years.
- Supply chain attacks — compromising hardware, software dependencies, or cloud infrastructure used by AI labs (a mitigation sketch follows this list).
- Human intelligence — recruiting insiders at AI companies through incentives, coercion, or ideology.
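The episode catalogs these attack classes without prescribing specific countermeasures. As a minimal sketch of one control relevant to the supply-chain vector, assuming artifacts ship with digests pinned through a trusted, reviewed channel, the Python below verifies a downloaded file (a dependency or a model checkpoint) against its pinned SHA-256 digest before use; the file and function names are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large artifacts never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, pinned_digest: str) -> None:
    """Refuse to proceed if the artifact does not match its pinned digest."""
    actual = sha256_of(path)
    if actual != pinned_digest.lower():
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {pinned_digest}, got {actual}"
        )

# Hypothetical usage: verify before loading, never after.
# verify_artifact(Path("model-checkpoint.bin"), pinned_digest="<digest from a trusted build>")
```

Pinning digests in version control and changing them only through code review turns a silent substitution of a dependency or checkpoint into a loud, attributable failure.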
DasSarma argues that current AI lab security practices are woefully inadequate for this threat level. Most labs are defending against cybercriminals rather than intelligence agencies, the wrong threat model given the geopolitical significance of frontier AI capabilities.
The Gap Between Current Practice and What Is Needed
The episode identifies a significant gap between where AI labs are today and where they need to be:
- Security culture — Most AI researchers have little security training and do not think adversarially about protecting their work.
- Infrastructure hardening — Lab computing infrastructure is not designed to withstand state-level attacks.
- Operational security — Basic practices such as compartmentalization, need-to-know access, and secure communications are often absent or poorly implemented (see the access-control sketch after this list).
- Incident response — Labs may not detect breaches quickly enough to limit damage.
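DasSarma names these practices at the level of principle rather than implementation. As one hedged illustration of compartmentalization and need-to-know access, the sketch below denies access by default, requires an explicit per-user grant for each asset, and audits every decision; the asset names, user names, and logging scheme are invented for the example.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access")

@dataclass
class AccessPolicy:
    """Deny-by-default, need-to-know access: a user reaches an asset only via
    an explicit grant, and every check leaves an audit trail."""
    grants: dict[str, set[str]] = field(default_factory=dict)  # asset -> users

    def grant(self, asset: str, user: str) -> None:
        self.grants.setdefault(asset, set()).add(user)

    def check(self, asset: str, user: str) -> bool:
        allowed = user in self.grants.get(asset, set())
        # Log allows and denials alike; denials are what incident
        # responders look for first.
        log.info("access %s user=%s asset=%s",
                 "ALLOW" if allowed else "DENY", user, asset)
        return allowed

# Hypothetical usage:
policy = AccessPolicy()
policy.grant("model-weights-v3", "alice")
assert policy.check("model-weights-v3", "alice")        # explicit grant
assert not policy.check("model-weights-v3", "mallory")  # default deny
```

The design choice worth noting is the default: absence of a grant means denial, so forgetting to configure access fails closed rather than open.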
Career Paths in AI-Related Information Security
DasSarma discusses the career opportunity in AI-related infosec, noting that:
- The field is talent-starved — there are far more positions that need filling than qualified candidates.
- Security expertise from other domains (government, finance, critical infrastructure) transfers well.
- The work has outsized impact because it protects all other safety work.
- It is a field where even incremental improvements (better access controls, improved monitoring, security training for researchers) can meaningfully reduce risk; the monitoring sketch below illustrates one such control.
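To make the "improved monitoring" item concrete, here is a minimal sketch of an exfiltration tripwire that flags an account whose downloads from a sensitive store exceed a byte budget within a sliding time window. The budget, window, and event shape are assumptions chosen for illustration, not figures from the episode.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

class DownloadMonitor:
    """Flag accounts whose download volume from a sensitive store exceeds
    a per-window byte budget: a crude but cheap exfiltration tripwire."""

    def __init__(self, window: timedelta = timedelta(hours=1),
                 byte_budget: int = 10 * 2**30):  # assumed baseline: 10 GiB/hour
        self.window = window
        self.byte_budget = byte_budget
        self.events: dict[str, deque[tuple[datetime, int]]] = defaultdict(deque)

    def record(self, user: str, nbytes: int, now: datetime) -> bool:
        """Record a download event; return True if the user should be flagged."""
        q = self.events[user]
        q.append((now, nbytes))
        # Evict events that have fallen out of the sliding window.
        while q and now - q[0][0] > self.window:
            q.popleft()
        return sum(n for _, n in q) > self.byte_budget

# Hypothetical usage:
mon = DownloadMonitor()
t0 = datetime(2022, 6, 1, 12, 0)
assert not mon.record("alice", 2 * 2**30, t0)                     # 2 GiB: under budget
assert mon.record("alice", 9 * 2**30, t0 + timedelta(minutes=5))  # 11 GiB total: flag
```

A threshold this simple will not stop a patient state actor, but it raises the cost of bulk weight exfiltration and gives incident responders something to alert on.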
Connection to Other Safety Work
The infosec perspective connects to several other episodes in the collection:
- Holden Karnofsky explicitly mentions model weight theft as a key risk in his AI takeover episode.
- Nick Joseph discusses information security as part of Anthropic’s responsible scaling policy.
- Buck Shlegeris’s AI control work assumes models are being operated by the developing lab — model theft bypasses all control mechanisms.
Significance
This episode fills a critical gap in AI safety discussions, which tend to focus on alignment, governance, or interpretability while treating security as an implementation detail. DasSarma’s argument that infosec is load-bearing — that it undergirds every other safety strategy — reframes the field’s priorities.
The episode is also notable for its career guidance. For people who want to contribute to AI safety but do not have ML research backgrounds, information security is a high-impact path that draws on a different skill set (systems security, penetration testing, incident response, threat modeling) from the typical alignment research career.