Julian Hazell
Julian Hazell is a grants officer at open-philanthropy, focused on reducing catastrophic risks from transformative AI. He publicly shares his views on promising AI safety projects via his Substack, Secret Third Thing.
Position and Role
Hazell makes grants at Open Philanthropy to talented people working on projects that could reduce catastrophic risks from transformative AI. He has held this role for at least two years as of mid-2025. His views, while informed by his role, are explicitly personal and not official Open Philanthropy policy.
Threat Model
Hazell states his threat model directly: he believes there is a real chance that AI systems capable of causing a catastrophe (including human extinction) will be developed within the next decade. This belief drives his focus on funding work that could reduce those risks.
Key Contribution: Ten AI Safety Projects (July 2025)
In July 2025, Hazell published a widely cited post listing ten AI safety projects he’d like more people to work on. The list functions as a funder’s signal of gaps in the AI safety ecosystem. Project #6, “AI Safety Living Literature Reviews,” is particularly significant: it explicitly names living literature reviews as an under-explored model for synthesizing AI safety research, and it links to an open-philanthropy RFP for this work.
The other nine projects span AI security field-building, technical AI governance research, in-the-wild AI agent monitoring, AI safety communications consultancy, AI lab monitoring, $10B resilience planning, AI fact-checking tools, AI auditors, and AI economic impact tracking.
See ten-ai-safety-projects-julian-hazell for the full breakdown.
Relevance to AI Safety Atlas
Hazell’s explicit call for “AI safety living literature reviews” (maintained by experts, continuously updated, and covering key safety research agendas) is the direct precedent for the AI Safety Atlas project planned in this wiki. His post is the primary source cited for that idea in the design documents.