Open Philanthropy
Open Philanthropy is a major grantmaking organization aligned with the effective-altruism movement and one of the world's largest funders of ai-safety research. Co-founded by holden-karnofsky and Cari Tuna, Open Philanthropy identifies outstanding giving opportunities across cause areas including global health, animal welfare, biosecurity, and — increasingly — AI safety and governance.
AI Safety Funding
Open Philanthropy has been instrumental in building the institutional infrastructure of the AI safety field. Through its technical AI safety grantmaking — previously led by ajeya-cotra — the organization has funded research at academic institutions, independent labs, and non-profit organizations working on ai-alignment, interpretability, ai-governance, and related topics.
The organization’s funding decisions are shaped by the effective-altruism framework of evaluating cause areas by scale, neglectedness, and tractability. In Open Philanthropy’s analysis, AI safety scores highly on all three dimensions: the potential impact of transformative-ai is enormous, the field remains relatively neglected compared to its importance, and recent progress suggests the problem is tractable.
Key People
- holden-karnofsky — Co-founder; also co-founded givewell; now works at anthropic while maintaining involvement with Open Philanthropy’s strategic direction.
- ajeya-cotra — Former lead of technical AI safety grantmaking; known for her biological anchors AI timelines framework and work on deceptive-alignment. Now at metr.
- julian-hazell — Grants officer for AI safety; published a widely cited list of ten AI safety projects he would like more people to work on (July 2025), including an explicit call for AI safety living literature reviews (see ten-ai-safety-projects-julian-hazell).
Relationship to GiveWell
Open Philanthropy originated as a project within givewell before spinning off as an independent organization. While GiveWell focuses on evidence-based global health interventions with measurable outcomes, Open Philanthropy takes on higher-risk, higher-reward cause areas where evidence is less certain but potential impact is enormous — including AI safety, biosecurity, and longtermism-oriented research.
Significance
Open Philanthropy’s role in the AI safety ecosystem extends beyond direct funding. Its cause prioritization analyses have helped legitimize AI safety as a mainstream philanthropic focus, and its research (particularly Cotra’s timelines work) has shaped how the broader effective-altruism community thinks about the urgency of AI risk. The organization represents one of the most important bridges between EA philosophy and concrete AI safety funding.
Related Pages
- effective-altruism
- ai-safety
- ai-alignment
- ai-governance
- longtermism
- givewell
- holden-karnofsky
- ajeya-cotra
- metr
- transformative-ai
- 80k-podcast-ajeya-cotra-transformative-ai
- 80k-podcast-holden-karnofsky-concrete-safety
- anthropic
- deceptive-alignment
- interpretability
- rob-wiblin
- 80k-podcast-ajeya-cotra-ai-deception
- 80k-podcast-holden-karnofsky-ai-takeover
- julian-hazell
- ten-ai-safety-projects-julian-hazell
- summary-bostrom-existential-risk-priority