How to Prepare for AGI — Benjamin Todd

Source — Benjamin Todd’s Substack post on personal preparation for AGI, written by the co-founder of 80,000 Hours.

Context

Most writing about transformative-ai focuses on what society should do — ai-governance, ai-alignment research, ai-safety policy. This article takes a different angle: what can an ordinary individual do to prepare for AGI personally? Todd assumes AGI could arrive within a few years, triggering an intelligence-explosion that transforms life as we know it, and works through the practical implications for personal decisions.

The framing is explicitly conditional: “Whether you find this timeline plausible or not, it’s worth thinking through.” Todd focuses on scenarios where individual action matters, acknowledging that some outcomes (extinction, total transformation) render personal preparation moot.

The Seven Recommendations

1. Seek Out Informed People

During COVID, having access to people who were “ahead of the curve” was invaluable. Todd argues the same applies to the AI transition, but lasting a decade rather than two years. Recommended sources include carl-shulman, paul-christiano, leopold-aschenbrenner, ajeya-cotra, daniel-kokotajlo, the 80,000 Hours Podcast, the Dwarkesh Podcast, Zvi Mowshowitz, The Cognitive Revolution, and Epoch AI. The advice: join a team that’s plugged in, learn as much as you can, and use the most advanced AI models to build intuitions for their capabilities and limits.

2. Save Money

Todd presents a distinctive economic thesis: wages will likely rise at first but may later collapse. Once capital can be converted directly into AI labour more cheaply than employing humans, there’s no reason to employ most people. At that point, you live on state-provided basic income plus whatever you’ve saved. Money also provides option value in radically uncertain times, and investment returns could be very high.

The optimistic case: “most likely everyone is going to become rich by today’s standards” because GDP could grow 100x, making welfare generous. But if any goods remain scarce — land, compute, political influence — personal wealth still matters.
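The scale of the "everyone becomes rich" claim is easier to feel with the compounding arithmetic spelled out. A sketch (the 100x multiple is Todd's; the time horizons below are illustrative assumptions, not from the article):

```python
# Annualised growth rate implied by a 100x GDP increase over N years:
# solve (1 + r)**N = 100  =>  r = 100**(1/N) - 1.

def annual_growth_rate(multiple: float, years: int) -> float:
    """Annualised growth rate needed to reach `multiple` in `years` years."""
    return multiple ** (1 / years) - 1

for years in (10, 20, 50):
    rate = annual_growth_rate(100, years)
    print(f"100x over {years} years requires about {rate:.1%} per year")
```

Even spread over 50 years, 100x growth implies roughly 10% annual growth, several times the historical rate for rich economies; over a decade it implies well over 50% per year, which is why Todd treats this as a transformation rather than ordinary growth.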

3. Invest for the AGI Transition

Standard 60/40 portfolios may underperform. Todd hints (without giving specific financial advice) that it’s possible to do much better by holding assets likely to appreciate in the lead-up to AGI. He also suggests that long-dated fixed-rate debt could become cheap to repay if the transition brings rapid growth in nominal incomes. He references writing by Zvi Mowshowitz, Dwarkesh Patel, and others on AI-era investing.
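The fixed-rate-debt point is just compounding in reverse: a fixed nominal payment shrinks as a share of income when incomes grow quickly. A minimal sketch, where all the specific numbers are illustrative assumptions rather than figures from the article:

```python
# A fixed annual debt payment as a fraction of income, after `years`
# of compound income growth. Fast nominal growth erodes the real
# burden of fixed-rate debt.

def payment_share_of_income(payment: float, income: float,
                            income_growth: float, years: int) -> float:
    """Fixed payment divided by income after compound growth."""
    return payment / (income * (1 + income_growth) ** years)

# Hypothetical: a payment equal to 20% of today's income, if incomes
# grow 30% per year for 10 years.
share = payment_share_of_income(20_000, 100_000, 0.30, 10)
print(f"payment falls to {share:.1%} of income")
```

Under those assumed numbers the burden drops from a fifth of income to under 2%, which is the mechanism behind "cheap to repay after significant growth"; the bet, of course, only pays off if rapid growth actually materialises.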

4. Learn Complementary Skills

Human labour remains a bottleneck until AI can do everything humans can. Until then, skills that AI can’t yet replicate and that complement AI deployment rise in value. Todd’s list:

  • Applying AI to solve real problems
  • Personal effectiveness and productivity
  • Social skills
  • Learning how to learn
  • Entrepreneurship, management, and strategy
  • Technical expertise in ML, robotics, AI hardware, cybersecurity
  • Communications
  • Getting things done in government
  • Complex physical skills (e.g., data centre or robotics construction)

This connects directly to career-capital — the same investment logic but with an AGI-specific filter.

5. Get Citizenship in an AI-Wealthy Country

The US is “by far the best positioned country for AGI.” Close allies with supply-chain relevance get cut into a deal: the EU (especially the Netherlands due to ASML), UK, Japan, South Korea, Australia, and Canada. China is well-positioned if timelines are longer (2030s), but naturalization is extremely rare. The rest of the world still benefits from cheaper AI services, robot-produced goods, and having energy/raw materials needed in the supply chain.

This is a geopolitical framing of ai-governance at the individual level — where you hold citizenship determines your share of AGI wealth.

6. Build Personal Resilience

Todd dismisses “going off-grid” as a meaningful defence against AI risk — “as with going to Mars, it won’t protect you.” However, the intelligence-explosion will likely destabilise the world order, potentially through nukes, bioweapons, cyberattacks, or new WMDs. Having a plan to go somewhere outside major cities, with several months of supplies, has modest value. More importantly: invest in mental health, groundedness, and the ability to deal with stress. Be part of a group that’s willing to act on conclusions that defy common sense.

7. Prioritise Pre-AGI Goals

The intelligence-explosion will likely either start within six years or take much longer, and “we’ll know a lot more in just three.” This yields a practical heuristic: delay anything that can wait three years, and prioritise what you’d want in place before an explosion starts.

Examples: if you want to live in both the US and Argentina, live in the US first. Consider delaying having children by three years, by which point uncertainty will be much reduced. If medical technology improves dramatically within 10-20 years, it becomes more important to stay alive until then (avoid dangerous hobbies) but less important to worry about health problems that only bite 20+ years out.

Key Tensions

Todd acknowledges a core tension: all recommendations should also be fine if timelines are much longer. None should be taken to the extreme. “Spending all your money on a final three-year binge” is the wrong approach. The article seeks robustly good actions across scenarios, not bets on a single timeline.

There’s also a tension between personal preparation and social impact. Todd’s other writing focuses on making the transition go well for society, which, “given the insane stakes, should be where we focus.” This article is explicitly the personal complement to that public-interest work.

Significance

This is one of the few pieces in the knowledge base that addresses the personal agency dimension of AGI preparation. Most sources focus on institutional, technical, or philosophical responses. Todd bridges the gap between existential risk analysis and individual life decisions, applying the same effective-altruism reasoning to personal planning that 80,000 Hours applies to career choice.