Summary: Extreme Power Concentration

This summary synthesizes two overlapping source documents — a short overview and the full article — from the 80,000 Hours problem profile on how transformative AI could lead to extreme concentrations of power.

Overview

Power is already concentrated today: over 800 million people live in extreme poverty, and almost six billion people live in countries without free and fair elections. However, some distributed structures exist — global income inequality is falling, over two billion people live in electoral democracies, and no single company earns even 1% of global GDP.

Advanced AI could shatter these remaining balances and enable much more extreme power concentration than has ever been seen.

80,000 Hours ranks this as its second-highest priority problem, behind only power-seeking AI.

The Future Scenario

Within the next decade, leading AI projects may be able to run millions of superintelligent AI systems, each thinking many times faster than humans. Unless action is taken, these systems could:

  • Displace human workers, leaving the vast majority of people with far less economic and political power
  • Be controlled by a tiny number of people with no effective oversight
  • Make their programmed goals the primary force shaping the future, once deployed across the economy, government, and military
  • Hand a small number of people the power to make all important decisions about humanity’s future, if those goals are chosen by the few

ITN Assessment

Scale

The problem’s scale appears potentially very large because:

  • Mechanisms driving AI-enabled power concentration seem plausible (AI replacing human workers, positive feedback loops giving some actors a large capabilities lead)
  • The same dynamics allowing misaligned AI to seize power could also allow humans controlling AI systems to do so
  • This is closely related to the top-rated problem of power-seeking AI takeover

Neglectedness

  • Many people work on power concentration generally (governments, legal systems, academia, civil society)
  • Very few focus specifically on AI-enabled power concentration risks
  • Approximately a few dozen people at a handful of organizations work on this specific risk
  • As of September 2025, the only known public grantmaking round is a $4 million program
  • This extreme neglectedness means additional effort could be highly valuable

Tractability

  • The problem is so neglected that tractability is hard to assess — few have tried
  • Structural forces pushing toward concentration (AI replacing workers, feedback loops creating capability gaps) might be very strong
  • However, several tractable interventions are already identifiable even at this early stage

An Illustrative Scenario

The article presents a fictional but plausible scenario set in 2029:

A US company (called “Apex AI”) achieves a critical breakthrough where their AI can conduct AI research as well as human scientists. This triggers an intelligence explosion with very rapid capability gains. Competitors achieve their own breakthroughs within months. The consequences:

  • Control of information channels: All major information channels get filtered through AI systems controlled by a few actors, making it nearly impossible to get an unbiased picture
  • Organizational hollowing: Employees are progressively replaced by AI, leaving very small numbers of humans wielding huge amounts of power
  • Job automation pattern: Entry-level white-collar jobs get automated first, creating top-heavy organizations with expanded manager classes overseeing AI agents

The article notes this is just one possible scenario — a stronger shift toward more widely accessible inference-time scaling could instead lead to more distributed power.

Why Power Concentration Is Harmful

The Intuitive Case

Handing the keys to humanity’s future to a handful of people seems clearly wrong. Most people would strongly oppose such a scenario.

Specific Arguments

  • Power corrupts: Even those who start with good intentions lose the incentive to promote most people’s interests once their power is secure. When others’ interests become inconvenient, they face strong temptations to renege on their commitments, with no repercussions.
  • Long-lasting and hard to reverse: AI-enabled extreme power concentration would probably be very difficult to undo. The powerful will use AI to maintain their position, enabling unprecedented long-term durability of control.
  • Related to stable totalitarianism: A regime with AI-powered surveillance and enforcement could be far more durable than any historical authoritarian system.

Solutions and Interventions

Concrete interventions being considered:

  • Regulatory frameworks for AI development and deployment
  • Institutional and governance design to distribute AI decision-making
  • Capability distribution strategies to prevent monopolistic control of AI
  • Transparency and accountability measures for leading AI developers

Important caveat: This field is at an extremely early stage. Structural forces pushing toward concentration might be too strong to stop, and poorly executed interventions could backfire.

Reasons for Cautious Optimism

  • It is in almost everyone’s interests to prevent AI-enabled power concentration, including most powerful people today (they have much to lose if out-competed)
  • Concrete, plausibly achievable interventions can already be identified even though problem-solving is so early-stage
  • Far more work needs to be done than there are people doing it, suggesting high marginal returns

Counterarguments

The profile addresses several counterarguments:

  • Power concentration could reduce other AI risks — In some scenarios, more centralized control of AI might make other risks easier to manage
  • The future might still be acceptable — Even with concentration, outcomes might be reasonably positive
  • Efforts could backfire — Poorly executed plans could have unintended negative consequences
  • Power might remain distributed by default — Market forces might naturally prevent concentration
  • It might be too hard to stop — Structural forces could prove unstoppable

Connection to Other Risks

This profile is deeply intertwined with power-seeking AI. The key insight is that the same mechanisms are at play: the dynamics that could allow a misaligned AI to seize power are the same dynamics that could allow a small group of humans controlling AI to concentrate power. The difference is whether the AI or the humans end up in charge — but in both cases, most of humanity loses agency.