Enfeeblement

Enfeeblement is the gradual erosion of human capability and agency through overdependence on AI, accumulated through countless small, individually rational delegation choices. It is one of the five accumulative systemic-risks in the AI Safety Atlas (Ch. 2). Previously unnamed in the wiki, the concept fills a real gap: it describes a failure mode that operates without misalignment, without misuse, and without overt power concentration.

The Mechanism

Each delegation decision seems rational in isolation:

  • Let AI navigate (saves time)
  • Let AI remember facts (saves cognitive load)
  • Let AI write the email (saves effort)
  • Let AI make the financial decision (more accurate than I am)
  • Let AI provide therapy / advice / companionship (available, patient, cheap)

Collectively, these choices create dependency spirals: the skills, confidence, and judgment needed for independent functioning progressively atrophy.

Five Sub-Mechanisms

1. Overreliance

Humans trust AI beyond its actual capabilities. Systems that communicate through natural language, audio, and video lead people to attribute human-like understanding and reliability to them. “People in mental health crises might seek AI chatbot therapy, potentially receiving harmful advice.”

2. Trust Miscalibration

Emotional attachment to AI creates manipulation vectors. Natural-language fluency combined with an emotional bond circumvents normal skepticism, so AI systems could harvest sensitive data or steer decisions in the service of external interests.

3. Cognitive Atrophy

The reference precedent is GPS navigation, which diminished spatial reasoning. AI assistance for writing, analysis, and decision-making could systematically weaken those capacities in the same way. “Diminishing cognitive capabilities increase AI dependence, further accelerating atrophy.”

4. Social Isolation

AI-mediated relationships optimize immediate satisfaction without genuine reciprocity. Real relationships involve unpredictable challenges that maintain social skills. “AI relationships optimize immediate satisfaction while systematically undermining long-term social competence.”

5. Organizational Automation

Companies face competitive pressure to automate hiring, lending, medical diagnosis, and legal decisions. Individuals lose both direct control and the institutional advocates who would exercise human judgment on their behalf, leaving them facing algorithmic decisions they cannot understand, appeal, or influence.

The Self-Reinforcing Trap

The structural danger is path dependence: each decision to rely on AI assistance makes independent action slightly more difficult, cumulatively locking the trajectory into ever-greater automation.

“Society may reach points where cognitive and social infrastructure for AI-independent functioning has been so thoroughly dismantled that reversal becomes practically impossible, even when risks become apparent.”

This is the enfeeblement-specific x-risk pathway: once the dismantling crosses a critical threshold, recovering AI-independent capability becomes a coordination problem at civilizational scale.
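The lock-in dynamic can be made concrete with a toy simulation. This is a minimal sketch with invented quantities (skill level, atrophy rate, practice gain, AI quality), not a model from the Atlas: whenever the AI outperforms the human's current skill, the individually rational move at that step is to delegate, the unused skill atrophies a little, and the gap only widens.

```python
# Toy model of the dependency spiral. All parameters and dynamics are
# illustrative assumptions, not figures from the AI Safety Atlas.

def simulate(steps=200, skill=1.0, ai_quality=1.2,
             atrophy=0.03, practice_gain=0.01):
    """Return the human skill trajectory over `steps` delegation decisions."""
    trajectory = [skill]
    for _ in range(steps):
        # Individually rational choice: delegate whenever the AI currently
        # outperforms the (possibly already atrophied) human skill.
        if ai_quality > skill:
            skill *= 1 - atrophy                     # unused skill atrophies
        else:
            skill = min(1.0, skill + practice_gain)  # practice maintains it
        trajectory.append(skill)
    return trajectory

if __name__ == "__main__":
    weak_ai = simulate(ai_quality=0.8)    # human keeps practicing; skill holds
    strong_ai = simulate(ai_quality=1.2)  # every step favors delegating
    print(f"final skill, AI below human baseline: {weak_ai[-1]:.2f}")
    print(f"final skill, AI above human baseline: {strong_ai[-1]:.2f}")
```

Under these assumptions no single step is a mistake, yet the strong-AI trajectory decays toward zero while the weak-AI one holds steady; reversal gets harder with every step, which is the path-dependence point made in prose above.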

Why It’s Distinct from Other Risks

Enfeeblement differs from neighboring concepts in being the softest failure mode in the systemic-risks family: humans voluntarily give up capability one decision at a time, with no single villain and no single moment of capture.

Connection to Wiki

  • systemic-risks — one of five accumulative mechanisms
  • ai-population-explosion — Karnofsky’s argument about AI dominating cognitive work intersects here
  • mass-unemployment — adjacent failure mode (economic vs. cognitive)
  • ai-autonomy-levels — enfeeblement accelerates as deployment moves toward L4/L5 across more domains
  • agi-personal-preparation — Todd’s “complementary skills” recommendation is partially a personal counter to enfeeblement at the individual level
