Alternative Risk Categories
The standard AI risk taxonomy (risk-decomposition) frames severity along an Individual → Catastrophic → Existential spectrum. The AI Safety Atlas (Ch.2) introduces two off-axis severity types that don’t fit this spectrum: i-risks (loss of meaning) and s-risks (astronomical suffering).
Ikigai Risks (i-risks)
Named after the Japanese concept of ikigai (life’s purpose), i-risks involve scenarios where humans survive and prosper but lose meaning and purpose. These emerge when AI systems become more capable than humans at all meaningful activities.
The structural argument:
- Humans might lose their sense of purpose when AI systems can create better art, conduct better research, and outperform people at every task that traditionally gave life meaning
- Unlike extinction or suffering risks, i-risks involve scenarios where humans are safe and materially comfortable but existentially adrift
Proposed mitigations and their problems:
- Creating artificial constraints that preserve human relevance, which raises questions about whether that relevance is authentic
- Finding new forms of purpose in human-AI collaboration, though it is uncertain whether artificially preserved meaning satisfies human psychological needs
i-risks overlap with enfeeblement (cognitive atrophy reduces capability and sense of agency) but are conceptually distinct: enfeeblement is about capability loss; i-risks are about meaning loss.
Suffering Risks (s-risks)
s-risks involve astronomical amounts of suffering that could vastly exceed all suffering in human history. Unlike extinction risks, which eliminate experience entirely, s-risks create futures filled with terrible suffering.
Scenarios the Atlas mentions:
- Future civilizations create vast numbers of artificial sentient beings that, if created carelessly, experience genuine suffering
- Digital slavery — trillions of artificial minds performing computational labor under terrible conditions
- Detailed evolutionary simulations or consciousness experiments inadvertently creating millions of suffering beings within them
- Simulated beings experiencing genuine suffering even though they exist only as computational processes
The s-risk argument concedes that these scenarios sound like science fiction, but holds that “the potentially enormous stakes involved and the irreversible nature of such outcomes if they occurred” warrant consideration.
Why These Categories Matter
The standard Individual/Catastrophic/Existential severity axis maps onto how many humans are harmed and how badly. i-risks and s-risks add two orthogonal dimensions:
- Severity of harm to non-humans (s-risks): what about astronomical numbers of artificial sentients?
- Quality of existence for survivors (i-risks): what about humans who are physically safe but existentially gutted?
These are minority concerns within AI safety, but they inform a broader moral landscape. They connect to longtermism and population-ethics questions: when assessing futures, are we evaluating only “human survival” or also “what kind of future is being created”?
Connection to Wiki
- risk-decomposition — the standard framework these supplement
- existential-risk — i-risks and s-risks complicate the standard “extinction or permanent disempowerment” definition
- longtermism — the philosophical context where these concerns are most legible
- population-ethics — relevant to s-risks (moral weight of artificial sentients)
- derek-parfit — Parfit’s work on personal identity and population ethics is foundational
- enfeeblement — adjacent to i-risks but conceptually distinct
These categories are not central to the wiki’s main focus, but they appear in serious AI safety discussion (especially around summary-ea-in-age-of-agi and EA s-risk discourse). This page exists to disambiguate the terms when readers encounter them.
Related Pages
- risk-decomposition
- existential-risk
- longtermism
- population-ethics
- derek-parfit
- enfeeblement
- summary-ea-in-age-of-agi
- atlas-ch2-risks-01-risk-decomposition
- ai-safety-atlas-textbook
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- AI Safety Atlas Ch.2 — Risk Decomposition — referenced as [[atlas-ch2-risks-01-risk-decomposition]]