AI Safety Atlas Ch.4 — Appendix: National Governance
Major powers have developed distinct regulatory philosophies — rights protection, geopolitical competition, social control — creating a fragmented governance landscape that complicates global coordination. See national-ai-governance.
Foundation: Three Required Mechanisms
A comprehensive domestic AI safety regime requires:
- Development of safety standards
- Regulatory visibility (ASPIRE)
- Compliance enforcement
The Atlas points to nuclear safety regulation as a blueprint:
- Standardized safety frameworks for AI behavior/decision-making
- Independent supervision mechanisms
- Regular protocols and incident-response exercises
- Information sharing mechanisms across industries
European Union — Rights-Centric Horizontal Framework
EU AI Act — formally adopted March 2024, the world’s first comprehensive AI legal framework. It uses a risk-based, four-tier classification:
- Unacceptable risk — banned (e.g., behavior manipulation)
- High risk — strict requirements (critical infrastructure, education, employment)
- Limited risk — transparency measures
- Minimal risk — largely unregulated
Implementation timeline:
- February 2, 2025 — prohibitions on specific AI practices + staff AI literacy requirements
- August 2, 2025 — General-Purpose AI provider obligations (documentation, data transparency)
Enforcement: the European AI Office can impose fines of up to 3% of global annual turnover or €15 million, whichever is higher. The approach prioritizes citizens’ rights through transparency requirements, anti-discrimination provisions, and prohibitions on social scoring.
The EU AI Office is “positioned to shape global AI governance similarly to how GDPR restructured privacy standards” — see brussels-effect.
United States — Geopolitical Competition Focus
US AI governance shifted significantly after the 2024 election. Trump’s Executive Order 14179 revoked Biden’s October 2023 AI safety EO, which had required developers of advanced models to share safety test results with the government.
Current direction: federal agencies are directed to remove barriers to innovation and ensure AI is “free from ideological bias or engineered social agendas.” A separate EO on AI Infrastructure prioritizes national security, economic competitiveness, domestic data centers, and workforce standards.
Distinctive approach: leveraging hardware/compute control. Home to NVIDIA, AMD, and Intel, the US exercises regulatory control via semiconductor export controls explicitly designed to restrict China’s access to advanced chips.
Constraint: congressional gridlock means federal action relies on executive orders and agency measures rather than legislation.
Key US actions:
- US/China Semiconductor Export Controls (October 2022)
- Blueprint for an AI Bill of Rights (October 2022)
- Executive Order on AI (October 2023, since revoked)
- OMB Memo M-25-21 (February 2025) — accelerate federal AI adoption
State level: California’s SB 1047 was vetoed in September 2024; a new bill, SB 53 (including whistleblower protections), has since been introduced.
China — Vertical Iterative Regulation
China takes a distinctively vertical, iterative approach — targeted regulations for specific AI applications, in contrast to the EU’s comprehensive horizontal framework.
Regulatory evolution:
- August 2021 — Algorithmic Recommendation Provisions (world’s first mandatory algorithm registry)
- November 2022 — Deep Synthesis Provisions (synthetic content)
- August 2023 — Interim Measures for Generative AI (risk-based, public-opinion focus)
- 2024 — AI safety elevated to national security level
- March 2025 — AI-Generated Content Labeling Measures (mandatory labels)
Distinctive features:
- Focus on algorithms with social-influence potential, rather than on domains like healthcare or employment
- Broad, non-specific language extends interpretation and enforcement authority to the Cyberspace Administration of China
- Providers must uphold “socialist core values”
- Building toward a proposed comprehensive Artificial Intelligence Law (under discussion since 2023)
Implementation: Shanghai and Beijing AI safety labs established (2024); 40+ government-backed AI safety evaluations conducted. The focus is inward — the regime primarily regulates Chinese organizations. Major Western labs (OpenAI, Anthropic) do not actively serve Chinese consumers because they decline to comply with censorship requirements.
Comparative Summary
| Jurisdiction | Primary Focus | Mechanism |
|---|---|---|
| EU | Individual rights protection | Horizontal comprehensive framework |
| US | Geopolitical competition with China | Hardware export controls + executive orders |
| China | Social control + government value alignment | Vertical iterative algorithm-focused regulation |
The result: a fragmented global governance landscape that complicates international coordination — exactly the obstacle Ch.4’s main text identifies.
Connection to Wiki
This appendix is the comparative-governance reference for the wiki. Connections:
- national-ai-governance — dedicated concept page
- eu-ai-act — substantially expanded with implementation timeline + penalties
- brussels-effect — the EU’s outsized influence
- ai-governance — the parent concept
- atlas-ch4-governance-04-governance-architectures — the architectural frame