# Compute Governance
Compute governance is the set of policies and mechanisms that regulate access to the computational resources required for advanced AI development: chips, training infrastructure, and cloud compute. The AI Safety Atlas (Ch.4.2) treats it as the most promising governance target because it uniquely satisfies all three criteria from governance-problems: it is measurable, controllable, and meaningful.
## Why Compute Is the Strongest Governance Lever
| Criterion | Compute |
|---|---|
| Measurable | FLOPs are precisely quantifiable; physical traces (data centers, energy use) |
| Controllable | NVIDIA / TSMC / ASML chokepoints |
| Meaningful | Directly constrains what AI can be built |
## Supply Chain Concentration
The AI chip supply chain has structural chokepoints:
- NVIDIA — ~80% of AI training GPU market
- TSMC — dominant chip fabrication
- ASML — monopoly on EUV lithography (Netherlands; relevant to Todd’s geographic-positioning argument)
Concentration of who uses compute:
- Private companies: >80% of global AI computing capacity
- Governments + academia: <20%
- AWS + Microsoft + Google: ~65% of cloud computing
Strategic implication: target specialized AI chips, not general-purpose hardware. “By targeting only the most advanced AI-specific chips, we can address catastrophic risks while leaving the broader computing ecosystem largely untouched.”
## Tools of Compute Governance
### Compute Thresholds
- US Executive Order on AI — notification required for >10²⁶ operations
- EU AI Act — risk assessments for >10²⁵ operations
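The two thresholds can be made concrete with the common 6·N·D approximation for training compute (roughly 6 FLOPs per parameter per training token). A minimal sketch; the model size and token count below are illustrative assumptions, not official figures:

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the standard 6*N*D rule."""
    return 6 * params * tokens

US_EO_THRESHOLD = 1e26      # US EO notification threshold (operations)
EU_AI_ACT_THRESHOLD = 1e25  # EU AI Act systemic-risk threshold (operations)

def thresholds_crossed(flops: float) -> list[str]:
    """Return which regulatory thresholds a training run exceeds."""
    crossed = []
    if flops > EU_AI_ACT_THRESHOLD:
        crossed.append("EU AI Act risk assessment")
    if flops > US_EO_THRESHOLD:
        crossed.append("US EO notification")
    return crossed

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)  # ≈ 6.3e24, below both thresholds
```

Note how even a large present-day run can sit below both lines; the thresholds target the frontier, not the bulk of AI development.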
### Monitoring
Frontier training leaves observable footprints:
- Energy consumption (most reliable; hundreds of MW)
- Network traffic patterns
- Hardware procurement records
- Cooling/thermal signatures
- Power substation construction
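The energy signal can be sanity-checked with a back-of-envelope model of a training run's average power draw. A hedged sketch; the hardware efficiency (FLOP/s per watt) and utilization figures are illustrative assumptions, not measured values:

```python
def average_power_mw(total_flops: float, days: float,
                     flops_per_watt: float, utilization: float = 0.4) -> float:
    """Estimate average power draw (MW) of a training run."""
    seconds = days * 86400
    required_flops_per_sec = total_flops / seconds
    watts = required_flops_per_sec / (flops_per_watt * utilization)
    return watts / 1e6

# A hypothetical 1e26-FLOP run over 90 days, on hardware sustaining
# ~1e12 FLOP/s per watt at 40% utilization:
power = average_power_mw(1e26, 90, 1e12)  # ≈ 32 MW
```

Power draw on this scale is hard to hide from grid operators, which is why energy consumption is listed as the most reliable footprint.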
### KYC for Cloud Compute
Cloud providers sit between hardware and developers, making them natural regulatory chokepoints. Know-your-customer (KYC) requirements for large compute purchases would be analogous to KYC in financial services.
### On-Chip Controls
Active mechanisms built into hardware:
- Usage limits — cap compute for unauthorized AI workloads
- Secure logging — tamper-resistant chip-usage records
- Location verification — chips operate only in approved facilities
- Safety interlocks — auto-pause if conditions aren’t met
Parallels existing security primitives (Intel SGX, TPMs).
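A toy software model of these mechanisms, purely illustrative (real implementations would live in hardware roots of trust such as TPMs, not Python): location allow-listing, a usage cap acting as an interlock, and a hash-chained tamper-evident log.

```python
import hashlib

class GovernedChip:
    """Illustrative model of on-chip compute-governance controls."""

    def __init__(self, approved_locations: list[str], flop_budget: float):
        self.approved_locations = set(approved_locations)
        self.flop_budget = flop_budget
        self.log_head = b"genesis"

    def _log(self, event: str) -> None:
        # Hash-chained log: altering any past entry changes the head,
        # so tampering is detectable (secure logging).
        self.log_head = hashlib.sha256(self.log_head + event.encode()).digest()

    def run_workload(self, location: str, flops: float) -> bool:
        if location not in self.approved_locations:
            self._log(f"DENY location={location}")
            return False                      # location verification
        if flops > self.flop_budget:
            self._log(f"DENY over-budget flops={flops}")
            return False                      # usage limit / safety interlock
        self.flop_budget -= flops
        self._log(f"ALLOW flops={flops}")
        return True

chip = GovernedChip(["dc-1"], flop_budget=1e24)
chip.run_workload("dc-1", 5e23)   # allowed
chip.run_workload("dc-2", 1e20)   # denied: unapproved location
chip.run_workload("dc-1", 9e23)   # denied: exceeds remaining budget
```

The point of the sketch is the policy structure, not the enforcement: making such checks unbypassable is precisely the hard hardware-security problem the section gestures at.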
### Export Controls
US semiconductor export controls are explicitly designed to restrict China's access to advanced compute. They are the most widely deployed compute-governance instrument to date.
## Limitations
### Algorithmic Efficiency Erosion
“The same compute achieves more capability over time”: Llama-3 8B outperforms Falcon 180B despite being a far smaller model. Reasoning and inference-time scaling improve capabilities without changing training compute at all. Static compute thresholds therefore become unreliable over time. This is the structural reason adaptive governance (if-then-commitments) is necessary.
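The erosion argument can be stated quantitatively. A sketch assuming, purely for illustration, that algorithmic efficiency doubles yearly; under that assumption a fixed physical-FLOP threshold admits exponentially growing effective compute:

```python
PHYSICAL_THRESHOLD = 1e25  # fixed regulatory line, in physical FLOPs
DOUBLING_YEARS = 1.0       # assumed efficiency-doubling time (illustrative)

def effective_compute(physical_flops: float, years_elapsed: float) -> float:
    """Capability-adjusted compute a run represents after efficiency gains."""
    return physical_flops * 2 ** (years_elapsed / DOUBLING_YEARS)

# Training just below the threshold, three years apart:
year0 = effective_compute(PHYSICAL_THRESHOLD, 0)  # 1e25 effective
year3 = effective_compute(PHYSICAL_THRESHOLD, 3)  # 8e25 effective
```

The same regulatory line thus permits an 8x more capable model within three years, which is why thresholds must be revised or tied to capability evaluations rather than frozen.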
### Domain-Specific Risks
Specialized models in narrow domains (bio, cyber) may develop dangerous capabilities below typical compute thresholds.
### Power Concentration
Restrictive controls accelerate concentration — only large orgs can afford frontier compute. Gap between large tech and academic researchers widens, reducing independent oversight.
### Inference Challenges
Trained models run on far less compute than their training required, so controlling the deployment of existing models is much harder than controlling new training runs.
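The asymmetry follows from the standard approximations: training costs roughly 6·N·D FLOPs, while generating a single token costs roughly 2·N. A sketch with an illustrative (hypothetical) model size:

```python
def train_flops(params: float, tokens: float) -> float:
    """Total training FLOPs via the 6*N*D approximation."""
    return 6 * params * tokens

def infer_flops_per_token(params: float) -> float:
    """FLOPs to generate one token via the 2*N approximation."""
    return 2 * params

# Hypothetical 70B-parameter model trained on 15T tokens:
ratio = train_flops(70e9, 15e12) / infer_flops_per_token(70e9)
# ratio = 3 * 15e12 = 4.5e13 tokens of inference per training-run budget
```

One training run's compute buys tens of trillions of inference tokens, so once weights exist, governance levers aimed at large compute concentrations lose most of their grip.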
### Distributed Training
Frontier training currently requires geographically concentrated compute because it is communication-bound. Algorithmic advances could split training across many smaller facilities, reducing the effectiveness of compute governance.
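The communication-bound claim can be illustrated: naive data-parallel training must synchronize every gradient each step, which is prohibitive over wide-area links. A simplified sketch with assumed numbers (real all-reduce schedules and compression change the constants, not the conclusion):

```python
def sync_seconds_per_step(params: float, bytes_per_param: int = 2,
                          bandwidth_gbps: float = 10) -> float:
    """Time to ship one full gradient over an inter-site link."""
    grad_bytes = params * bytes_per_param          # e.g. fp16 gradients
    return grad_bytes * 8 / (bandwidth_gbps * 1e9)  # bits / (bits per second)

# A hypothetical 70B-parameter model over a 10 Gbit/s wide-area link:
t = sync_seconds_per_step(70e9)  # 112 seconds per optimizer step
```

Inside a data center, interconnects are orders of magnitude faster, which is what currently keeps frontier training concentrated and therefore governable; research that shrinks this synchronization cost directly erodes the lever.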
## Strategic Role
Compute governance is not the only governance lever; it is the most effective initial screening mechanism, “identifying models warranting further scrutiny rather than serving as the sole regulatory determinant.”
Most effective when triggering downstream oversight:
- Notification requirements
- Risk assessments
- Capability evaluations
- Deployment licensing
Must integrate with corporate (frontier-safety-frameworks), national (eu-ai-act), and international (ai-safety-institute) initiatives. Standalone compute governance is insufficient against systemic risks.
## Connection to Wiki
- governance-problems — the criterion framework
- effective-compute — the technical decomposition
- ai-governance — parent concept
- eu-ai-act — operational compute thresholds
- atlas-ch1-capabilities-09-appendix-forecasting — chip supply-chain economics
- summary-substack-benjamin-todd — Netherlands/ASML positioning argument
- information-security — chip-level security primitives
- atlas-ch4-governance-02-compute-governance — primary source
## Related Pages
- governance-problems
- effective-compute
- ai-governance
- eu-ai-act
- ai-safety-institute
- frontier-safety-frameworks
- if-then-commitments
- information-security
- ai-safety-atlas-textbook
- atlas-ch1-capabilities-09-appendix-forecasting
- summary-substack-benjamin-todd
- atlas-ch4-governance-02-compute-governance
## Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- AI Safety Atlas Ch.1 — Appendix: Forecasting — referenced as [[atlas-ch1-capabilities-09-appendix-forecasting]]
- AI Safety Atlas Ch.4 — Compute Governance — referenced as [[atlas-ch4-governance-02-compute-governance]]