Clarifying “wisdom”: Foundational topics for aligned AIs to prioritize before irreversible decisions
Source
- Link: https://www.lesswrong.com/posts/EyvJvYEFzDv5kGoiG/clarifying-wisdom-foundational-topics-for-aligned-ais-to
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
  - agent-foundations — Theory