Machine Intelligence Research Institute (MIRI)

The Machine Intelligence Research Institute (MIRI) is a research organization focused on ensuring that artificial intelligence systems are safe and beneficial. Founded by eliezer-yudkowsky and led by nate-soares as executive director, MIRI has been one of the earliest and most persistent voices warning about the risks of advanced AI.

History and Mission

MIRI was established to conduct technical research on the long-term safety of artificial intelligence. It predates much of the current AI safety ecosystem, having begun its work when concern about advanced AI risk was considered fringe by mainstream computer science. Beyond its research, the organization also distributes eliezer-yudkowsky’s Rationality: From AI to Zombies through intelligence.org, offering ebook versions on a pay-what-you-want basis.

MIRI’s intellectual stance is characterized by a relatively pessimistic assessment of alignment difficulty. The organization has consistently argued that aligning advanced AI systems is a deep technical challenge, and that current approaches, including techniques such as RLHF, may be insufficient to ensure the safety of superintelligent systems. This places MIRI at the more cautious end of the ai-safety spectrum.

Key Contributions

MIRI and its researchers have been instrumental in developing or popularizing several foundational concepts in ai-alignment:

  • The orthogonality thesis — An agent’s level of intelligence and its final goals can vary independently; a superintelligent AI need not share human values.
  • Instrumental convergence — A wide range of final goals give rise to similar instrumental sub-goals (self-preservation, resource acquisition), making advanced AI potentially dangerous regardless of its terminal goals.
  • The difficulty framing — The consistent argument that alignment is not merely hard but may be hard in ways that current research paradigms cannot address.

nate-soares’s technical discussions on key alignment difficulties, published on lesswrong and the alignment-forum, represent MIRI’s perspective that the field needs to grapple honestly with the possibility that existing approaches may be fundamentally inadequate.

Position in the AI Safety Ecosystem

MIRI occupies a distinctive niche: it is more technically focused than policy-oriented organizations, more pessimistic than industry safety teams like those at anthropic, and more research-oriented than the career-advising work of 80000-hours. Its willingness to challenge prevailing optimism about alignment progress makes it an important counterweight in the field.

The organization’s intellectual influence extends well beyond its direct research output. Through lesswrong, the alignment-forum, and the broader rationalist community, MIRI’s framing of the alignment problem has shaped how an entire generation of researchers thinks about AI risk.

Significance for This Wiki

MIRI represents the deep-technical and cautious pole of the AI safety movement. Understanding its perspective — that alignment may be harder than most people think, and that current approaches may not scale — is essential for understanding the full range of positions in the ai-alignment debate.