CeSIA — French Center for AI Safety

CeSIA (Centre pour la Sécurité de l’IA) is France’s leading AI safety think tank and expertise center. Established in May 2024 in Paris by EffiSciences, it describes itself as “the French voice of AI safety” and works to prevent major risks from advanced AI through technical research, policy advocacy, and education.

Mandate

CeSIA’s public mission covers:

  1. Policy outreach to French and EU policymakers — contributed to the EU AI Act Code of Practice
  2. Technical research, including the BELLS benchmark for evaluating LLM safeguard reliability
  3. Education and field-building — ML4Good bootcamps, university courses, mentoring
  4. Partnerships with the OECD on AI safety

Notable Outputs

  • BELLS benchmark (2024) — assesses the reliability of LLM supervision systems. Found that every major safeguard fails to detect jailbreaks effectively on at least one dataset, with detection rates dropping below 34.2%, directly evidencing the limits of current safeguards.
  • Engagement with the EU AI Action Summit (Paris, February 2025) — CeSIA published its analysis “The Summit for the Inaction on AI Safety” arguing the summit underdelivered on safety commitments.
  • Co-authored the Global Call for AI Red Lines (September 2025), alongside The Future Society and the Center for Human-Compatible AI (CHAI).
  • Active engagement with the EU General-Purpose AI Code of Practice under the EU AI Act.

Significance

CeSIA fills, for France, the institutional gap that AIS Brussels and ENAIS aim to address in their own contexts: a permanent, well-staffed AI safety organization with a clear research agenda and policy access. It is the closest French analogue to the UK’s Centre for Long-Term Resilience.

Connection to This Wiki

  • The French node in the European AI safety institutional landscape mapped in summary-ai-xrisk-belgium-europe.
  • Co-author of the AI Red Lines call cited in ai-safety as a 2025 governance milestone.
  • Originator of EffiSciences’ ML4Good bootcamp programme.