Currently Running:

  • AI Strategy Series: a three-part workshop introducing students to key strategic issues in AI safety and governance. Running on the Saturdays of Weeks 4 (8 November), 6 (22 November), and 7 (29 November). Learn more and apply here.

  • Governance Roundtable: a fortnightly discussion group exploring the latest developments in frontier AI and their implications for governance and policy. Alternate Tuesdays, 5pm, starting in Week 5. Apply here.

  • Technical Roundtables: a weekly meeting on Monday evenings for researchers to share new developments in technical AGI safety and discuss them over dinner. Apply here.

  • Weekly “Office Hours”: Thursdays 1pm. Grab a coffee on us at Common Ground Cafe! This is an informal “ask us anything” forum where you can drop by to chat about AI safety, learn more about OAISI, or just meet members of our community.

Previous Programmes:

  • A Hackathon, co-hosted with Encode Oxford, in October 2025.

  • ARBOx is our ML safety bootcamp, which most recently ran in January and June 2025.

  • A control reading group following this syllabus. If you’d be interested in setting up your own small AI safety-related reading group, please let us know via this form, and we’d be excited to chat.

  • AI safety projects run in a group environment (pizza and project ideas provided!) every Thursday at the Computer Science department.

  • A series of introductory talks with Neel Nanda, Orshi Dobe, Suryansh Mehta, and Michael Aird, held in October 2024 and October 2025.

  • Our Strategy Fellowship, which ran in Michaelmas 2024 and helped participants deepen their understanding of AI safety metastrategy.

  • A Narrating AI Futures symposium focused on communication, media, and advocacy in AI safety, held in February 2025.

  • An AI Safety Fellowship exploring the fundamentals of AI safety from technical and governance perspectives.
