Currently running:
- Technical Roundtables: a weekly Monday-evening meeting where researchers share new developments in technical AGI safety and discuss them over dinner. Apply here.
Catastrophic risk from advanced artificial intelligence may be the defining issue of our time.
OAISI’s role is to support the AI safety community in Oxford. You can find out more about our mission here, and see recommended resources on how to get involved here.
If you’re based in Oxford and interested in AI safety, or you’re an AI safety researcher visiting for a short time, do send us an email at [email protected].
To get regular updates on what we’re doing, sign up to our mailing list.
Go to our get in touch page to book a 1:1 session with our committee to chat about AI safety, OAISI, and how you can get involved.
Previous:
- ARBOx is our ML safety bootcamp; the second iteration ran from 30 June to 11 July. Express interest in future iterations here.
- Control reading group: following this syllabus. If you’d be interested in setting up your own small AI safety-related reading group, please let us know with this form — we’d be excited to chat!
- Projects: work on an AI safety project in a group setting (pizza and project ideas provided!) every Thursday, 6–9pm during term, at the Computer Science department. Will resume in MT25.
- A series of introductory talks with Orshi Dobe, Suryansh Mehta, and Michael Aird at the beginning of October.
- Our Strategy Fellowship, which ran in Michaelmas, where participants deepened their understanding of AI safety metastrategy.
- A symposium focused on communication, media, and advocacy in AI safety.
- An AI Safety Fellowship exploring the fundamentals of AI safety from technical and governance perspectives.