About Us

Our background

Oxford AI Safety Initiative, or OAISI (“oh-ay-see”), was spun out of OxAI’s Safety and Governance team in 2024. We believed the AI safety community at Oxford would benefit from an independent organisation focused on supporting its work.

Oxford is home to many people capable of contributing to AI safety who haven’t yet been introduced to the field or aren’t currently working on it. This includes technical researchers, generalists, founders, policymakers, and others. The goal of our events and programming is to help you deepen your understanding and make a contribution.

Our executive committee

FAQs & Resources

  • As a society, we primarily focus on catastrophic risks posed by advanced AI systems. For more detail on what that entails, this paper provides a good overview.

  • We recommend aisafety.info as a good place to start - it offers a series of introductory articles.

    If you’re looking for an introduction to different concepts in AI Safety, you might find Rob Miles’ YouTube channel useful. For something more in-depth, up-to-date, and structured, BlueDot Impact runs an excellent introductory course - you can browse the curriculum here.

    We also encourage you to browse the selection of courses listed here. See also the resources we list below.

  • Yes! Some high-level familiarity with the AI training process and key concepts in AI Safety is very helpful, but you can acquire these without formally studying AI or ML. As we discuss here, “AI safety is a sociotechnical issue: we support both governance and technical work, and we run programmes to build skills in both”. Some of our activities are aimed at experienced researchers, but we also have more introductory programmes which assume no prior technical knowledge.

  • You might want to check whether it’s close to one of the objections considered on this site, in this article or in the Appendix of this paper. If you’d like to chat about any other uncertainties you have about AI Safety, please do get in touch.

  • For frequently updated compilations of resources, you might be interested in Arkose’s, BlueDot Impact’s, and AISafety.com’s lists.

    If you want to keep up to date with developments in transformative AI, and AI Safety in particular, we recommend the Don’t Worry About the Vase, Transformer, ACX, and Import AI blogs and the 80,000 Hours, Dwarkesh, AXRP, and Inside View podcasts.

  • If you haven’t done so already, we recommend BlueDot Impact’s AI Safety Fundamentals for understanding AI safety from first principles, especially the courses and readings that focus on catastrophic risks. You can either formally enrol in these (they operate on a cycle) or self-study.

    If you’re already familiar with the basics, consider applying to FIG, MARS, or SPAR to start building your research portfolio.

    If you’re just getting into AI Safety at the start of the long vacation, please don’t hesitate to reach out to us! We might be able to put you in touch with formal opportunities, introduce you to safety researchers doing some cool projects, or provide more informal guidance, so that you’ll be able to hit the ground running at the start of Michaelmas! We also recommend perusing our Resources section above.

  • Feel free to put your name in our visiting OAISI form!

  • If you have any particular feedback on how we can improve our community and programming, please let us know on our feedback form.

  • If you are interested in helping out in a more intensive, hands-on capacity, applications for our General Committee are open for the 2025-26 academic year, and we would love to see you apply!