Resources and FAQ
What kind of AI-related risks are OAISI members most concerned about?
As a society, we primarily focus on catastrophic risks posed by advanced AI systems. For more detail on what that entails, this paper provides a good overview.
Where can I learn the basics?
We recommend aisafety.info as a good place to start - it offers a series of introductory articles.
If you’re looking for an introduction to different concepts in AI Safety, you might find Rob Miles’ YouTube channel useful. For something more in-depth, up-to-date and structured, BlueDot Impact run an excellent introductory course - you can browse the curriculum here.
We also encourage you to browse the selection of courses listed here. See also the resources we list below.
Can I get involved if I don’t have a technical background?
Yes! Some high-level familiarity with the AI training process and key concepts in AI Safety is very helpful, but you can acquire that familiarity without formally studying AI or ML. As we discuss here, “AI safety is a sociotechnical issue: we support both governance and technical work, and we run programmes to build skills in both”. Some of our activities are aimed at experienced researchers, but we also have more introductory programmes which assume no prior technical knowledge.
I heard this specific argument against working on AI Safety - has anyone in the AI Safety community considered it?
You might want to check whether it’s close to one of the objections considered on this site, in this article or in the Appendix of this paper. If you’d like to chat about any other uncertainties you have about AI Safety, please do get in touch.
Resources
For frequently updated compilations of resources, you might be interested in Arkose’s, BlueDot Impact’s, and AISafety.com’s lists.
If you want to keep up to date with developments in transformative AI, and AI Safety in particular, we recommend the Don’t Worry About the Vase, Transformer, ACX, and Import AI blogs and the 80,000 Hours, Dwarkesh, AXRP, and Inside View podcasts.
What can I be doing over the vacation to learn more about AI Safety?
- If you haven’t done so already, we recommend BlueDot Impact’s AI Safety Fundamentals for understanding AI safety from first principles, especially the courses and readings that focus on catastrophic risks. You can either formally enrol in these (they operate on a cycle) or self-study.
- If you’re already familiar with the basics, consider applying to FIG, MARS, or SPAR to start building your research portfolio.
- If you’re just getting into AI Safety at the start of the long vacation, please don’t hesitate to reach out to us! We might be able to put you in touch with formal opportunities, introduce you to safety researchers working on interesting projects, or offer more informal guidance, so that you can hit the ground running at the start of Michaelmas. We also recommend perusing our Resources section above.