OAISI AI Safety Fellowship HT25
We’re running our flagship introductory fellowship programme, giving selected fellows the opportunity to explore the strategic, technical, and governance issues posed by advanced AI systems. This will (provisionally) replace the Strategy Fellowship run in Michaelmas 2024, which had similar aims and scope.
Apply here to participate or facilitate by January 22nd (Wednesday of Week 1)
Following a 3-3 structure, our fellows will explore the fundamentals of AI safety for the first three weeks before specialising in either our technical or our governance track for the final three weeks!
Details:
- Expect to commit 2 hours/week for six weeks, starting in Week 2 (the final three weeks are conditional on acceptance into Part II)
- Each cohort will comprise 5-8 participants, guided by a facilitator.
- All reading is completed within the session; there is no time commitment outside the sessions themselves.
Structure:
- The first three weeks (Part I) cover high-level strategic considerations around risks from advanced AI systems.
- The second three weeks (Part II) cover the most promising avenues for risk mitigation, from either a technical or a governance perspective. Continuing fellows join a specialist cohort in Part II based on their background and interests.
We expect to admit participants from a range of backgrounds, including:
- Those with a technical/ML or policy background who want to more deeply understand and evaluate the case for working on extreme risks from AI.
- Those who are concerned by extreme risks from AI, and want to systematically develop their understanding of the field.
We’re also looking for facilitators!
- Facilitators can choose to facilitate one or both parts of the programme (3 or 6 weeks). We pay all facilitators an hourly rate.
- Facilitators are responsible for arranging meeting times with participants, answering logistical or follow-up object-level questions on Slack (within reason), and providing printed readings and snacks. Meeting locations TBC.