Strategy Fellowship
Applications have now closed for the MT24 iteration. We hope to run the strategy fellowship again in HT25. Sign up to our mailing list to stay updated!
If you’d like to apply for or attend something that has closed, let us know! We may still have space for you to join late, or we may be running it again very soon. Fill this in and we’ll let you know.
Over 6 weeks, we’ll discuss key strategic parameters to help you develop your own approach to AI risk.
- A 6-week reading group, running from Week 3 to Week 8 of Michaelmas ’24
- 2 hours of commitment per week
- Exploring timelines, scaling, institutions and technical approaches in AI safety
Target audience:
- Those currently working on technical AI safety or AI governance, who want to understand how their work interacts with timelines and other strategic parameters
- Students and researchers with some familiarity with the field of AI safety who want to understand its various approaches in a principled way
Content (subject to change):
- Timelines and scaling: the scaling hypothesis; biological anchors; the direct approach; trends in compute; takeoff models
- Threat models: what the labs are trying to build; AI takeover; the difficulty of alignment
- The state of play: the chip supply chain; the labs’ plans; the role of researchers
- Partisan approaches: Situational Awareness; differences between the US and China
- Slowing and containing: plausibility of slowing; race dynamics