Applications are now open. Apply by 8 May 2026.
We're excited to be running the fourth iteration of ARBOx (Alignment Research Bootcamp Oxford), a two-week intensive programme designed to rapidly build skills in AI safety. This year, for the first time, we're running two concurrent streams.
ARBOx4 is an in-person, full-time programme running from 28 June to 10 July 2026 at Trajan House in Oxford. We have run three successful ARBOx iterations previously, and are excited to re-open this opportunity for another promising cohort of motivated individuals. ARENA and ARBOx alumni have gone on to become MATS scholars, LASR participants, AI safety engineers at organisations like Apollo Research, Anthropic, METR, and OpenAI, and even founders of their own AI safety initiatives.
Programme Details
We provide Central Oxford accommodation for non-Oxford residents, lunch, and potential support with travel expenses. The programme is free for all admitted participants.
1. Technical Stream
During the programme, you'll follow a compressed version of the ARENA syllabus: build gpt-2-small from scratch, learn interpretability techniques, understand RLHF, and replicate a paper or two. Mornings will feature lectures covering aspects of the syllabus, followed by afternoon pair-programming with TA support. During the lunch break, there will be short talks from experts in the field, and we'll host socials in the evenings. By the end, you'll have hands-on experience with ML and safety techniques, a concrete mini-project, and clear next steps for contributing to AI safety.
Syllabus
We'll cover:
Building gpt-2-small from scratch
Attention
Replicating a paper or two (e.g. Redwood's IOI paper)
A brief introduction to RL and RLHF
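As a taste of the technical material: the core operation you'd implement when building gpt-2-small from scratch is scaled dot-product attention. Here's a minimal single-head sketch in NumPy (the function name and tiny shapes are illustrative, not taken from the ARBOx materials):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key dimension
    return weights @ V                            # weighted sum of value vectors

# Tiny example: 3 tokens, a 4-dimensional head
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per query token
```

In the full model this is wrapped with learned query/key/value projections, a causal mask, and multiple heads; the programme builds all of that up step by step.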
Who should apply?
We're looking for applicants with basic familiarity with linear algebra, Python programming, and AI safety (e.g., having completed AI Safety Fundamentals or another fellowship). Admission is competitive, though: many more applicants meet the technical requirements than we have places.
You do not need to be an Oxford student to participate (though we'd love to see a large number of applications from Oxford students!). The programme is designed to upskill participants in ML safety, targeting those who would benefit from this training, regardless of their background.
We'd encourage you to apply if you're eager to work on technical AI safety and satisfy the technical prerequisites. If you have relevant graduate or professional experience, this is also a plus! It's easy to underestimate your abilities and your potential to contribute to the field, and so we suggest you err on the side of applying.
2. Generalist Stream (Pilot)
During the programme, you'll work through a series of structured, hands-on activities designed to build the skills and context needed for high-impact generalist work in AI safety. Each day combines taught content — a guest speaker or organiser-led session — with a collaborative afternoon activity: think writing sprints, tabletop exercises, grant evaluation tasks, and mini-hackathons. During lunch, there will be short talks from practitioners in the field, and we'll host socials in the evenings. By the end, you'll have a clearer sense of where you can have the most impact, concrete experience across a range of generalist AI safety tasks, and a network of peers and professionals to carry that forward with.
Syllabus
We'll cover topics like:
AI risk communication
Project and grant evaluation
Org design and scoping
Admissions, hiring, and talent
Who should apply?
We're looking for broadly value-aligned people with strong generalist potential: advanced undergrads, master's students, and early-career professionals. You don't need a technical background. If you're not sure whether this is the right fit, err on the side of applying.
Note that whether we run this stream is still undecided and will depend on application interest.
If you need a decision by a certain date (e.g. for visa reasons, or because you have a competing offer), please mention this in your application.
…
Please share this with anyone in your network who might be interested - applications are open to all! If you have someone in mind, feel free to mention them in our referral form.
If you would like to express interest in future iterations of ARBOx, feel free to do so here.