Gracie Green - Operations Lead

Hiya, I'm Gracie. I've just graduated in Psychology and Philosophy at Oxford, where I focussed on AI, consciousness, and ethics. Within AI safety, I've generally taken on operations roles: I've worked at OAISI since my first year of university (when we were still OxAI's SGT), as well as at the Cambridge AI Safety Hub and ARENA. I've carried out research into philosophical theories of alignment, inductive bias, critical thinking, and the non-identity problem. I'm always around for a chat and very up for meeting new people, so feel free to reach out if you'd like to get to know OAISI a little better!

Nick Marsh - Strategic Lead

Hi, I'm Nick! I'm a third-year CS and Philosophy student. I'm interested in a bunch of areas in AI safety - I've done some mechanistic interpretability and priorities work previously. I'm a research assistant in the Human Information Processing lab here in Oxford, and I run [orchard], a group for people to work on their passion projects. Feel free to reach out if you want to chat about AI safety, philosophy, or anything else!

Louis Thomson - Committee Member

Hi, I'm Louis! I'm in my masters year doing Computer Science and Philosophy at Oxford, and my main interests include AI [control / safety / ethics], game theory, and cooperative AI. I've worked on the following projects (most recent first): cooperation and alignment in multi-agent games (current project); AI-Control games; behavioural evaluations of deception in LLMs; and active learning in LLMs.
I'm always up for talking about all things CS, philosophy (particularly AI-relevant topics but also continental philosophy), and other entirely-not-work-related stuff like music and games!

Rohan Selva-Radov - Committee Member

I'm Rohan, a second-year undergraduate studying PPE at Merton.
In the past I've worked with Daniel Kokotajlo's team developing concrete scenarios about what AI developments over the next five years might mean for the world, and currently I'm spending some time thinking about how we can reduce the risks of malevolent actors misusing future generations of frontier AI models.
I'd be excited to chat about which policy & governance levers seem most promising, and to get a better understanding of what the theories of victory are for different approaches in technical alignment.

James Lester - Committee Member