
Aaron Sandoval

Head of AI Safety

Aaron Sandoval is a researcher in AI control, currently working on applying factored cognition to control protocols for scheming models. With support from Redwood Research, he began this research agenda through the Pivotal Fellowship and is continuing with several projects, both through Algoverse and independently. He previously worked on LLM evals and on developing open-source ML libraries for maze datasets.

Aaron has experience as a Software Engineer at SpaceX and as a Teaching Assistant at Cornell University, where he built strong foundations in both applied engineering and academic instruction.

At Algoverse, Aaron manages fellowship operations and provides mentorship to students conducting cutting-edge AI safety research.

Begin Your Journey

The application takes 10 minutes and is reviewed on a rolling basis. We look for strong technical signal (projects, coursework, or competition results) and a genuine curiosity to do real research.

If admitted, you will join a structured pipeline with direct mentorship, taking your work from ideation to submission at top conferences such as NeurIPS, ACL, and EMNLP.