Spring Deadline: Sunday, February 15 at 11:59 pm PT. Click here to apply.

Shape the future of ML and AI

The leading AI research program for college and high school students.

Personalized mentorship from researchers at:

Google DeepMind
Meta
Stanford
Anthropic
Berkeley

Publish at Top AI Conferences

1

Identify Pressing Problems

Ideate an impactful research topic for conference submission

2

Start Impactful Research

Develop and implement your own ML methods with mentorship from expert researchers

3

Publish at a Top AI Conference

Submit your completed paper to a top AI conference for peer review

Student Spotlight

Abhay Gupta and Philip Meng named 2025 Davidson Fellows

Abhay and Philip were honored with the highly selective Davidson Fellows Scholarship, receiving a $25,000 scholarship for their research. They were two of the 20 recipients selected from 1,200+ applicants, recognized for their exceptional research and impact.

Their paper, EnDive, was accepted to the Findings of EMNLP, one of the flagship conferences in NLP. Furthermore, their work has been cited by researchers at Microsoft, Google, Stanford, Carnegie Mellon, Columbia, Oxford, the University of Washington, and other institutions.

After Algoverse, Philip was admitted to Harvard University.

After Algoverse, Abhay secured internships at Stanford, MIT, and Harvard (reference: LinkedIn), despite entering the program with no prior experience in AI or research.

Read More Publications
Student Spotlight
Tim, Ryan, Ayush, and Kaylee's paper was featured in OpenAI's PaperBench

In an outstanding recognition of their cutting-edge work, their paper, Semantic Self-Consistency, was featured among the 20 state-of-the-art AI research papers in OpenAI's PaperBench. OpenAI handpicked these 20 papers from ICML and NeurIPS and reached out to collaborate with our student author, Tim.

Earlier, their paper was also accepted at NeurIPS MATH-AI. Notably, after their NeurIPS presentation, two of the four researchers were admitted to Stanford University.*

*The other two researchers were (1) already accepted to college at the time they joined the project and (2) based in Germany.

Read More Publications

Our Community Spotlights

Hear from students who have published research through our program

View More Spotlights
Srivishnu Ramamurthi

Hired at OpenAI (Software Engineer)

Srivishnu's Algoverse research was accepted to the NeurIPS 2025 Efficient Reasoning Workshop, helping strengthen the track record that led to his full-time software engineer offer from OpenAI.

Algoverse provided the technical foundations I was missing and the "hidden" knowledge around how research actually works — which conferences matter, how peer review works, and how to meet real research standards. It gave me the structure to take my first steps as a researcher.

McNair Shah

Selected for the Anthropic Fellowship

McNair's AI safety research at Algoverse was accepted to the NeurIPS 2025 Mechanistic Interpretability Workshop, contributing to his selection for the Anthropic AI Safety Fellowship.

Algoverse is a great program; the mentors and many of the students in it are incredibly talented. Kevin Zhu is a great program director who's been able to make the program an actual incubator for future researchers that is starkly different from a lot of other programs!

Ryan Li

Admitted to Stanford University

Through Algoverse, Ryan earned a NeurIPS workshop acceptance, was featured by OpenAI, and was admitted to Stanford.

The lectures, notebooks, and mentorship were actual industry-level quality, and really put into perspective how legit research looks compared to my old stuff. I feel way more confident about paper-reading, writing, and running actual experiments after all this, and seeing the paper finally get accepted was super rewarding… The program was genuinely the highest ROI thing I've done in my entire high school career.

Santiago Torres-Garcia

Admitted to UC Berkeley (Transfer)

Santiago highlighted his Algoverse research, accepted at an ACL workshop, in his transfer application from community college to UC Berkeley.

Algoverse offered an incredible opportunity minimally available to community college students. The research experience I gained strengthened my UC application, contributing to my acceptance into UC Berkeley's EECS program as a transfer student, a lifelong dream of mine. Our paper was accepted into ACL's REALM'25 Workshop, a prestigious peer-reviewed venue in the field of NLP. This will help me stand out as I pursue research roles, internships, and job opportunities.

Research at Algoverse

Rigorous mentorship. Real-world publication. Our students conduct original AI research and present at top-tier venues alongside Ph.D. researchers.

Equity in Education

The AI Research Program

Immerse yourself in the process of real-world AI research: conduct a literature review, develop and implement your own ML algorithms, communicate your results in a research paper, and submit your work to top conference workshops at NeurIPS, EMNLP, and ACL.

AI Research Program - Student research
AI Research Program - Collaboration
AI Research Program - Conference
AI Research Program - Conference presentation
AI Research Program - Workshop
AI Research Program - Team

Conference Acceptances

Algoverse research teams have consistently achieved publication success at top AI venues such as NeurIPS, EMNLP, and ACL—conferences that primarily feature work from Ph.D. students and professional researchers at leading industry and academic labs. Acceptance rates at these conferences are typically 30-50% for submissions from established research institutions. Algoverse's research teams have achieved comparable results, reflecting the program's emphasis on rigorous mentorship and independent research quality.

To read more about our research outcomes and conference publications, visit our Research page.

Fall 2024: 68% Conference-Accepted Teams

Winter 2024: 70% Conference-Accepted Teams

Spring 2025: 71% Conference-Accepted Teams

Summer 2025: Results Pending

Recent Conference Publications

Recent publications and acceptances at flagship AI conferences (e.g., NeurIPS, ICML, ACL), selected through competitive peer review.

Selected for an Oral Presentation award at the NeurIPS 2025 UniReps Workshop

Shared Parameter Subspaces and Cross-Task Linearity in Emergently Misaligned Behavior

Daniel Aarao Reis Arturi, Eric Zhang, Andrew Ansah, Kevin Zhu, Ashwinee Panda, Aishwarya Balwani

Recent work has discovered that large language models can develop broadly misaligned behaviors after being fine-tuned on narrowly harmful datasets, a phenomenon known as emergent misalignment (EM). However, the fundamental mechanisms enabling such harmful generalization across disparate domains remain poorly understood. In this work, we adopt a geometric perspective to study EM and demonstrate that it exhibits a fundamental cross-task linear structure in how harmful behavior is encoded across different datasets. Specifically, we find a strong convergence in EM parameters across tasks, with the fine-tuned weight updates showing relatively high cosine similarities, as well as shared lower-dimensional subspaces as measured by their principal angles and projection overlaps.

Selected for a Spotlight award at the NeurIPS 2025 Mechanistic Interpretability Workshop

Scratchpad Thinking: Alternation Between Storage and Computation in Latent Reasoning Models

Sayam Goyal, Brad Peters, María E. Granda, Akshath V. Narmadha, Dharunish Yugeswardeenoo, Cole Blondin, Callum S. McDougall, Sean O'Brien, Ashwinee Panda, Kevin Zhu

Latent reasoning language models aim to improve reasoning efficiency by computing in continuous hidden space rather than explicit text, but the opacity of these internal processes poses major challenges for interpretability and trust. We present a mechanistic case study of CODI (Continuous Chain-of-Thought via Self-Distillation), a latent reasoning model that solves problems by chaining "latent thoughts." Using attention analysis, SAE-based probing, activation patching, and causal interventions, we uncover a structured "scratchpad computation" cycle: even-numbered steps serve as scratchpads for storing numerical information, while odd-numbered steps perform the corresponding operation.

Begin Your Journey

Apply in one step. The application is straightforward and takes about 10 minutes. We review submissions on a rolling basis and reach out quickly if there's a fit.

We look for clear signals of technical ability (projects, coursework, competitions, or strong research curiosity), high agency and follow-through, and a genuine curiosity to do real research.

If you're ready to ship experiments and iterate fast, you'll thrive here. If admitted, you'll join a structured research pipeline with mentorship that keeps progress moving from ideation → implementation → conference submission.