Research at Algoverse
Learn about our mission, explore our acclaimed conference publications, and delve into past student research papers.
Our Commitment to Quality Research
Algoverse AI Research is dedicated to empowering students to create authentic and impactful AI research. Our distinctive emphasis on quality and process enables our students to produce exceptional research published at leading NLP conferences worldwide. We strive to push the boundaries of large language models (LLMs) on standard benchmarks while pioneering machine learning applications across diverse disciplines. This commitment to innovation and excellence sets us apart from other programs.
Our PhD mentors have extensive experience conducting cutting-edge research at top AI institutions and research labs around the globe. They are deeply invested in each student's project, providing essential mentorship in scoping research proposals, implementing code, and academic writing. Through this guidance, our students are uniquely equipped to produce high-quality research papers and successfully navigate the publication process at prestigious conferences. Our students' past papers have been cited by researchers at Microsoft, Oxford, and the University of Washington.
Conference Publications
Our NeurIPS Publications
Neural Information Processing Systems (NeurIPS) is widely recognized as the most prestigious conference in artificial intelligence and machine learning. Publications at NeurIPS represent groundbreaking contributions and are commonly associated with leading universities and industry leaders like Google DeepMind. See more at the NeurIPS official website or view its ranking via Google Scholar.
Note for high schoolers: acceptance at NeurIPS is significantly more difficult to achieve than success at a high school science fair. Fewer than 0.2% of NeurIPS authors are high school students.
Translation Bias and Accuracy in Multilingual LLMs for Cross-Language Claim Verification
Accepted to Attribution @ NeurIPS 2024 in Vancouver, Canada
Authors: Aryan Singhal, Veronica Shao, Gary Sun, Ryan Ding
QIANets for Reduced Latency and Improved Inference Times in CNN Models
Accepted to Compression @ NeurIPS 2024 in Vancouver, Canada
Authors: Zhumazhan Balapanov, Edward Magongo, Vanessa Matvei, Olivia Holmberg
Semantic Self-Consistency: Enhancing Language Model Reasoning via Semantic Weighting
Accepted to MathAI @ NeurIPS 2024 in Vancouver, Canada
Authors: Tim Knappe, Ryan Li, Ayush Chauhan, Kaylee Chhua
Fine-Tuning Language Models for Ethical Ambiguity
Accepted to SoLaR @ NeurIPS 2024 in Vancouver, Canada
Authors: Pranav Senthilkumar, Visshwa Bala, Prisha Jain, Aneesa Maity
NusaMT-7B: Machine Translation for Low-Resource Indonesian Languages with LLMs
Accepted to SoLaR @ NeurIPS 2024 in Vancouver, Canada
Author: William Tan
DiversityMedQA: Assessing Demographic Biases in Medical Diagnosis using LLMs
Accepted to AIM-FM @ NeurIPS 2024 in Vancouver, Canada
Accepted to EMNLP Positive Impact Track 2024 in Miami, Florida
Authors: Rajat Rawat, Hudson McBride, Rajarshi Ghosh, Dhiyaan Nirmal, Jong Moon, Dhruv Alamuri
AAVENUE: Detecting LLM Biases on NLU Tasks in AAVE via a Novel Benchmark
Accepted to NeurIPS High School Track 2024 in Vancouver, Canada
Accepted to EMNLP Positive Impact Track 2024 in Miami, Florida
Authors: Abhay Gupta, Philip Meng, Ece Yurtseven