
MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered

December 1, 2025

Accepted to Building Trust in LLMs @ ICLR 2025

Authors: Imran Mirza, Cole Huang, Ishwara Vasista, Rohan Patil

We introduce MALIBU (Multi-Agent LLM Implicit Bias Uncovered), a benchmark for evaluating implicit biases in multi-agent LLM systems. MALIBU systematically probes how biases emerge, amplify, and propagate when multiple LLM agents interact in collaborative decision-making scenarios. Our framework reveals that multi-agent configurations can amplify individual model biases by 15-40% compared to single-agent baselines.
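The amplification comparison described above can be sketched in miniature. This is a hypothetical illustration only, not MALIBU's actual implementation: `query_agent` is a toy stub standing in for a real LLM call, and the deliberation rule (each round nudges an agent toward the running consensus) is an assumption made so the example runs without any API.

```python
# Toy sketch of measuring bias amplification in a multi-agent setup.
# All names and dynamics here are hypothetical stand-ins, not MALIBU's code.

def query_agent(prompt: str, persona_hint: float) -> float:
    """Return a score in [0, 1]; persona_hint models a small latent bias."""
    return min(1.0, max(0.0, 0.5 + persona_hint))

def single_agent_bias(hint_a: float, hint_b: float) -> float:
    """Bias = |score(group A) - score(group B)| for one isolated agent."""
    return abs(query_agent("rate candidate A", hint_a)
               - query_agent("rate candidate B", hint_b))

def multi_agent_bias(hint_a: float, hint_b: float, n_agents: int = 3) -> float:
    """Agents deliberate in sequence; each blends its own judgment with the
    running consensus, so an initial skew can compound across rounds."""
    score_a, score_b = 0.5, 0.5
    for _ in range(n_agents):
        score_a = 0.5 * score_a + 0.5 * query_agent(
            "rate candidate A", hint_a + (score_a - 0.5))
        score_b = 0.5 * score_b + 0.5 * query_agent(
            "rate candidate B", hint_b + (score_b - 0.5))
    return abs(score_a - score_b)

single = single_agent_bias(0.05, -0.05)
multi = multi_agent_bias(0.05, -0.05)
amplification = (multi - single) / single * 100  # percent increase over baseline
```

With these toy dynamics, the multi-agent score gap exceeds the single-agent gap, mirroring the paper's finding that interaction can compound individual-model bias rather than average it out.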
