
MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered

December 1, 2025

Accepted to Building Trust in LLMs @ ICLR 2025

Authors: Imran Mirza, Cole Huang, Ishwara Vasista, Rohan Patil

MALIBU is a novel benchmark developed to assess the degree to which LLM-based multi-agent systems implicitly reinforce social biases and stereotypes. AI models complete tasks within predefined contexts, and their responses are evaluated by an LLM-based multi-agent judging system in two phases. In the first phase, judges score responses labeled with specific demographic personas on four metrics. In the second phase, judges compare paired responses assigned to different personas, scoring each and selecting the superior response. The study quantifies biases in LLM-generated outputs and finds that bias mitigation may favor marginalized personas over true neutrality, underscoring the need for nuanced bias detection, balanced fairness strategies, and transparent evaluation benchmarks in multi-agent systems.
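The two-phase judging pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the metric names, the `judge` callable, and the tie-breaking rule are all assumptions, and a stub judge stands in for the actual LLM-based judges.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical metric names; the post does not name the four metrics.
METRICS = ["metric_1", "metric_2", "metric_3", "metric_4"]

@dataclass
class Response:
    persona: str  # demographic persona label attached to the response
    text: str

def phase1_scores(response: Response,
                  judge: Callable[[Response, str], float]) -> Dict[str, float]:
    """Phase 1: a judge scores one persona-labeled response on each metric."""
    return {metric: judge(response, metric) for metric in METRICS}

def phase2_compare(resp_a: Response, resp_b: Response,
                   judge: Callable[[Response, str], float]) -> Response:
    """Phase 2: compare a pair of responses assigned different personas,
    scoring both and selecting the higher-scoring one (ties go to A here,
    an arbitrary choice for this sketch)."""
    total_a = sum(phase1_scores(resp_a, judge).values())
    total_b = sum(phase1_scores(resp_b, judge).values())
    return resp_a if total_a >= total_b else resp_b

# Stub judge standing in for an LLM-based judge; a real judge would prompt
# an LLM and parse a numeric score from its reply.
def stub_judge(response: Response, metric: str) -> float:
    return 3.0

resp_a = Response(persona="persona_A", text="answer one")
resp_b = Response(persona="persona_B", text="answer two")
scores = phase1_scores(resp_a, stub_judge)   # four per-metric scores
winner = phase2_compare(resp_a, resp_b, stub_judge)
```

Comparing aggregate scores across persona labels attached to otherwise-identical responses is what lets the benchmark surface implicit bias in the judges themselves.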

Begin Your Journey

The application takes 10 minutes and is reviewed on a rolling basis. We look for strong technical signal—projects, coursework, or competition results—and a genuine curiosity to do real research.

If admitted, you will join a structured pipeline with direct mentorship, taking your work from ideation to submission at top venues such as NeurIPS, ACL, and EMNLP.