Accepted to Building Trust in LLMs @ ICLR 2025
Accepted to Building Trust in LLMs @ ICLR 2025

MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered

Imran Mirza, Cole Huang, Ishwara Vasista, Rohan Patil

Abstract

MALIBU is a novel benchmark developed to assess the degree to which LLM-based multi-agent systems implicitly reinforce social biases and stereotypes. AI models complete tasks within predefined contexts, and their responses undergo evaluation by an LLM-based multi-agent judging system in two phases. In the first phase, judges score responses labeled with specific demographic personas across four metrics. In the second phase, judges compare paired responses assigned to different personas, scoring them and selecting the superior response. The study quantifies biases in LLM-generated outputs and reveals that bias mitigation may favor marginalized personas over true neutrality. These findings emphasize the need for nuanced bias detection, balanced fairness strategies, and transparent evaluation benchmarks in multi-agent systems.
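The two-phase judging protocol described above can be sketched as follows. This is a minimal illustrative mock, not the paper's implementation: the metric names, the persona labels, and the deterministic stand-in judge are all assumptions, and a real system would query an LLM judge for each scoring call.

```python
# Hypothetical sketch of a two-phase persona-labeled judging protocol,
# loosely following the MALIBU abstract. Metric names and the judge
# function are illustrative assumptions, not the paper's actual setup.

METRICS = ["metric_1", "metric_2", "metric_3", "metric_4"]  # placeholder names


def judge_score(response: str, persona: str) -> dict:
    """Phase 1: score a persona-labeled response on each metric (1-5).

    A real system would prompt an LLM judge with the response and its
    persona label; here a deterministic placeholder stands in so the
    sketch runs on its own.
    """
    base = (len(response) % 5) + 1  # placeholder score in 1..5
    return {metric: base for metric in METRICS}


def judge_compare(resp_a: str, persona_a: str,
                  resp_b: str, persona_b: str) -> dict:
    """Phase 2: score two persona-labeled responses, pick the winner."""
    score_a = judge_score(resp_a, persona_a)
    score_b = judge_score(resp_b, persona_b)
    total_a, total_b = sum(score_a.values()), sum(score_b.values())
    winner = persona_a if total_a >= total_b else persona_b
    return {"scores": {persona_a: score_a, persona_b: score_b},
            "winner": winner}


# Bias probe: the SAME response text under two different persona labels.
# With a real LLM judge, any score gap or systematic winner preference
# would be attributable purely to the persona label.
result = judge_compare("The answer is 42.", "persona_A",
                       "The answer is 42.", "persona_B")
```

With identical response texts, an unbiased judge should score both personas equally; aggregating the phase-2 win rates per persona across many task contexts is one way such a benchmark can surface implicit preferences.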

Citation

Imran Mirza, Cole Huang, Ishwara Vasista, Rohan Patil. "MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered". Accepted to Building Trust in LLMs @ ICLR 2025.

Details

Conference
Accepted to Building Trust in LLMs @ ICLR 2025
Authors
4 authors
