Multiple LLM Agents Debate for Equitable Cultural Alignment
- URL: http://arxiv.org/abs/2505.24671v1
- Date: Fri, 30 May 2025 15:01:52 GMT
- Title: Multiple LLM Agents Debate for Equitable Cultural Alignment
- Authors: Dayeon Ki, Rachel Rudinger, Tianyi Zhou, Marine Carpuat
- Abstract summary: We introduce a Multi-Agent Debate framework, where two LLM-based agents debate over a cultural scenario and collaboratively reach a final decision. We evaluate these approaches on 7 open-weight LLMs (and 21 LLM combinations) using the NormAd-ETI benchmark for social etiquette norms in 75 countries. Experiments show that debate improves both overall accuracy and cultural group parity over single-LLM baselines.
- Score: 39.974611538629304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) need to adapt their predictions to diverse cultural contexts to benefit communities across the world. While previous efforts have focused on single-LLM, single-turn approaches, we propose to exploit the complementary strengths of multiple LLMs to promote cultural adaptability. We introduce a Multi-Agent Debate framework, where two LLM-based agents debate over a cultural scenario and collaboratively reach a final decision. We propose two variants: one where the LLM agents exclusively debate and another where they dynamically choose between self-reflection and debate during their turns. We evaluate these approaches on 7 open-weight LLMs (and 21 LLM combinations) using the NormAd-ETI benchmark for social etiquette norms in 75 countries. Experiments show that debate improves both overall accuracy and cultural group parity over single-LLM baselines. Notably, multi-agent debate enables relatively small LLMs (7-9B) to achieve accuracies comparable to those of a much larger model (27B parameters).
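To make the two variants concrete, here is a minimal Python sketch of the framework as the abstract describes it. Everything below is an illustrative assumption rather than the authors' exact protocol: the `LLMFn` callable stands in for any chat-completion call bound to one model, and the prompts, round count, and Yes/No/Neither answer space merely mirror the NormAd-ETI-style setup.

```python
# A minimal sketch of the two debate variants described in the abstract.
# All prompts and turn structure are illustrative assumptions.
from typing import Callable

LLMFn = Callable[[str], str]  # a chat-completion call bound to one model


def debate(agent_a: LLMFn, agent_b: LLMFn, scenario: str, rounds: int = 2) -> str:
    """Variant 1: the two agents exclusively debate, then issue a joint answer."""
    transcript = (f"Cultural scenario: {scenario}\n"
                  "Is the described action socially acceptable? Answer Yes, No, or Neither.")
    for _ in range(rounds):
        for name, agent in (("Agent A", agent_a), ("Agent B", agent_b)):
            reply = agent(transcript + f"\n{name}, give your answer and a brief justification.")
            transcript += f"\n{name}: {reply}"
    # Final collaborative decision, conditioned on the full debate history.
    return agent_a(transcript + "\nConsidering both positions, state the final joint answer.")


def flexible_turn(agent: LLMFn, name: str, transcript: str) -> str:
    """Variant 2: on each turn, an agent chooses self-reflection or debate."""
    choice = agent(transcript + f"\n{name}, reply REFLECT to privately reconsider "
                                "your own answer, or DEBATE to rebut the other agent.")
    if "REFLECT" in choice.upper():
        return agent(transcript + f"\n{name}, re-examine your reasoning and restate your answer.")
    return agent(transcript + f"\n{name}, address the other agent's latest argument directly.")
```

With any two chat-completion functions bound to (possibly different) open-weight models, `debate(agent_a, agent_b, scenario)` returns the joint decision; substituting `flexible_turn` for the inner turn in that loop yields the self-reflection/debate variant.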
Related papers
- Arbiters of Ambivalence: Challenges of Using LLMs in No-Consensus Tasks [52.098988739649705]
This study examines the biases and limitations of LLMs in three roles: answer generator, judge, and debater. We develop a "no-consensus" benchmark by curating examples that encompass a variety of a priori ambivalent scenarios. Our results show that while LLMs can provide nuanced assessments when generating open-ended answers, they tend to take a stance on no-consensus topics when employed as judges or debaters.
arXiv Detail & Related papers (2025-05-28T01:31:54Z) - WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models [1.094065133109559]
Large Language Models (LLMs) are predominantly trained and aligned in ways that reinforce Western-centric epistemologies and socio-cultural norms. We introduce WorldView-Bench, a benchmark designed to evaluate Global Cultural Inclusivity (GCI) in LLMs by analyzing their ability to accommodate diverse worldviews.
arXiv Detail & Related papers (2025-05-14T17:43:40Z) - Don't Stop the Multi-Party! On Generating Synthetic Multi-Party Conversations with Constraints [11.566214724241798]
Multi-Party Conversations (MPCs) are widely studied across disciplines, with social media as a primary data source due to their accessibility. This work explores the feasibility of generating diverse MPCs with instruction-tuned Large Language Models.
arXiv Detail & Related papers (2025-02-19T10:10:43Z) - When One LLM Drools, Multi-LLM Collaboration Rules [98.71562711695991]
We argue for multi-LLM collaboration to better represent the extensive diversity of data, skills, and people. We organize existing multi-LLM collaboration methods into a hierarchy, based on the level of access and information exchange. We envision multi-LLM collaboration as an essential path toward compositional intelligence and collaborative AI development.
arXiv Detail & Related papers (2025-02-06T21:13:44Z) - Toward Inclusive Educational AI: Auditing Frontier LLMs through a Multiplexity Lens [1.094065133109559]
This paper proposes a framework to assess and mitigate cultural bias within large language models (LLMs). Our analysis reveals that LLMs frequently exhibit cultural polarization, with biases appearing in both overt and subtle contextual cues. We propose two strategies: Contextually-Implemented Multiplex LLMs, which embed multiplex principles directly into the system prompt, and Multi-Agent System (MAS)-Implemented Multiplex LLMs, where multiple LLM agents, each representing distinct cultural viewpoints, collaboratively generate a balanced, synthesized response.
arXiv Detail & Related papers (2025-01-02T11:27:08Z) - Large Language Models Reflect the Ideology of their Creators [71.65505524599888]
Large language models (LLMs) are trained on vast amounts of data to generate natural language. This paper shows that the ideological stance of an LLM appears to reflect the worldview of its creators.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Diversity of Thought Elicits Stronger Reasoning Capabilities in Multi-Agent Debate Frameworks [0.0]
Chain-of-thought prompting, self-verification, and multi-agent debate are proposed to improve the reasoning and factual accuracy of large language models. We find that multi-agent debate helps at any model scale, and that diversity of thought elicits stronger reasoning in debating LLMs.
arXiv Detail & Related papers (2024-10-10T21:59:01Z) - CulturalTeaming: AI-Assisted Interactive Red-Teaming for Challenging LLMs' (Lack of) Multicultural Knowledge [69.82940934994333]
We introduce CulturalTeaming, an interactive red-teaming system that leverages human-AI collaboration to build challenging evaluation datasets.
Our study reveals that CulturalTeaming's various modes of AI assistance support annotators in creating cultural questions.
CULTURALBENCH-V0.1 is a compact yet high-quality evaluation dataset built from users' red-teaming attempts.
arXiv Detail & Related papers (2024-04-10T00:25:09Z) - Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key? [84.36332588191623]
We propose a novel group discussion framework to enrich the set of discussion mechanisms.
We observe that the multi-agent discussion performs better than a single agent only when there is no demonstration in the prompt.
arXiv Detail & Related papers (2024-02-28T12:04:05Z) - Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [85.3444184685235]
We propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in a "tit for tat" fashion and a judge manages the debate process to obtain a final solution.
Our framework encourages divergent thinking in LLMs, which would be helpful for tasks that require deep levels of contemplation (a minimal sketch of this judge-managed loop appears after this list).
arXiv Detail & Related papers (2023-05-30T15:25:45Z) - Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate [41.949869545423375]
Large Language Models (LLMs) have shown impressive capabilities in many applications, but they still face various inconsistency issues.
To examine whether LLMs can collaborate effectively to achieve a consensus for a shared goal, we focus on commonsense reasoning.
Our work contributes to understanding the inter-consistency among LLMs and lays the foundation for developing future collaboration methods.
arXiv Detail & Related papers (2023-05-19T11:15:33Z)
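As referenced in the MAD entry above, here is a minimal sketch of a judge-managed debate loop under the same illustrative assumptions as the earlier sketch: each agent is a hypothetical chat-completion callable, and the prompts, CONTINUE convention, and round cap are assumptions, not the paper's exact protocol.

```python
# A minimal sketch of a judge-managed MAD loop; prompts, the CONTINUE
# convention, and the round cap are illustrative assumptions.
from typing import Callable

LLMFn = Callable[[str], str]  # a chat-completion call bound to one model


def mad_with_judge(affirmative: LLMFn, negative: LLMFn, judge: LLMFn,
                   question: str, max_rounds: int = 3) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_rounds):
        # "Tit for tat": each side must rebut the other's latest argument.
        transcript += "\nAffirmative: " + affirmative(
            transcript + "\nArgue for your answer and rebut the other side.")
        transcript += "\nNegative: " + negative(
            transcript + "\nArgue against and rebut the other side.")
        verdict = judge(transcript + "\nIf the debate has converged, state the "
                                     "final solution; otherwise reply CONTINUE.")
        if "CONTINUE" not in verdict.upper():
            return verdict
    # Round cap reached: force the judge to decide on the full transcript.
    return judge(transcript + "\nThe debate is over; state the final solution.")
```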
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.