The Wisdom of Partisan Crowds: Comparing Collective Intelligence in
Humans and LLM-based Agents
- URL: http://arxiv.org/abs/2311.09665v2
- Date: Fri, 16 Feb 2024 16:43:53 GMT
- Title: The Wisdom of Partisan Crowds: Comparing Collective Intelligence in
Humans and LLM-based Agents
- Authors: Yun-Shiuan Chuang, Siddharth Suresh, Nikunj Harlalka, Agam Goyal,
Robert Hawkins, Sijia Yang, Dhavan Shah, Junjie Hu, Timothy T. Rogers
- Abstract summary: Human groups can converge on more accurate beliefs through deliberation even amid partisan bias, a phenomenon known as the "wisdom of partisan crowds."
We find that LLM-based partisan crowds display human-like partisan biases, yet also converge to more accurate beliefs through deliberation, as human groups do.
We identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas.
- Score: 7.986590413263814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human groups are able to converge on more accurate beliefs through
deliberation, even in the presence of polarization and partisan bias -- a
phenomenon known as the "wisdom of partisan crowds." Generated agents powered
by Large Language Models (LLMs) are increasingly used to simulate human
collective behavior, yet few benchmarks exist for evaluating their dynamics
against the behavior of human groups. In this paper, we examine the extent to
which the wisdom of partisan crowds emerges in groups of LLM-based agents that
are prompted to role-play as partisan personas (e.g., Democrat or Republican).
We find that they not only display human-like partisan biases, but also
converge to more accurate beliefs through deliberation as humans do. We then
identify several factors that interfere with convergence, including the use of
chain-of-thought prompting and a lack of detail in personas. Conversely,
fine-tuning on human data appears to enhance convergence. These findings show
the potential and limitations of LLM-based agents as a model of human
collective intelligence.
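To make the experimental setup concrete, the following is a minimal sketch (not the authors' code) of the kind of multi-round deliberation loop the abstract describes: each agent is prompted with a partisan persona, answers a numeric estimation question, then revises its answer after seeing the group's previous estimates, and convergence is tracked as the error of the group mean against ground truth. The `query_llm` wrapper, persona phrasing, prompt wording, and round count are illustrative assumptions.

```python
# Hypothetical sketch of a partisan "wisdom of crowds" deliberation loop.
# `query_llm` is a placeholder for whatever chat-completion call you use;
# the personas, task, and prompts are illustrative, not the authors' protocol.
import re
import statistics
from typing import Callable, List

def run_deliberation(
    query_llm: Callable[[str], str],   # stand-in for an LLM API wrapper
    personas: List[str],               # e.g., ["a Democrat", "a Republican"]
    question: str,                     # a numeric estimation question
    true_value: float,                 # ground truth, for scoring convergence
    rounds: int = 3,
) -> List[List[float]]:
    """Each round, every agent sees the group's prior estimates and revises its own."""
    history: List[List[float]] = []
    for r in range(rounds):
        estimates = []
        for persona in personas:
            peer_info = (
                f"Other group members previously estimated: {history[-1]}."
                if history else "This is the first round; no peer estimates yet."
            )
            prompt = (
                f"You are {persona}. {peer_info}\n"
                f"Question: {question}\n"
                "Answer with a single number only."
            )
            reply = query_llm(prompt)
            match = re.search(r"-?\d+(\.\d+)?", reply)
            estimates.append(float(match.group()) if match else float("nan"))
        history.append(estimates)
        group_error = abs(statistics.fmean(estimates) - true_value)
        print(f"round {r}: error of group mean = {group_error:.2f}")
    return history
```

In this sketch, swapping the number-only instruction for a chain-of-thought style prompt, or thinning the persona description, would be the natural places to probe the convergence failures the abstract reports.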
Related papers
- The Dynamics of Social Conventions in LLM populations: Spontaneous Emergence, Collective Biases and Tipping Points [0.0]
We investigate the dynamics of conventions within populations of Large Language Model (LLM) agents using simulated interactions.
We show that globally accepted social conventions can spontaneously arise from local interactions between communicating LLMs.
Minority groups of committed LLMs can drive social change by establishing new social conventions.
arXiv Detail & Related papers (2024-10-11T16:16:38Z)
- Beyond Demographics: Aligning Role-playing LLM-based Agents Using Human Belief Networks [5.76230391989518]
Using data from a human survey, we estimated a belief network encompassing 64 topics loading on nine non-overlapping latent factors.
We then seeded LLM-based agents with an opinion on one topic, and assessed the alignment of its expressed opinions on remaining test topics with corresponding human data.
Role-playing based on demographic information alone did not align LLM and human opinions, but seeding the agent with a single belief greatly improved alignment for topics related within the belief network, though not for topics outside it.
arXiv Detail & Related papers (2024-06-25T02:37:29Z)
- Evaluating Large Language Model Biases in Persona-Steered Generation [26.92498998306013]
We show that large language models (LLMs) are 9.7% less steerable towards incongruous personas than congruous ones.
Models that are fine-tuned with Reinforcement Learning from Human Feedback (RLHF) are more steerable, especially towards stances associated with political liberals and women.
arXiv Detail & Related papers (2024-05-30T17:06:03Z)
- SocialBench: Sociality Evaluation of Role-Playing Conversational Agents [85.6641890712617]
Large language models (LLMs) have advanced the development of various AI conversational agents.
SocialBench is the first benchmark designed to evaluate the sociality of role-playing conversational agents at both individual and group levels.
We find that agents that excel at the individual level do not necessarily show proficiency at the group level.
arXiv Detail & Related papers (2024-03-20T15:38:36Z)
- Can Large Language Model Agents Simulate Human Trust Behavior? [81.45930976132203]
We investigate whether Large Language Model (LLM) agents can simulate human trust behavior.
GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior.
We also probe the biases of agent trust and differences in agent trust towards other LLM agents and humans.
arXiv Detail & Related papers (2024-02-07T03:37:19Z)
- Limits of Large Language Models in Debating Humans [0.0]
Large Language Models (LLMs) have shown remarkable promise in their ability to interact proficiently with humans.
This paper tests the limits of current-day LLMs with a pre-registered study that integrates real people with LLM agents posing as people.
arXiv Detail & Related papers (2024-02-06T03:24:27Z)
- On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z)
- Do LLMs exhibit human-like response biases? A case study in survey design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z)
- MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)