The Empty Chair: Using LLMs to Raise Missing Perspectives in Policy Deliberations
- URL: http://arxiv.org/abs/2503.13812v1
- Date: Tue, 18 Mar 2025 01:45:08 GMT
- Title: The Empty Chair: Using LLMs to Raise Missing Perspectives in Policy Deliberations
- Authors: Suyash Fulay, Deb Roy
- Abstract summary: We develop and evaluate a tool that transcribes conversations in real time and simulates input from relevant but absent stakeholders. We deploy this tool in a 19-person student citizens' assembly on campus sustainability. Participants and facilitators found that the tool sparked new discussions and surfaced valuable perspectives they had not previously considered.
- Score: 12.862709275890563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deliberation is essential to well-functioning democracies, yet physical, economic, and social barriers often exclude certain groups, reducing representativeness and contributing to issues like group polarization. In this work, we explore the use of large language model (LLM) personas to introduce missing perspectives in policy deliberations. We develop and evaluate a tool that transcribes conversations in real time and simulates input from relevant but absent stakeholders. We deploy this tool in a 19-person student citizens' assembly on campus sustainability. Participants and facilitators found that the tool sparked new discussions and surfaced valuable perspectives they had not previously considered. However, they also noted that AI-generated responses were sometimes overly general and raised concerns about overreliance on AI for perspective-taking. Our findings highlight both the promise and potential risks of using LLMs to raise missing points of view in group deliberation settings.
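The paper describes the tool only at a high level (real-time transcription plus LLM persona simulation) and does not include its implementation. Below is a minimal sketch of the persona-prompting step, assuming a rolling transcript of discussion turns and a generic chat-completion hook; the function name, prompt wording, and `generate_fn` parameter are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): given recent discussion
# turns, ask an LLM to contribute in the voice of an absent stakeholder.
from typing import Callable, List


def empty_chair_response(
    transcript: List[str],
    stakeholder: str,
    generate_fn: Callable[[str], str],  # placeholder for any chat-completion call
    max_turns: int = 20,
) -> str:
    """Simulate input from an absent stakeholder, given recent discussion turns."""
    recent = "\n".join(transcript[-max_turns:])  # keep only the latest context
    prompt = (
        f"You are joining a policy deliberation as: {stakeholder}.\n"
        "Read the recent transcript and offer one concise perspective the group "
        "may be missing, grounded in the interests of the stakeholder you represent.\n\n"
        f"Transcript:\n{recent}\n\nYour contribution:"
    )
    return generate_fn(prompt)


# Example usage with a stubbed generator:
if __name__ == "__main__":
    turns = [
        "Facilitator: Should the campus phase out gas boilers by 2030?",
        "Student A: Yes, but only if dorm heating costs do not rise.",
    ]
    stub = lambda p: "[LLM response would appear here]"
    print(empty_chair_response(turns, "a low-income commuter student", stub))
```

Truncating to the most recent turns mirrors the real-time setting described in the abstract: the simulated stakeholder reacts to the live discussion rather than the full session history.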
Related papers
- Can LLMs Assist Annotators in Identifying Morality Frames? -- Case Study on Vaccination Debate on Social Media [22.976609127865732]
Large language models (LLMs) are adept at adapting to new tasks through few-shot learning. Our research explores LLMs' potential to assist human annotators in identifying morality frames within vaccination debates on social media.
arXiv Detail & Related papers (2025-02-04T04:10:23Z) - Examining Alignment of Large Language Models through Representative Heuristics: The Case of Political Stereotypes [20.407518082067437]
This study examines the alignment of large language models (LLMs) with human values in the domain of politics. We analyze the factors that contribute to LLMs' deviations from empirical positions on political issues. We find that while LLMs can mimic certain political parties' positions, they often exaggerate these positions more than human survey respondents do.
arXiv Detail & Related papers (2025-01-24T07:24:23Z) - Algorithmic Fidelity of Large Language Models in Generating Synthetic German Public Opinions: A Case Study [23.458234676060716]
This study investigates the algorithmic fidelity of large language models (LLMs).
We prompt different LLMs to generate synthetic public opinions reflective of German subpopulations by incorporating demographic features into the persona prompts.
Our results show that Llama performs better than other LLMs at representing subpopulations, particularly when there is lower opinion diversity within those groups.
arXiv Detail & Related papers (2024-12-17T18:46:32Z) - How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies [0.0]
Commercial model development has focused efforts on 'safety' training concerning legal liabilities at the expense of social impact evaluation.
This mimics a similar trend observed for search engine autocompletion some years earlier.
We present a novel evaluation task in the style of autocompletion prompts to assess stereotyping in LLMs.
arXiv Detail & Related papers (2024-07-16T14:04:35Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Large Language Models Cannot Self-Correct Reasoning Yet [78.16697476530994]
Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities.
Concerns persist regarding the accuracy and appropriateness of their generated content.
A contemporary methodology, self-correction, has been proposed as a remedy to these issues.
arXiv Detail & Related papers (2023-10-03T04:56:12Z) - Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
A promising approach to rectify these flaws is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
arXiv Detail & Related papers (2023-08-06T18:38:52Z) - Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [85.3444184685235]
We propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in a "tit for tat" fashion and a judge manages the debate process to obtain a final solution.
Our framework encourages divergent thinking in LLMs which would be helpful for tasks that require deep levels of contemplation.
arXiv Detail & Related papers (2023-05-30T15:25:45Z) - Perspectives on Large Language Models for Relevance Judgment [56.935731584323996]
When asked, large language models (LLMs) claim that they can assist with relevance judgments.
It is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.
arXiv Detail & Related papers (2023-04-13T13:08:38Z) - Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
arXiv Detail & Related papers (2023-03-30T17:17:08Z) - Should Machine Learning Models Report to Us When They Are Clueless? [0.0]
We report that AI models extrapolate outside their range of familiar data.
Knowing whether a model has extrapolated or not is a fundamental insight that should be included in explaining AI models.
arXiv Detail & Related papers (2022-03-23T01:50:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.