LLMs Homogenize Values in Constructive Arguments on Value-Laden Topics
- URL: http://arxiv.org/abs/2509.10637v1
- Date: Fri, 12 Sep 2025 18:47:12 GMT
- Title: LLMs Homogenize Values in Constructive Arguments on Value-Laden Topics
- Authors: Farhana Shahid, Stella Zhang, Aditya Vashistha
- Abstract summary: Large language models (LLMs) are increasingly used to promote prosocial and constructive discourse online. We show that LLMs diminish Conservative values while elevating prosocial values such as Benevolence and Universalism. When these comments were read by others, participants opposing same-sex marriage or Islam found human-written comments more aligned with their values.
- Score: 14.615844083836924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are increasingly used to promote prosocial and constructive discourse online. Yet little is known about how they negotiate and shape underlying values when reframing people's arguments on value-laden topics. We conducted experiments with 347 participants from India and the United States, who wrote constructive comments on homophobic and Islamophobic threads, and reviewed human-written and LLM-rewritten versions of these comments. Our analysis shows that the LLM systematically diminishes Conservative values while elevating prosocial values such as Benevolence and Universalism. When these comments were read by others, participants opposing same-sex marriage or Islam found human-written comments more aligned with their values, whereas those supportive of these communities found LLM-rewritten versions more aligned with their values. These findings suggest that LLM-driven value homogenization can shape how diverse viewpoints are represented in contentious debates on value-laden topics and may critically influence the dynamics of online discourse.
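To make the headline finding concrete, here is a minimal, hypothetical sketch of the kind of comparison the paper performs: scoring how strongly Schwartz value categories are expressed in a human-written comment versus its LLM rewrite. The keyword lexicon, the scoring rule, and the example comments are all invented stand-ins, not the paper's actual annotation pipeline.

```python
# Toy comparison of Schwartz value expression before/after an LLM rewrite.
# The lexicon and examples are assumptions made purely for illustration.
import re
from collections import Counter

# Tiny assumed cue-word lexicon for a few Schwartz value categories.
VALUE_LEXICON = {
    "Benevolence": {"kindness", "care", "support", "helping"},
    "Universalism": {"equality", "tolerance", "justice", "everyone"},
    "Conservation": {"tradition", "faith", "family", "order"},
}

def value_profile(text: str) -> Counter:
    """Count cue-word hits per value category in a comment."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return Counter({v: len(tokens & cues) for v, cues in VALUE_LEXICON.items()})

def value_shift(human: str, rewritten: str) -> dict:
    """Per-value change after rewriting (positive = elevated by the rewrite)."""
    before, after = value_profile(human), value_profile(rewritten)
    return {v: after[v] - before[v] for v in VALUE_LEXICON}

human_comment = "Our faith and family traditions matter and deserve respect."
llm_rewrite = "Everyone deserves kindness, tolerance, and equality in this debate."
print(value_shift(human_comment, llm_rewrite))
# -> {'Benevolence': 1, 'Universalism': 3, 'Conservation': -2}
```

On this toy input, the rewrite elevates Benevolence and Universalism while dropping Conservation cues, mirroring the direction of the shift the paper reports.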
Related papers
- Arbiters of Ambivalence: Challenges of Using LLMs in No-Consensus Tasks [52.098988739649705]
This study examines the biases and limitations of LLMs in three roles: answer generator, judge, and debater. We develop a "no-consensus" benchmark by curating examples that encompass a variety of a priori ambivalent scenarios. Our results show that while LLMs can provide nuanced assessments when generating open-ended answers, they tend to take a stance on no-consensus topics when employed as judges or debaters.
arXiv Detail & Related papers (2025-05-28T01:31:54Z)
- Comparing Moral Values in Western English-speaking societies and LLMs with Word Associations [8.445222972341803]
We study differences in associations from Western English-speaking communities and LLMs trained predominantly on English data. We propose a novel method to propagate moral values based on seed words derived from Moral Foundation Theory; a toy sketch of this propagation idea appears below.
arXiv Detail & Related papers (2025-05-26T08:29:15Z)
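A toy sketch of the seed-word propagation idea described above, under strong assumptions: real work would use trained word embeddings, whereas the 3-d vectors and the seed lists here are made up solely to keep the example self-contained and runnable.

```python
# Hypothetical propagation of moral-foundation labels from seed words.
import numpy as np

# Assumed seed words per Moral Foundations Theory dimension (tiny subset).
SEEDS = {"care": ["kindness", "compassion"], "authority": ["obedience", "tradition"]}

# Made-up embedding table, a stand-in for e.g. word2vec vectors.
EMB = {
    "kindness":   np.array([0.90, 0.10, 0.00]),
    "compassion": np.array([0.80, 0.20, 0.10]),
    "obedience":  np.array([0.10, 0.90, 0.20]),
    "tradition":  np.array([0.00, 0.80, 0.30]),
    "helping":    np.array([0.85, 0.15, 0.05]),
    "duty":       np.array([0.05, 0.85, 0.25]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def propagate(word: str) -> str:
    """Label a word with the foundation whose seed centroid is most similar."""
    centroids = {f: np.mean([EMB[s] for s in seeds], axis=0)
                 for f, seeds in SEEDS.items()}
    return max(centroids, key=lambda f: cosine(EMB[word], centroids[f]))

print(propagate("helping"))  # -> "care"
print(propagate("duty"))     # -> "authority"
```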
- Value Compass Benchmarks: A Platform for Fundamental and Validated Evaluation of LLMs Values [76.70893269183684]
Large Language Models (LLMs) achieve remarkable breakthroughs, and aligning their values with humans has become imperative for their responsible development. However, evaluations of LLMs' values that fulfill three desirable goals are still lacking.
arXiv Detail & Related papers (2025-01-13T05:53:56Z)
- Large Language Models Reflect the Ideology of their Creators [71.65505524599888]
Large language models (LLMs) are trained on vast amounts of data to generate natural language. This paper shows that the ideological stance of an LLM appears to reflect the worldview of its creators.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- Measuring Spiritual Values and Bias of Large Language Models [28.892254056685008]
Large language models (LLMs) have become an integral tool for users from various backgrounds. These models reflect linguistic and cultural nuances embedded in pre-training data. The values and perspectives inherent in this data can influence the behavior of LLMs, leading to potential biases.
arXiv Detail & Related papers (2024-10-15T14:33:23Z)
- Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs [6.06227550292852]
We study language ideologies in text produced by LLMs through a case study on English gendered language reform.
We find political bias: when asked to use language that is "correct" or "natural", LLMs use language most similarly to when asked to align with conservative (vs. progressive) values.
This shows how the language ideologies expressed in text produced by LLMs can vary, which may be unexpected to users.
arXiv Detail & Related papers (2024-09-20T18:55:48Z)
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z)
- Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements [30.970994382186944]
We improve the controllability of LLMs in generating statements supporting an argument the user defined in the prompt.
We develop a novel debate-and-tuning pipeline that finetunes LLMs to generate the statements obtained via debate.
arXiv Detail & Related papers (2024-02-16T12:00:34Z)
- Assessing LLMs for Moral Value Pluralism [2.860608352191896]
We utilize a Recognizing Value Resonance (RVR) NLP model to identify World Values Survey (WVS) values that resonate with, or conflict with, a given passage of text; a crude stand-in for this component is sketched below.
We find that LLMs exhibit several Western-centric value biases.
Our results highlight value misalignment across age groups, and a need for social-science-informed technological solutions.
arXiv Detail & Related papers (2023-12-08T16:18:15Z)
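The RVR component above can be pictured with a crude stand-in: score how strongly a passage "resonates" with WVS-style value statements. The real paper uses a trained NLP model; the statements and the overlap-based score below are assumptions that only illustrate the input/output shape of such a component.

```python
# Toy "resonance" scoring of a passage against WVS-style value statements.
WVS_STATEMENTS = {  # assumed paraphrases, not official WVS wording
    "religious_faith": "religious faith is an important child quality",
    "tolerance": "tolerance and respect for other people are important",
}

def resonance(passage: str, statement: str) -> float:
    """Jaccard word overlap as a toy resonance score in [0, 1]."""
    p, s = set(passage.lower().split()), set(statement.lower().split())
    return len(p & s) / len(p | s)

passage = "we should teach children tolerance and respect for other people"
scores = {k: resonance(passage, v) for k, v in WVS_STATEMENTS.items()}
print(max(scores, key=scores.get), scores)  # "tolerance" resonates most
```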
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes that contribute to the decision process; a sketch of such an extraction step appears after this entry.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
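The prompt-based attribute extraction described in the preceding entry might look roughly like the following sketch. `call_llm` is a stub standing in for any chat-model API, and the prompt wording and JSON schema are assumptions, not the authors' actual prompts.

```python
# Hypothetical prompt-based extraction of confounding/mediating attributes.
import json

def call_llm(prompt: str) -> str:
    """Stub: replace with a real model call. Returns a canned JSON answer."""
    return json.dumps({"confounders": ["author political affiliation"],
                       "mediators": ["emotional tone"]})

def extract_attributes(decision_text: str) -> dict:
    prompt = (
        "List attributes that may have confounded or mediated this decision.\n"
        f"Decision: {decision_text}\n"
        'Answer as JSON: {"confounders": [...], "mediators": [...]}'
    )
    return json.loads(call_llm(prompt))

print(extract_attributes("The model rated the conservative comment as toxic."))
```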
- Heterogeneous Value Alignment Evaluation for Large Language Models [91.96728871418]
The widespread deployment of Large Language Models (LLMs) has made it crucial to align their values with those of humans.
We propose a Heterogeneous Value Alignment Evaluation (HVAE) system to assess the success of aligning LLMs with heterogeneous values.
arXiv Detail & Related papers (2023-05-26T02:34:20Z)
- Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
arXiv Detail & Related papers (2023-03-30T17:17:08Z)
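One plausible way to quantify the misalignment this last paper reports is to compare an LM's answer distribution on a poll question with a human group's distribution over the same ordinal options. The sketch below uses a 1-D Wasserstein (earth-mover's) distance, a common choice for ordinal survey scales, though not necessarily the paper's exact formulation; the distributions are invented for illustration.

```python
# Toy opinion-misalignment metric over ordinal poll options.
from itertools import accumulate

def wasserstein_1d(p, q):
    """Earth-mover's distance between two distributions on ordered options."""
    cum_p, cum_q = list(accumulate(p)), list(accumulate(q))
    return sum(abs(a - b) for a, b in zip(cum_p, cum_q))

# Options: strongly disagree ... strongly agree (made-up example data).
lm_dist    = [0.05, 0.10, 0.15, 0.40, 0.30]
human_dist = [0.25, 0.30, 0.20, 0.15, 0.10]
print(f"misalignment = {wasserstein_1d(lm_dist, human_dist):.2f}")
```

A score of 0 would mean the two distributions coincide; larger values indicate the LM's answers sit further from the human group's along the ordinal scale.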