Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
- URL: http://arxiv.org/abs/2308.03313v2
- Date: Sat, 26 Aug 2023 01:53:37 GMT
- Title: Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
- Authors: Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
- Abstract summary: We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also find that by introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output.
- Score: 7.0012506428382375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in an
opinion shaping process different from traditional media, the impacts of LLMs are
increasingly recognized and have become a growing concern. However, knowledge about
how LLMs affect the process of opinion expression and exchange in social opinion
networks remains very limited. Here, we create an opinion network dynamics model to
encode the opinions of LLMs, the cognitive acceptability and usage strategies of
individuals, and simulate the impact of LLMs on opinion dynamics in a variety of
scenarios. The outcomes of the simulations inform effective, demand-oriented opinion
network interventions. The results of this study suggest that the output opinion of
LLMs has a unique and positive effect on the collective opinion difference. The
marginal effect of cognitive acceptability on collective opinion formation is
nonlinear and shows a decreasing trend. When people partially rely on LLMs, the
opinion exchange process becomes more intense and the diversity of opinion becomes
more favorable. In fact, there is 38.6% more opinion diversity when everyone
partially relies on LLMs than when the use of LLMs is prohibited entirely. The
optimal diversity of opinion was found when the fractions of people who do not use,
partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments
also find that by introducing extra agents with opposite/neutral/random opinions, we
can effectively mitigate the impact of biased/toxic output from LLMs. Our findings
provide valuable insights into opinion dynamics in the age of LLMs, highlighting the
need for customized interventions tailored to specific scenarios to address the
drawbacks of improper output and use of LLMs.
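The abstract describes an agent-based opinion network model but does not reproduce its update equations here. The sketch below is therefore a minimal, hypothetical illustration rather than the authors' actual model: a Deffuant-style bounded-confidence exchange extended with an LLM opinion term, with the population split into non-users, partial reliers, and full reliers at the 4:12:1 ratio reported above (roughly 23.5%, 70.6%, and 5.9% of agents). All parameter names and values (llm_opinion, acceptability, epsilon, mu) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population split (assumption): non-users : partial reliers :
# full reliers at the 4:12:1 ratio the abstract reports as optimal.
N = 170
usage = np.array(["none"] * 40 + ["partial"] * 120 + ["full"] * 10)

llm_opinion = 0.3      # assumed fixed output opinion of the LLM
acceptability = 0.5    # assumed cognitive-acceptability weight in [0, 1]
epsilon = 0.4          # bounded-confidence threshold for peer influence
mu = 0.3               # convergence rate toward a peer's opinion

opinions = rng.uniform(-1.0, 1.0, size=N)

def step(opinions):
    """One round of pairwise (Deffuant-style) exchange plus LLM influence."""
    new = opinions.copy()
    # Peer exchange: a random pair moves closer if their opinions are near.
    i, j = rng.choice(N, size=2, replace=False)
    if abs(opinions[i] - opinions[j]) < epsilon:
        new[i] += mu * (opinions[j] - opinions[i])
        new[j] += mu * (opinions[i] - opinions[j])
    # LLM influence: full reliers adopt the LLM opinion outright; partial
    # reliers blend it in proportion to cognitive acceptability.
    for k in (i, j):
        if usage[k] == "full":
            new[k] = llm_opinion
        elif usage[k] == "partial":
            new[k] += acceptability * mu * (llm_opinion - new[k])
    return np.clip(new, -1.0, 1.0)

for _ in range(20000):
    opinions = step(opinions)

# Opinion diversity, measured here simply as the standard deviation.
print(f"final opinion diversity (std): {opinions.std():.3f}")
```

Re-running the sketch with everyone set to "none" versus everyone set to "partial" gives one crude way to probe the diversity effect the abstract quantifies at 38.6%; likewise, pinning a few extra agents to the opposite of llm_opinion loosely mimics the mitigation experiment with opposite/neutral/random agents described above.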
Related papers
- Reinforcement Learning from Multi-role Debates as Feedback for Bias Mitigation in LLMs [6.090496490133132]
We find that involving LLMs in role-playing scenarios boosts their ability to recognize and mitigate biases.
We propose Reinforcement Learning from Multi-role Debates as Feedback (RLDF), a novel approach for bias mitigation replacing human feedback.
arXiv Detail & Related papers (2024-04-15T22:18:50Z)
- Wait, It's All Token Noise? Always Has Been: Interpreting LLM Behavior Using Shapley Value [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output.
arXiv Detail & Related papers (2024-03-29T22:49:43Z)
- Comprehensive Reassessment of Large-Scale Evaluation Outcomes in LLMs: A Multifaceted Statistical Approach [64.42462708687921]
Evaluations have revealed that factors such as scaling, training types, and architectures profoundly impact the performance of LLMs.
Our study embarks on a thorough re-examination of these LLMs, targeting the inadequacies in current evaluation methods.
This includes the application of ANOVA, Tukey HSD tests, GAMM, and clustering techniques.
arXiv Detail & Related papers (2024-03-22T14:47:35Z)
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z)
- Exploring Value Biases: How LLMs Deviate Towards the Ideal [57.99044181599786]
Large-Language-Models (LLMs) are deployed in a wide range of applications, and their response has an increasing social impact.
We show that value bias is strong in LLMs across different categories, similar to the results found in human studies.
arXiv Detail & Related papers (2024-02-16T18:28:43Z)
- Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking [49.02867094432589]
Large language models (LLMs) powered conversational search systems have already been used by hundreds of millions of people.
We investigate whether and how LLMs with opinion biases that either reinforce or challenge the user's view change the effect.
arXiv Detail & Related papers (2024-02-08T18:14:33Z)
- A Group Fairness Lens for Large Language Models [34.0579082699443]
Large language models can perpetuate biases and unfairness when deployed in social media contexts.
We propose evaluating LLM biases from a group fairness lens using a novel hierarchical schema characterizing diverse social groups.
We pioneer a novel chain-of-thought method GF-Think to mitigate biases of LLMs from a group fairness perspective.
arXiv Detail & Related papers (2023-12-24T13:25:15Z)
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
- Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z)
- A Survey on Fairness in Large Language Models [28.05516809190299]
Large Language Models (LLMs) have shown powerful performance and development prospects.
LLMs can capture social biases from unprocessed training data and propagate the biases to downstream tasks.
Unfair LLM systems have undesirable social impacts and potential harms.
arXiv Detail & Related papers (2023-08-20T03:30:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.