Polarization of Autonomous Generative AI Agents Under Echo Chambers
- URL: http://arxiv.org/abs/2402.12212v1
- Date: Mon, 19 Feb 2024 15:14:15 GMT
- Title: Polarization of Autonomous Generative AI Agents Under Echo Chambers
- Authors: Masaya Ohagi
- Abstract summary: An echo chamber often generates polarization, leading to conflicts caused by people with radical opinions.
We investigated the potential for polarization to occur among a group of autonomous AI agents based on generative language models.
We found that the group of agents based on ChatGPT tended to become polarized in echo chamber environments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online social networks often create echo chambers where people only hear
opinions reinforcing their beliefs. An echo chamber often generates
polarization, leading to conflicts caused by people with radical opinions, such
as the January 6, 2021, attack on the US Capitol. Echo chambers have been
viewed as a human-specific problem, but this implicit assumption is becoming
less reasonable as large language models, such as ChatGPT, acquire social
abilities. In response to this situation, we investigated the potential for
polarization to occur among a group of autonomous AI agents based on generative
language models in an echo chamber environment. We had AI agents discuss
specific topics and analyzed how the group's opinions changed as the discussion
progressed. As a result, we found that the group of agents based on ChatGPT
tended to become polarized in echo chamber environments. Our analysis of
opinion transitions shows that this result stems from ChatGPT's strong
prompt comprehension, which enables it to update its opinion by weighing
both its own and the surrounding agents' opinions. We conducted additional
experiments to investigate the specific conditions under which AI agents
tend to polarize, and identified factors, such as the agent's persona,
that strongly influence polarization. These factors should be monitored to
prevent the polarization of AI agents.
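To make the setup concrete, below is a minimal Python sketch of the discussion loop the abstract describes: agents repeatedly hear the stances of like-minded neighbors and restate their own. The `mock_llm` stand-in, the prompt wording, the -5 to +5 stance scale, and the nearest-neighbor echo-chamber rule are illustrative assumptions, not the paper's exact protocol; in the paper, replies come from ChatGPT.

```python
import random
import re

TOPIC = "Should social media platforms be regulated?"

def mock_llm(prompt: str) -> str:
    # Hypothetical stand-in for a chat-completion call (ChatGPT in the
    # paper). To keep the sketch executable, it simply averages every
    # stance number (formatted with two decimals) found in the prompt.
    stances = [float(x) for x in re.findall(r"-?\d+\.\d\d", prompt)]
    return f"{sum(stances) / len(stances):.2f}"

class Agent:
    def __init__(self, persona: str, opinion: float):
        self.persona = persona   # e.g. "a cautious economist" (illustrative)
        self.opinion = opinion   # scalar stance in [-5, +5] (assumed scale)

    def update(self, heard: list[float]) -> None:
        prompt = (
            f"You are {self.persona}. Topic: {TOPIC}\n"
            f"Your stance (-5 = strongly against, +5 = strongly for): "
            f"{self.opinion:.2f}\n"
            f"Stances you just heard: {[f'{o:.2f}' for o in heard]}\n"
            "Considering your own and the others' stances, reply with "
            "your updated stance as a single number."
        )
        reply = mock_llm(prompt)  # swap in a real LLM call here
        try:
            self.opinion = max(-5.0, min(5.0, float(reply)))
        except ValueError:
            pass  # keep the old stance if the reply is unparseable

def echo_chamber_round(agents: list[Agent], k: int = 3) -> None:
    # Echo chamber: each agent hears only the k agents whose stances
    # are closest to its own, i.e. like-minded neighbors.
    snapshot = [a.opinion for a in agents]
    for a in agents:
        heard = sorted(snapshot, key=lambda o: abs(o - a.opinion))[1:k + 1]
        a.update(heard)

agents = [Agent(f"participant {i}", random.uniform(-2.0, 2.0))
          for i in range(10)]
for step in range(5):
    echo_chamber_round(agents)
    print(step, [round(a.opinion, 2) for a in agents])
```

Swapping `mock_llm` for a real chat-completion call yields a rough reproduction harness; polarization would appear as the printed stances drifting toward the extremes over rounds rather than merely converging.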
Related papers
- Decoding Echo Chambers: LLM-Powered Simulations Revealing Polarization in Social Networks [12.812531689189065]
Impact of social media on critical issues such as echo chambers needs to be addressed.
Traditional research often oversimplifies emotional tendencies and opinion evolution into numbers and formulas.
We propose an LLM-based simulation for the social opinion network to evaluate and counter polarization phenomena.
arXiv Detail & Related papers (2024-09-28T12:49:02Z)
- Persona Inconstancy in Multi-Agent LLM Collaboration: Conformity, Confabulation, and Impersonation [16.82101507069166]
Multi-agent AI systems can be used for simulating collective decision-making in scientific and practical applications.
We examine AI agent ensembles engaged in cross-national collaboration and debate by analyzing their private responses and chat transcripts.
Our findings suggest that multi-agent discussions can support collective AI decisions that more often reflect diverse perspectives.
arXiv Detail & Related papers (2024-05-06T21:20:35Z)
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
- Reducing Opinion Echo-Chambers by Intelligent Placement of Moderate-Minded Agents [24.712838547388895]
We show the different behavior put forward by open- and close-minded agents towards an issue, when allowed to freely intermix and communicate.
We identify certain 'moderate'-minded agents, who possess the capability of manipulating and reducing the number of echo chambers.
The paper proposes an algorithm for intelligent placement of moderate-minded agents in the opinion-time spectrum by which the opinion echo chambers can be maximally reduced.
arXiv Detail & Related papers (2023-04-21T05:12:08Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- A Survey on Echo Chambers on Social Media: Description, Detection and Mitigation [13.299893581687702]
Echo chambers on social media are a significant problem that can elicit a number of negative consequences.
We show the mechanisms, both algorithmic and psychological, that lead to the formation of echo chambers.
arXiv Detail & Related papers (2021-12-09T18:20:25Z)
- Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Towards control of opinion diversity by introducing zealots into a polarised social group [7.9603223299524535]
We explore a method to influence or even control the diversity of opinions within a polarised social group.
We leverage the voter model in which users hold binary opinions and repeatedly update their beliefs based on others they connect with.
We inject zealots into a polarised network in order to shift the average opinion towards any target value; a minimal code sketch of this mechanism appears after this list.
arXiv Detail & Related papers (2020-06-12T15:27:30Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, such as Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
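The zealot-injection entry above ("Towards control of opinion diversity by introducing zealots into a polarised social group") describes a concrete, well-known mechanism: agents hold binary opinions and repeatedly copy others, while injected zealots never change. Below is a minimal executable sketch of that mechanism; the fully mixed population (any agent can serve as the copied neighbor) and all parameter values are illustrative assumptions rather than that paper's setup.

```python
import random

def run_voter_model(n_regular=100, n_zealots=10, zealot_opinion=1,
                    steps=20000, seed=0):
    rng = random.Random(seed)
    # Regular agents start with random binary opinions; zealots hold a
    # fixed opinion and are never updated.
    opinions = [rng.choice([0, 1]) for _ in range(n_regular)]
    opinions += [zealot_opinion] * n_zealots
    for _ in range(steps):
        i = rng.randrange(n_regular)       # only regular agents update
        j = rng.randrange(len(opinions))   # anyone can be the copied neighbor
        if i != j:
            opinions[i] = opinions[j]      # classic voter-model update
    return sum(opinions[:n_regular]) / n_regular

# More zealots holding opinion 1 pull the regular agents' average toward 1.
for nz in (0, 10, 40):
    print(nz, run_voter_model(n_zealots=nz))
```

Increasing `n_zealots` shifts the regular agents' average opinion toward the zealots' fixed value, which is the control lever that entry describes.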
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.