Filter bubbles and affective polarization in user-personalized large
language model outputs
- URL: http://arxiv.org/abs/2311.14677v1
- Date: Tue, 31 Oct 2023 18:19:28 GMT
- Title: Filter bubbles and affective polarization in user-personalized large
language model outputs
- Authors: Tomo Lazovich
- Abstract summary: Large language models (LLMs) have led to a push for increased personalization of model outputs to individual users.
We explore how prompting a leading large language model, ChatGPT-3.5, with a user's political affiliation prior to asking factual questions leads to differing results.
- Score: 0.15540058359482856
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Echoing the history of search engines and social media content rankings, the
advent of large language models (LLMs) has led to a push for increased
personalization of model outputs to individual users. In the past, personalized
recommendations and ranking systems have been linked to the development of
filter bubbles (serving content that may confirm a user's existing biases) and
affective polarization (strong negative sentiment towards those with differing
views). In this work, we explore how prompting a leading large language model,
ChatGPT-3.5, with a user's political affiliation prior to asking factual
questions about public figures and organizations leads to differing results. We
observe that left-leaning users tend to receive more positive statements about
left-leaning political figures and media outlets, while right-leaning users see
more positive statements about right-leaning entities. This pattern holds
across presidential candidates, members of the U.S. Senate, and media
organizations with ratings from AllSides. When qualitatively evaluating some of
these outputs, there is evidence that particular facts are included or excluded
based on the user's political affiliation. These results illustrate that
personalizing LLMs based on user demographics carry the same risks of affective
polarization and filter bubbles that have been seen in other personalized
internet technologies. This ``failure mode" should be monitored closely as
there are more attempts to monetize and personalize these models.
Related papers
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics.
Our investigation delves into the political alignment of LLMs across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z) - Dynamics of Ideological Biases of Social Media Users [0.0]
We show that the evolution of online platform-wide opinion groups is driven by the desire to hold popular opinions.
We focus on two social media: Twitter and Parler, on which we tracked the political biases of their users.
arXiv Detail & Related papers (2023-09-27T19:39:07Z) - Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
arXiv Detail & Related papers (2023-03-30T17:17:08Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in the tendency of the user to engage with both types of content, showing a slight preference for the questionable ones which may account for a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Political Polarization in Online News Consumption [14.276551496332154]
Political polarization appears to be on the rise, as measured by voting behavior.
Research over the years has focused on the role of the Web as a driver of polarization.
We show that online news consumption follows a polarized pattern, where users' visits to news sources aligned with their own political leaning are substantially longer than their visits to other news sources.
arXiv Detail & Related papers (2021-04-09T22:35:46Z) - Exploring Polarization of Users Behavior on Twitter During the 2019
South American Protests [15.065938163384235]
We explore polarization on Twitter in a different context, namely the protest that paralyzed several countries in the South American region in 2019.
By leveraging users' endorsement of politicians' tweets and hashtag campaigns with defined stances towards the protest (for or against), we construct a weakly labeled stance dataset with millions of users.
We find empirical evidence of the "filter bubble" phenomenon during the event, as we not only show that the user bases are homogeneous in terms of stance, but the probability that a user transitions from media of different clusters is low.
arXiv Detail & Related papers (2021-04-05T07:13:18Z) - Right and left, partisanship predicts (asymmetric) vulnerability to
misinformation [71.46564239895892]
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter.
We find that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.
arXiv Detail & Related papers (2020-10-04T01:36:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.