CommunityLM: Probing Partisan Worldviews from Language Models
- URL: http://arxiv.org/abs/2209.07065v1
- Date: Thu, 15 Sep 2022 05:52:29 GMT
- Title: CommunityLM: Probing Partisan Worldviews from Language Models
- Authors: Hang Jiang, Doug Beeferman, Brandon Roy, Deb Roy
- Abstract summary: We use a framework that probes community-specific responses to the same survey questions using community language models, CommunityLM.
In our framework, we identify committed partisan members for each community on Twitter and fine-tune LMs on the tweets authored by them.
We then assess the worldviews of the two groups using prompt-based probing of their corresponding LMs.
- Score: 11.782896991259001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As political attitudes have diverged ideologically in the United States,
political speech has diverged linguistically. The ever-widening polarization
between the US political parties is accelerated by an erosion of mutual
understanding between them. We aim to make these communities more
comprehensible to each other with a framework that probes community-specific
responses to the same survey questions using community language models,
CommunityLM. In our framework, we identify committed partisan members for each
community on Twitter and fine-tune LMs on the tweets authored by them. We then
assess the worldviews of the two groups using prompt-based probing of their
corresponding LMs, with prompts that elicit opinions about public figures and
groups surveyed by the American National Election Studies (ANES) 2020
Exploratory Testing Survey. We compare the responses generated by the LMs to
the ANES survey results, and find a level of alignment that greatly exceeds
several baseline methods. Our work aims to show that we can use community LMs
to query the worldview of any group of people given a sufficiently large sample
of their social media discussions or media diet.
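The probing step described in the abstract can be illustrated with a minimal sketch: sample continuations of an opinion prompt (e.g., "&lt;entity&gt; is") from a community-tuned causal LM and aggregate the sentiment of the continuations into a stance score. The sketch below uses HuggingFace pipelines; the model names and the sentiment-based scoring are illustrative assumptions, not the authors' released checkpoints or their exact evaluation procedure.

```python
# Illustrative sketch of prompt-based probing of a community LM.
# Assumes a HuggingFace-style causal LM fine-tuned on one community's tweets;
# model names in the usage example are hypothetical placeholders.
from transformers import pipeline

def probe_stance(model_name: str, entity: str, n_samples: int = 20) -> float:
    """Generate continuations of an opinion prompt and average their sentiment."""
    generator = pipeline("text-generation", model=model_name)
    sentiment = pipeline("sentiment-analysis")  # default English sentiment classifier

    prompt = f"{entity} is"
    outputs = generator(
        prompt,
        max_new_tokens=30,
        num_return_sequences=n_samples,
        do_sample=True,
    )
    texts = [o["generated_text"] for o in outputs]
    scores = sentiment(texts)
    # Map sentiment labels to a signed score in [-1, 1] and average over samples.
    signed = [s["score"] if s["label"] == "POSITIVE" else -s["score"] for s in scores]
    return sum(signed) / len(signed)

# Hypothetical usage: probe both community models with the same prompt and
# compare the resulting stance scores against survey-based rankings.
# for name in ["community-lm-democrat", "community-lm-republican"]:
#     print(name, probe_stance(name, "Joe Biden"))
```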
Related papers
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Can LLMs Help Predict Elections? (Counter)Evidence from the World's Largest Democracy [3.0915192911449796]
How social media shapes public opinion and influences political outcomes has been a popular field of inquiry.
We introduce a new method: harnessing the capabilities of Large Language Models (LLMs) to examine social media data and forecast election outcomes.
arXiv Detail & Related papers (2024-05-13T15:13:23Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Whose Emotions and Moral Sentiments Do Language Models Reflect? [5.4547979989237225]
Language models (LMs) are known to represent the perspectives of some social groups better than others.
We find significant misalignment of LMs with both ideological groups.
Even after steering the LMs towards specific ideological perspectives, the misalignment and liberal tendencies of the model persist.
arXiv Detail & Related papers (2024-02-16T22:34:53Z) - Reading Between the Tweets: Deciphering Ideological Stances of Interconnected Mixed-Ideology Communities [5.514795777097036]
We study discussions of the 2020 U.S. election on Twitter to identify complex interacting communities.
We introduce a novel approach that harnesses message passing when finetuning language models (LMs) to probe the nuanced ideologies of these communities.
arXiv Detail & Related papers (2024-02-02T01:39:00Z) - Towards Measuring the Representation of Subjective Global Opinions in Language Models [26.999751306332165]
Large language models (LLMs) may not equitably represent diverse global perspectives on societal issues.
We develop a quantitative framework to evaluate whose opinions model-generated responses are more similar to.
We release our dataset for others to use and build on.
arXiv Detail & Related papers (2023-06-28T17:31:53Z) - Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
arXiv Detail & Related papers (2023-03-30T17:17:08Z) - Reaching the bubble may not be enough: news media role in online political polarization [58.720142291102135]
A way of reducing polarization would be by distributing cross-partisan news among individuals with distinct political orientations.
This study investigates whether this holds in the context of nationwide elections in Brazil and Canada.
arXiv Detail & Related papers (2021-09-18T11:34:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.