Large Language Models Can Be Used to Estimate the Latent Positions of
Politicians
- URL: http://arxiv.org/abs/2303.12057v4
- Date: Tue, 26 Sep 2023 21:24:13 GMT
- Title: Large Language Models Can Be Used to Estimate the Latent Positions of
Politicians
- Authors: Patrick Y. Wu, Jonathan Nagler, Joshua A. Tucker, Solomon Messing
- Abstract summary: Existing approaches to estimating politicians' latent positions often fail when relevant data is limited.
We leverage the embedded knowledge in generative large language models to measure lawmakers' positions along specific political or policy dimensions.
We estimate novel measures of U.S. senators' positions on liberal-conservative ideology, gun control, and abortion.
- Score: 3.9940425551415597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing approaches to estimating politicians' latent positions along
specific dimensions often fail when relevant data is limited. We leverage the
embedded knowledge in generative large language models (LLMs) to address this
challenge and measure lawmakers' positions along specific political or policy
dimensions. We prompt an instruction/dialogue-tuned LLM to pairwise compare
lawmakers and then scale the resulting graph using the Bradley-Terry model. We
estimate novel measures of U.S. senators' positions on liberal-conservative
ideology, gun control, and abortion. Our liberal-conservative scale, used to
validate LLM-driven scaling, strongly correlates with existing measures and
offsets interpretive gaps, suggesting LLMs synthesize relevant data from
internet and digitized media rather than memorizing existing measures. Our gun
control and abortion measures -- the first of their kind -- differ from the
liberal-conservative scale in face-valid ways and predict interest group
ratings and legislator votes better than ideology alone. Our findings suggest
LLMs hold promise for solving complex social science measurement problems.
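The abstract's core recipe is a two-step pipeline: prompt an instruction/dialogue-tuned LLM to pairwise compare lawmakers along a policy dimension, then scale the resulting win/loss graph with the Bradley-Terry model. Below is a minimal sketch of that pipeline, assuming a hypothetical `ask_llm` helper, placeholder senator names, and illustrative prompt wording; it is not the authors' released code, and the Bradley-Terry fit uses the standard MM updates rather than whatever estimator the paper itself employs.

```python
import itertools
import numpy as np

# Placeholder names; the paper compares U.S. senators along a chosen dimension.
SENATORS = ["Senator A", "Senator B", "Senator C", "Senator D"]

def ask_llm(prompt: str) -> str:
    """Stub for a call to an instruction/dialogue-tuned LLM (hypothetical)."""
    raise NotImplementedError("plug in a model client here")

def collect_pairwise_wins(dimension: str = "stricter gun control") -> np.ndarray:
    """Prompt the LLM to compare every pair of senators on one dimension."""
    n = len(SENATORS)
    wins = np.zeros((n, n))  # wins[i, j] = times i was judged more supportive than j
    for i, j in itertools.combinations(range(n), 2):
        prompt = (f"Who is more supportive of {dimension}: "
                  f"{SENATORS[i]} or {SENATORS[j]}? Answer with one name only.")
        answer = ask_llm(prompt)
        if SENATORS[i] in answer:
            wins[i, j] += 1
        elif SENATORS[j] in answer:
            wins[j, i] += 1
    return wins

def bradley_terry_scale(wins: np.ndarray, iters: int = 500) -> np.ndarray:
    """Fit Bradley-Terry scores with the standard MM updates (Hunter, 2004)."""
    n = wins.shape[0]
    games = wins + wins.T          # total comparisons per pair
    p = np.ones(n)                 # latent "strength" of each senator
    for _ in range(iters):
        for i in range(n):
            denom = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            if denom > 0:
                p[i] = max(wins[i].sum() / denom, 1e-12)  # avoid exact zeros
        p = p / p.sum()            # normalize: scale is identified only up to a constant
    return np.log(p)               # log-strengths serve as the latent positions

# positions = bradley_terry_scale(collect_pairwise_wins("liberal ideology"))
```

If each pair is queried repeatedly (for example with the names presented in both orders), the judgments simply accumulate in the same wins matrix before fitting, which is what lets the Bradley-Terry step recover a continuous scale from purely binary comparisons.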
Related papers
- Multilingual Political Views of Large Language Models: Identification and Steering [9.340686908318776]
Large language models (LLMs) are increasingly used in everyday tools and applications, raising concerns about their potential influence on political views. We evaluate seven models across 14 languages using the Political Compass Test with 11 semantically equivalent paraphrases per statement to ensure robust measurement. Our results reveal that larger models consistently shift toward libertarian-left positions, with significant variations across languages and model families.
arXiv Detail & Related papers (2025-07-30T12:42:35Z) - Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models [72.89977583150748]
We propose a novel methodology to assess how Large Language Models align with broader geopolitical value systems. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin.
arXiv Detail & Related papers (2025-06-15T07:52:07Z) - Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters [0.571853823214391]
Large language models (LLMs) are used by many as a primary source of information on various topics. LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing opinions. We quantify the political bias of popular LLMs in the context of the recent vote of the German Bundestag using the score produced by the Wahl-O-Mat.
arXiv Detail & Related papers (2025-05-07T13:18:41Z) - Probing the Subtle Ideological Manipulation of Large Language Models [0.3745329282477067]
Large Language Models (LLMs) have transformed natural language processing, but concerns have emerged about their susceptibility to ideological manipulation.
We introduce a novel multi-task dataset designed to reflect diverse ideological positions through tasks such as ideological QA, statement ranking, manifesto cloze completion, and Congress bill comprehension.
Our findings indicate that fine-tuning significantly enhances nuanced ideological alignment, while explicit prompts provide only minor refinements.
arXiv Detail & Related papers (2025-04-19T13:11:50Z) - Linear Representations of Political Perspective Emerge in Large Language Models [2.2462222233189286]
Large language models (LLMs) have demonstrated the ability to generate text that realistically reflects a range of different subjective human perspectives.
This paper studies how LLMs are seemingly able to reflect more liberal versus more conservative viewpoints among other political perspectives in American politics.
arXiv Detail & Related papers (2025-03-03T21:59:01Z) - PRISM: A Methodology for Auditing Biases in Large Language Models [9.751718230639376]
PRISM is a flexible, inquiry-based methodology for auditing Large Language Models.
It seeks to elicit a model's positions indirectly, through task-based inquiry prompting rather than direct questions about its preferences.
arXiv Detail & Related papers (2024-10-24T16:57:20Z) - Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Large Language Models' Detection of Political Orientation in Newspapers [0.0]
Various methods have been developed to better understand newspapers' positioning.
The advent of Large Language Models (LLMs) holds disruptive potential to assist researchers and citizens alike.
We compare how four widely employed LLMs rate the positioning of newspapers and whether their answers align with one another.
On a worldwide dataset, individual LLMs position newspaper articles strikingly differently, hinting at inconsistent training or excessive randomness in the algorithms.
arXiv Detail & Related papers (2024-05-23T06:18:03Z) - Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z) - Measuring Political Bias in Large Language Models: What Is Said and How It Is Said [46.1845409187583]
We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues.
Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias.
arXiv Detail & Related papers (2024-03-27T18:22:48Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - LM-Polygraph: Uncertainty Estimation for Language Models [71.21409522341482]
Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of large language models (LLMs).
We introduce LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python.
It introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores.
arXiv Detail & Related papers (2023-11-13T15:08:59Z) - Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
arXiv Detail & Related papers (2023-03-30T17:17:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.