Large Language Models Can Be Used to Estimate the Latent Positions of
Politicians
- URL: http://arxiv.org/abs/2303.12057v4
- Date: Tue, 26 Sep 2023 21:24:13 GMT
- Title: Large Language Models Can Be Used to Estimate the Latent Positions of
Politicians
- Authors: Patrick Y. Wu, Jonathan Nagler, Joshua A. Tucker, Solomon Messing
- Abstract summary: Existing approaches to estimating politicians' latent positions often fail when relevant data is limited.
We leverage the embedded knowledge in generative large language models to measure lawmakers' positions along specific political or policy dimensions.
We estimate novel measures of U.S. senators' positions on liberal-conservative ideology, gun control, and abortion.
- Score: 3.9940425551415597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing approaches to estimating politicians' latent positions along
specific dimensions often fail when relevant data is limited. We leverage the
embedded knowledge in generative large language models (LLMs) to address this
challenge and measure lawmakers' positions along specific political or policy
dimensions. We prompt an instruction/dialogue-tuned LLM to pairwise compare
lawmakers and then scale the resulting graph using the Bradley-Terry model. We
estimate novel measures of U.S. senators' positions on liberal-conservative
ideology, gun control, and abortion. Our liberal-conservative scale, used to
validate LLM-driven scaling, strongly correlates with existing measures and
offsets interpretive gaps, suggesting LLMs synthesize relevant data from
internet and digitized media rather than memorizing existing measures. Our gun
control and abortion measures -- the first of their kind -- differ from the
liberal-conservative scale in face-valid ways and predict interest group
ratings and legislator votes better than ideology alone. Our findings suggest
LLMs hold promise for solving complex social science measurement problems.
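The core technical step described in the abstract is scaling an LLM-generated pairwise-comparison graph with the Bradley-Terry model. Below is a minimal, self-contained sketch of that scaling step using Hunter's MM algorithm in NumPy; the win-count matrix and lawmaker labels are hypothetical placeholders, and the paper's actual prompting and estimation pipeline may differ.

```python
import numpy as np


def bradley_terry(wins, n_iter=1000, tol=1e-8):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] counts how often item i "beat" item j (e.g., how often the
    LLM judged lawmaker i as more conservative than lawmaker j). Uses
    Hunter's MM updates; returns log-strengths as latent positions.
    """
    n = wins.shape[0]
    comparisons = wins + wins.T            # n_ij: total comparisons of i and j
    total_wins = wins.sum(axis=1)          # W_i: total wins for item i
    pi = np.ones(n)                        # initial strengths
    for _ in range(n_iter):
        denom = comparisons / (pi[:, None] + pi[None, :])
        np.fill_diagonal(denom, 0.0)       # exclude self-comparisons
        new_pi = total_wins / denom.sum(axis=1)
        new_pi /= new_pi.sum()             # normalize (scale is not identified)
        if np.max(np.abs(new_pi - pi)) < tol:
            pi = new_pi
            break
        pi = new_pi
    return np.log(pi)


# Toy usage with three hypothetical lawmakers A, B, C:
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
positions = bradley_terry(wins)
print(dict(zip("ABC", positions.round(2))))
```

In practice the win counts would come from repeated LLM prompts asking which of two lawmakers sits further along the dimension of interest; the sketch covers only the scaling of those judgments, not the prompting.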
Related papers
- PRISM: A Methodology for Auditing Biases in Large Language Models [9.751718230639376]
PRISM is a flexible, inquiry-based methodology for auditing Large Language Models.
It seeks to elicit such positions indirectly through task-based inquiry prompting rather than direct questioning about those preferences.
arXiv Detail & Related papers (2024-10-24T16:57:20Z)
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- Large Language Models' Detection of Political Orientation in Newspapers [0.0]
Various methods have been developed to better understand newspapers' positioning.
The advent of Large Language Models (LLMs) holds disruptive potential to assist researchers and citizens alike.
We compare how four widely employed LLMs rate the positioning of newspapers and examine whether their answers align with one another.
Over a worldwide dataset, individual LLMs position newspaper articles strikingly differently, hinting at inconsistent training or excessive randomness in the algorithms.
arXiv Detail & Related papers (2024-05-23T06:18:03Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- Measuring Political Bias in Large Language Models: What Is Said and How It Is Said [46.1845409187583]
We propose to measure political bias in LLMs by analyzing both the content and the style of what they generate about political issues.
Our proposed measure covers issues such as reproductive rights and climate change and considers both the content (the substance of the generation) and the style (the lexical polarity) of such bias.
arXiv Detail & Related papers (2024-03-27T18:22:48Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z)
- LM-Polygraph: Uncertainty Estimation for Language Models [71.21409522341482]
Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of large language models (LLMs).
We introduce LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python.
It introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores; a minimal generic sketch of one such confidence signal appears after this list.
arXiv Detail & Related papers (2023-11-13T15:08:59Z)
- Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
arXiv Detail & Related papers (2023-03-30T17:17:08Z)
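As a side note on the uncertainty-estimation idea in the LM-Polygraph entry above, the sketch below shows one generic sequence-level confidence signal (mean negative log-likelihood and its perplexity); it is an illustrative example under assumed inputs, not LM-Polygraph's actual API.

```python
import numpy as np


def sequence_uncertainty(token_logprobs):
    """Generic sequence-level uncertainty from per-token log-probabilities.

    token_logprobs: log-probabilities the model assigned to each generated
    token (hypothetical input; how to obtain them depends on your LLM stack).
    Returns mean negative log-likelihood and perplexity (higher = less sure).
    """
    lp = np.asarray(token_logprobs, dtype=float)
    mean_nll = float(-lp.mean())
    return {"mean_nll": mean_nll, "perplexity": float(np.exp(mean_nll))}


# Toy usage: a fairly confident generation vs. a shakier one.
print(sequence_uncertainty([-0.1, -0.2, -0.05, -0.3]))
print(sequence_uncertainty([-1.2, -2.5, -0.9, -3.1]))
```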