L(u)PIN: LLM-based Political Ideology Nowcasting
- URL: http://arxiv.org/abs/2405.07320v1
- Date: Sun, 12 May 2024 16:14:07 GMT
- Title: L(u)PIN: LLM-based Political Ideology Nowcasting
- Authors: Ken Kato, Annabelle Purnomo, Christopher Cochrane, Raeid Saqur
- Abstract summary: We present a method to analyze ideological positions of individual parliamentary representatives by leveraging the latent knowledge of LLMs.
The method evaluates a politician's stance along an axis of our choice, allowing us to flexibly measure positions on any topic or controversy of interest.
- Score: 1.124958340749622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quantitative analysis of political ideological positions is a difficult task. Past work has relied on parliamentary voting records, party manifestos, and parliamentary speech to estimate political disagreement and polarization across political systems. However, these methods share a common challenge: the limited amount of data available for analysis. They have also frequently focused on more general questions, such as the overall polarization of a parliament or party-wide ideological positions. In this paper, we present a method to analyze the ideological positions of individual parliamentary representatives by leveraging the latent knowledge of LLMs. The method evaluates a politician's stance along an axis of our choice, allowing us to flexibly measure positions on any topic or controversy of interest. We achieve this by using a fine-tuned BERT classifier to extract opinion-based sentences from the speeches of representatives and projecting the average BERT embedding for each representative onto a pair of reference seeds. These reference seeds are either manually chosen representatives known to hold opposing views on a particular topic, or sentences generated with OpenAI's GPT-4 model by prompting it to produce a speech that a politician defending a particular position would give.
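For readers who want to see the core projection step in concrete terms, the following is a minimal sketch based only on the abstract above, not the authors' released code; the encoder name, the mean-pooling scheme, and the toy seed speeches are illustrative assumptions.

```python
# Sketch of the embedding-projection step described in the abstract.
# Assumptions: bert-base-uncased as the encoder, mean pooling over tokens,
# and hypothetical seed speeches; the paper's opinion-sentence filter
# (a fine-tuned BERT classifier) is assumed to have run beforehand.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def embed(sentences):
    """Mean-pooled BERT embeddings for a list of sentences."""
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state       # (B, T, H)
    mask = inputs["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)          # (B, H)
    return pooled.numpy()

def representative_vector(opinion_sentences):
    """Average embedding over a representative's opinion-based sentences."""
    return embed(opinion_sentences).mean(axis=0)

def project_on_axis(rep_vec, seed_pro, seed_con):
    """Scalar position on the axis spanned by the two reference seeds
    (hand-picked politicians or GPT-4-generated speeches)."""
    axis = seed_pro - seed_con
    midpoint = (seed_pro + seed_con) / 2.0
    return float(np.dot(rep_vec - midpoint, axis) / np.linalg.norm(axis))

# Toy usage with hypothetical seed speeches:
pro_seed = representative_vector(["We must expand this policy without delay."])
con_seed = representative_vector(["This policy is harmful and must be repealed."])
rep = representative_vector(["I have long argued for expanding this policy."])
print(project_on_axis(rep, pro_seed, con_seed))  # > 0 leans toward the 'pro' seed
```

The sign of the projection indicates which reference seed a representative is closer to; per the abstract, the GPT-4 variant obtains its seed speeches by prompting the model to argue for each side of the chosen controversy.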
Related papers
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Measuring Political Bias in Large Language Models: What Is Said and How It Is Said [46.1845409187583]
We propose to measure political bias in LLMs by analyzing both the content and the style of what they generate about political issues.
Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias.
arXiv Detail & Related papers (2024-03-27T18:22:48Z) - Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs [18.836470390824633]
We audit Llama Chat in the context of EU politics to analyze the model's political knowledge and its ability to reason in context.
We adapt, i.e., further fine-tune, Llama Chat on speeches of individual euro-parties from debates in the European Parliament to reevaluate its political leaning.
arXiv Detail & Related papers (2024-03-20T13:42:57Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Modelling Political Coalition Negotiations Using LLM-based Agents [53.934372246390495]
We introduce coalition negotiations as a novel NLP task, and model it as a negotiation between large language model-based agents.
We introduce a multilingual dataset, POLCA, comprising manifestos of European political parties and coalition agreements over a number of elections in these countries.
We propose a hierarchical Markov decision process designed to simulate the process of coalition negotiation between political parties and predict the outcomes.
arXiv Detail & Related papers (2024-02-18T21:28:06Z) - Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z) - Multilingual estimation of political-party positioning: From label aggregation to long-input Transformers [3.651047982634467]
We implement and compare two approaches to automatic scaling analysis of political-party manifestos.
We find that the task can be efficiently solved by state-of-the-art models, with label aggregation producing the best results.
arXiv Detail & Related papers (2023-10-19T08:34:48Z) - Examining Political Rhetoric with Epistemic Stance Detection [13.829628375546568]
We develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling.
We demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books.
arXiv Detail & Related papers (2022-12-29T23:47:14Z) - Multi-aspect Multilingual and Cross-lingual Parliamentary Speech Analysis [1.759288298635146]
We apply advanced NLP methods to a joint and comparative analysis of six national parliaments between 2017 and 2020.
We analyze emotions and sentiment in the transcripts from the ParlaMint dataset collection.
The results show some commonalities and many surprising differences among the analyzed countries.
arXiv Detail & Related papers (2022-07-03T14:31:32Z) - Political Posters Identification with Appearance-Text Fusion [49.55696202606098]
We propose a method that efficiently utilizes appearance features and text vectors to accurately classify political posters.
Most of this work focuses on political posters designed to promote a particular political event.
arXiv Detail & Related papers (2020-12-19T16:14:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.