Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs
- URL: http://arxiv.org/abs/2403.13592v2
- Date: Fri, 22 Mar 2024 13:37:28 GMT
- Title: Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs
- Authors: Ilias Chalkidis, Stephanie Brandl
- Abstract summary: We audit Llama Chat in the context of EU politics to analyze the model's political knowledge and its ability to reason in context.
We adapt, i.e., further fine-tune, Llama Chat on speeches of individual euro-parties from debates in the European Parliament to reevaluate its political leaning.
- Score: 18.836470390824633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instruction-finetuned Large Language Models inherit clear political leanings that have been shown to influence downstream task performance. We expand this line of research beyond the two-party system in the US and audit Llama Chat in the context of EU politics in various settings to analyze the model's political knowledge and its ability to reason in context. We adapt, i.e., further fine-tune, Llama Chat on speeches of individual euro-parties from debates in the European Parliament to reevaluate its political leaning based on the EUandI questionnaire. Llama Chat shows considerable knowledge of national parties' positions and is capable of reasoning in context. The adapted, party-specific models are substantially re-aligned towards their respective positions, which we see as a starting point for using chat-based LLMs as data-driven conversational engines to assist research in political science.
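For orientation, a minimal sketch of what the questionnaire-based audit could look like in code is given below: a chat-tuned Llama model is prompted with a EUandI-style statement and asked for an agree/disagree answer. The model ID, the statement, and the answer scale are illustrative placeholders rather than the paper's exact protocol; the party-adaptation step would additionally fine-tune the model on euro-party speeches (e.g., with parameter-efficient methods) before re-running the questionnaire.
```python
# Minimal sketch of a questionnaire-style political audit of a chat LLM.
# Model ID, statement, and answer scale are placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-13b-chat-hf"  # placeholder; any chat model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

statement = "The EU should have more say in member states' tax policy."  # illustrative item
messages = [
    {"role": "user",
     "content": f'Statement: "{statement}"\n'
                "Do you completely agree, tend to agree, tend to disagree, "
                "or completely disagree? Answer briefly and explain why."}
]

# apply_chat_template wraps the conversation in the model's chat markup.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```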
Related papers
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- SpeakGer: A meta-data enriched speech corpus of German state and federal parliaments [0.12277343096128711]
We provide the SpeakGer data set, consisting of German parliament debates from all 16 federal states as well as the German Bundestag, covering 1947 to 2023.
The data set includes rich metadata on both audience reactions to each speech and on the speaker's party, age, constituency, and their party's political alignment.
arXiv Detail & Related papers (2024-10-23T14:00:48Z)
- LLM-POTUS Score: A Framework of Analyzing Presidential Debates with Large Language Models [33.251235538905895]
This paper introduces a novel approach to evaluating presidential debate performances using large language models.
We propose a framework that analyzes candidates' "Policies, Persona, and Perspective" (3P) and how they resonate with the "Interests, Ideologies, and Identity" (3I) of four key audience groups.
Our method employs large language models to generate the LLM-POTUS Score, a quantitative measure of debate performance.
arXiv Detail & Related papers (2024-09-12T15:40:45Z)
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
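As a rough illustration of this kind of sample simulation, the sketch below builds a persona prompt from a few survey-style fields and asks a chat model for a vote choice. The field names, prompt wording, and sampling settings are assumptions for illustration, not the study's actual protocol.
```python
# Hedged sketch of LLM-based sample simulation: condition a chat model on a
# respondent profile and ask for a vote choice. Fields and wording are invented.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

respondent = {  # hypothetical ANES-style profile
    "age": 45, "gender": "female", "education": "college degree",
    "state": "Ohio", "ideology": "moderate",
}

prompt = (
    "You are answering a political survey as the following person: "
    + ", ".join(f"{k}: {v}" for k, v in respondent.items())
    + ". In the last presidential election, which party's candidate did you "
      "vote for? Answer with a single party name."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```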
- L(u)PIN: LLM-based Political Ideology Nowcasting [1.124958340749622]
We present a method to analyze ideological positions of individual parliamentary representatives by leveraging the latent knowledge of LLMs.
The method lets us place politicians on an axis of our choice, so that their stance on any topic or controversy of interest can be measured flexibly.
arXiv Detail & Related papers (2024-05-12T16:14:07Z)
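One simple way to realize such an axis, shown purely as an illustrative stand-in rather than the paper's pipeline, is to embed politicians' statements and project them onto the line between two opposing reference statements:
```python
# Illustrative stand-in for axis-based stance scoring: embed speeches and project
# them onto the axis spanned by two opposing reference statements (hypothetical).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Two hand-written poles define the axis (hypothetical wording).
pro = model.encode("The state should expand public welfare programs.")
con = model.encode("The state should cut public spending and lower taxes.")
axis = pro - con
axis /= np.linalg.norm(axis)

speeches = {  # hypothetical snippets attributed to politicians
    "MP A": "We must invest more in healthcare and social housing.",
    "MP B": "Fiscal discipline and tax relief should be our priority.",
}

for name, text in speeches.items():
    score = float(np.dot(model.encode(text), axis))  # >0 leans towards the 'pro' pole
    print(f"{name}: {score:+.3f}")
```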
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Modelling Political Coalition Negotiations Using LLM-based Agents [53.934372246390495]
We introduce coalition negotiations as a novel NLP task, and model it as a negotiation between large language model-based agents.
We introduce a multilingual dataset, POLCA, comprising manifestos of European political parties and coalition agreements over a number of elections in these countries.
We propose a hierarchical Markov decision process designed to simulate the process of coalition negotiation between political parties and predict the outcomes.
arXiv Detail & Related papers (2024-02-18T21:28:06Z)
- Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z)
- Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval [62.82448161570428]
This dataset is designed to investigate fairness in a multilingual information retrieval context.
It boasts an authentic multilingual corpus, featuring topics translated into all 24 languages.
It offers rich demographic information associated with its documents, facilitating the study of demographic bias.
arXiv Detail & Related papers (2023-11-03T12:29:11Z)
- PAR: Political Actor Representation Learning with Social Context and Expert Knowledge [45.215862050840116]
We propose PAR, a Political Actor Representation learning framework.
We retrieve and extract factual statements about legislators to leverage social context information.
We then construct a heterogeneous information network to incorporate social context and use relational graph neural networks to learn legislator representations.
arXiv Detail & Related papers (2022-10-15T19:28:06Z)
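The relational-GNN step can be pictured with a toy sketch like the one below: a small graph of legislators and context nodes with typed edges is passed through two RGCN layers. Graph size, relation types, and dimensions are invented for illustration; this is not the released PAR model.
```python
# Toy sketch of learning legislator representations with a relational GNN
# (PyTorch Geometric). Graph structure and dimensions are made up.
import torch
from torch_geometric.nn import RGCNConv

num_nodes = 6          # e.g., 4 legislators + 2 context nodes (party, committee)
num_relations = 3      # e.g., member-of, co-sponsors, mentioned-in-statement
x = torch.randn(num_nodes, 16)                    # initial node features

edge_index = torch.tensor([[0, 1, 2, 3, 0, 2],    # source nodes
                           [4, 4, 5, 5, 1, 3]])   # target nodes
edge_type = torch.tensor([0, 0, 0, 0, 1, 2])      # relation id per edge

conv1 = RGCNConv(16, 32, num_relations)
conv2 = RGCNConv(32, 32, num_relations)

h = torch.relu(conv1(x, edge_index, edge_type))
node_repr = conv2(h, edge_index, edge_type)       # rows 0-3: legislator embeddings
print(node_repr.shape)                            # torch.Size([6, 32])
```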
- Multi-aspect Multilingual and Cross-lingual Parliamentary Speech Analysis [1.759288298635146]
We apply advanced NLP methods to a joint and comparative analysis of six national parliaments between 2017 and 2020.
We analyze emotions and sentiment in the transcripts from the ParlaMint dataset collection.
The results show some commonalities and many surprising differences among the analyzed countries.
arXiv Detail & Related papers (2022-07-03T14:31:32Z)
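A minimal version of the sentiment side of such an analysis might look like the sketch below, which runs an off-the-shelf multilingual sentiment classifier over short speech snippets. The model choice and the example sentences are assumptions; the paper works on the full ParlaMint transcripts with its own methodology.
```python
# Minimal sketch of multilingual sentiment scoring over parliamentary snippets.
# The classifier and the example sentences are illustrative choices only.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",  # multilingual model
)

snippets = {  # hypothetical one-line excerpts in different languages
    "UK": "This bill is a disgrace and an insult to working families.",
    "SI": "Ta predlog zakona prinaša pomembne izboljšave za državljane.",  # Slovenian: "This bill brings important improvements for citizens."
}

for country, text in snippets.items():
    result = sentiment(text)[0]
    print(f"{country}: {result['label']} ({result['score']:.2f})")
```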