Voting Booklet Bias: Stance Detection in Swiss Federal Communication
- URL: http://arxiv.org/abs/2306.08999v1
- Date: Thu, 15 Jun 2023 09:49:12 GMT
- Title: Voting Booklet Bias: Stance Detection in Swiss Federal Communication
- Authors: Eric Egli, Noah Mamié, Eyal Liron Dolev and Mathias Müller
- Abstract summary: We use recent stance detection methods to study the stance (for, against or neutral) of statements in official information booklets for voters.
Our main goal is to answer the fundamental question: are topics to be voted on presented in a neutral way?
Our findings have implications for the editorial process of future voting booklets and the design of better automated systems for analyzing political discourse.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we use recent stance detection methods to study the stance
(for, against or neutral) of statements in official information booklets for
voters. Our main goal is to answer the fundamental question: are topics to be
voted on presented in a neutral way?
To this end, we first train and compare several models for stance detection
on a large dataset about Swiss politics. We find that fine-tuning an M-BERT
model leads to the best accuracy. We then use our best model to analyze the
stance of utterances extracted from the Swiss federal voting booklet concerning
the Swiss popular votes of September 2022, which is the main goal of this
project.
We evaluate the models in both multilingual and monolingual settings for German,
French, and Italian. Our analysis shows that some issues
are heavily favored while others are more balanced, and that the results are
largely consistent across languages.
Our findings have implications for the editorial process of future voting
booklets and the design of better automated systems for analyzing political
discourse. The data and code accompanying this paper are available at
https://github.com/ZurichNLP/voting-booklet-bias.
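As a concrete illustration of the approach described in the abstract, the following is a minimal sketch of fine-tuning multilingual BERT (M-BERT) for three-class stance detection (against / neutral / favor) with Hugging Face transformers. This is not the authors' released code (see the repository above); the toy examples, label set, and hyperparameters are assumptions for illustration only.

```python
# Minimal sketch: fine-tune multilingual BERT for 3-class stance detection.
# Dataset contents and hyperparameters are illustrative, not the paper's setup.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

LABELS = {"against": 0, "neutral": 1, "favor": 2}

# Toy examples; the paper trains on a large Swiss-politics corpus instead.
train = Dataset.from_dict({
    "text": ["Die Vorlage stärkt die AHV.", "Cette initiative coûte trop cher."],
    "label": [LABELS["favor"], LABELS["against"]],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

args = TrainingArguments(output_dir="stance-mbert", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train).train()
```

The trained classifier can then be applied to utterances extracted from the voting booklet to estimate how often each side of an issue is represented.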
Related papers
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Aligning Large Language Models with Diverse Political Viewpoints [4.783050743764643]
Large language models such as ChatGPT exhibit striking political biases.
To overcome this, we align LLMs with diverse political viewpoints from 100,000 comments written by candidates running for national parliament in Switzerland.
Models aligned with this data can generate more accurate political viewpoints from Swiss parties, compared to commercial models such as ChatGPT.
arXiv Detail & Related papers (2024-06-20T09:53:23Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political alignment of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference [48.99117537559644]
We introduce Chatbot Arena, an open platform for evaluating Large Language Models (LLMs) based on human preferences.
Our methodology employs a pairwise comparison approach and leverages input from a diverse user base through crowdsourcing.
This paper describes the platform, analyzes the data we have collected so far, and explains the tried-and-true statistical methods we are using.
arXiv Detail & Related papers (2024-03-07T01:22:38Z)
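Pairwise comparison data of the kind collected by Chatbot Arena is commonly summarized with a Bradley-Terry model. The sketch below fits such a model to made-up vote records with simple minorization-maximization updates; it is an illustration of the statistical idea, not the platform's actual implementation.

```python
# Illustrative Bradley-Terry fit on (winner, loser) pairs from pairwise votes.
# The vote data are invented for this example.
from collections import Counter

votes = [("model_a", "model_b"), ("model_a", "model_c"),
         ("model_b", "model_c"), ("model_a", "model_b")]

models = sorted({m for pair in votes for m in pair})
wins = Counter(winner for winner, _ in votes)
games = Counter(frozenset(pair) for pair in votes)

# Simple minorization-maximization iterations for Bradley-Terry strengths.
strength = {m: 1.0 for m in models}
for _ in range(200):
    new = {}
    for i in models:
        denom = sum(games[frozenset((i, j))] / (strength[i] + strength[j])
                    for j in models if j != i)
        new[i] = wins[i] / denom if denom else strength[i]
    total = sum(new.values())
    strength = {m: s / total for m, s in new.items()}

print(sorted(strength.items(), key=lambda kv: -kv[1]))  # ranking by strength
```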
- What Do Llamas Really Think? Revealing Preference Biases in Language Model Representations [62.91799637259657]
Do large language models (LLMs) exhibit sociodemographic biases, even when they decline to respond?
We study this research question by probing contextualized embeddings and exploring whether this bias is encoded in their latent representations.
We propose a logistic Bradley-Terry probe which predicts word pair preferences of LLMs from the words' hidden vectors.
arXiv Detail & Related papers (2023-11-30T18:53:13Z)
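The logistic Bradley-Terry probe mentioned in the entry above amounts to logistic regression on the difference between the two words' hidden vectors. The following sketch uses synthetic vectors and labels purely to illustrate that reduction; it is not the paper's setup.

```python
# Sketch of a logistic Bradley-Terry-style probe over hidden vectors.
# Vectors and preference labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, dim = 500, 768
h_a = rng.normal(size=(n, dim))   # hidden vectors of the first word in each pair
h_b = rng.normal(size=(n, dim))   # hidden vectors of the second word
prefers_a = (h_a[:, 0] > h_b[:, 0]).astype(int)  # synthetic preference labels

# A Bradley-Terry model with linear scores s(h) = w.h reduces to logistic
# regression on the feature difference: P(A preferred) = sigmoid(w.(h_a - h_b)).
probe = LogisticRegression(max_iter=1000).fit(h_a - h_b, prefers_a)
print("probe training accuracy:", probe.score(h_a - h_b, prefers_a))
```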
- Evaluating and Modeling Attribution for Cross-Lingual Question Answering [80.4807682093432]
This work is the first to study attribution for cross-lingual question answering.
We collect data in 5 languages to assess the attribution level of a state-of-the-art cross-lingual QA system.
We find that a substantial portion of the answers is not attributable to any retrieved passages.
arXiv Detail & Related papers (2023-05-23T17:57:46Z)
- A Commonsense-Infused Language-Agnostic Learning Framework for Enhancing Prediction of Political Polarity in Multilingual News Headlines [0.0]
We use the method of translation and retrieval to acquire the inferential knowledge in the target language.
We then employ an attention mechanism to emphasise important inferences.
We present a dataset of over 62.6K multilingual news headlines in five European languages annotated with their respective political polarities.
arXiv Detail & Related papers (2022-12-01T06:07:01Z)
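The attention mechanism over retrieved inferences described in the entry above can be illustrated with plain scaled dot-product attention pooling: the headline acts as the query and the retrieved inferences are scored and weighted by relevance. The sketch below uses random vectors and is an assumption-laden illustration, not the paper's architecture.

```python
# Sketch: attention pooling over retrieved commonsense inferences.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
dim = 64
headline = rng.normal(size=dim)          # headline representation (the query)
inferences = rng.normal(size=(5, dim))   # retrieved inference representations

scores = inferences @ headline / np.sqrt(dim)  # scaled dot-product relevance scores
weights = softmax(scores)                      # attention weights over inferences
context = weights @ inferences                 # weighted summary used downstream
print(weights.round(3))
```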
- A Spanish dataset for Targeted Sentiment Analysis of political headlines [0.0]
This work addresses the task of Targeted Sentiment Analysis for the domain of news headlines, published by the main outlets during the 2019 Argentinean Presidential Elections.
We present a polarity dataset of 1,976 headlines mentioning candidates in the 2019 elections at the target level.
Preliminary experiments with state-of-the-art classification algorithms based on pre-trained linguistic models suggest that target information is helpful for this task.
arXiv Detail & Related papers (2022-08-30T01:30:30Z)
- Mundus vult decipi, ergo decipiatur: Visual Communication of Uncertainty in Election Polls [56.8172499765118]
We discuss potential sources of bias in nowcasting and forecasting.
Concepts are presented to attenuate the issue of falsely perceived accuracy.
One key idea is the use of Probabilities of Events instead of party shares.
arXiv Detail & Related papers (2021-04-28T07:02:24Z)
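The "Probabilities of Events instead of party shares" idea in the entry above can be illustrated with a small Monte Carlo computation: sample plausible share vectors from a posterior over the poll and report the probability of an event of interest. The poll counts, party names, and Dirichlet posterior below are assumptions chosen for illustration only.

```python
# Sketch: report probabilities of events rather than raw party shares.
import numpy as np

rng = np.random.default_rng(42)
parties = ["A", "B", "C", "Other"]
poll_counts = np.array([412, 389, 110, 89])        # respondents per party (invented)

# Posterior over true shares with a uniform Dirichlet prior.
samples = rng.dirichlet(poll_counts + 1, size=100_000)

p_a_leads = (samples[:, 0] > samples[:, 1]).mean()   # P(A is ahead of B)
p_c_above_5 = (samples[:, 2] > 0.05).mean()          # P(C clears a 5% hurdle)
print(f"P(A > B) = {p_a_leads:.2f}, P(C > 5%) = {p_c_above_5:.2f}")
```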
- SwissDial: Parallel Multidialectal Corpus of Spoken Swiss German [22.30271453485001]
We introduce the first annotated parallel corpus of spoken Swiss German across 8 major dialects, plus a Standard German reference.
Our goal has been to create and to make available a basic dataset for employing data-driven NLP applications in Swiss German.
arXiv Detail & Related papers (2021-03-21T14:00:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.