Diachronic Analysis of German Parliamentary Proceedings: Ideological
Shifts through the Lens of Political Biases
- URL: http://arxiv.org/abs/2108.06295v1
- Date: Fri, 13 Aug 2021 15:58:07 GMT
- Title: Diachronic Analysis of German Parliamentary Proceedings: Ideological
Shifts through the Lens of Political Biases
- Authors: Tobias Walter, Celina Kirschner, Steffen Eger, Goran Glavaš, Anne
Lauscher, Simone Paolo Ponzetto
- Abstract summary: We analyze bias in historical corpora by focusing on two specific forms of bias, namely a political (i.e., anti-communism) and a racist (i.e., antisemitism) one.
We complement this analysis of historical biases in diachronic word embeddings with a novel measure of bias on the basis of term co-occurrences and graph-based label propagation.
The results of our bias measurements align with commonly perceived historical trends of antisemitic and anti-communist biases in German politics in different time periods.
- Score: 18.38810381745439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We analyze bias in historical corpora as encoded in diachronic distributional
semantic models by focusing on two specific forms of bias, namely a political
(i.e., anti-communism) and racist (i.e., antisemitism) one. For this, we use a
new corpus of German parliamentary proceedings, DeuPARL, spanning the period
1867--2020. We complement this analysis of historical biases in diachronic word
embeddings with a novel measure of bias on the basis of term co-occurrences and
graph-based label propagation. The results of our bias measurements align with
commonly perceived historical trends of antisemitic and anti-communist biases
in German politics in different time periods, thus indicating the viability of
analyzing historical bias trends using semantic spaces induced from historical
corpora.
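The abstract's co-occurrence-based measure can be illustrated with a minimal sketch of classic graph-based label propagation (the toy graph, seed scores, and term indices below are hypothetical, not the paper's DeuPARL data or its exact algorithm): polarity scores fixed at a few seed terms diffuse over a term co-occurrence graph until the remaining terms receive scores.

```python
import numpy as np

def propagate(adj, seeds, iters=100):
    """adj: symmetric term co-occurrence matrix (n x n);
    seeds: dict mapping term index -> fixed polarity score in [-1, 1]."""
    P = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    f = np.zeros(adj.shape[0])
    for i, s in seeds.items():
        f[i] = s
    for _ in range(iters):
        f = P @ f                      # diffuse scores to neighbors
        for i, s in seeds.items():     # clamp seed nodes to their labels
            f[i] = s
    return f

# Toy graph: term 0 (negative seed) -- term 1 -- term 2 (positive seed)
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
scores = propagate(adj, {0: -1.0, 2: 1.0})
print(scores)  # term 1 sits between its two seeded neighbors
```

The unlabeled middle term converges to a score balancing its neighbors' polarities; on a real co-occurrence graph this yields a bias score per term.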
Related papers
- Uncovering Political Bias in Emotion Inference Models: Implications for sentiment analysis in social science research [0.0]
This paper investigates the presence of political bias in machine learning models used for sentiment analysis (SA) in social science research.
We conducted a bias audit on a Polish sentiment analysis model developed in our lab.
Our findings indicate that annotations by human raters propagate political biases into the model's predictions.
arXiv Detail & Related papers (2024-07-18T20:31:07Z)
- Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information [50.29934517930506]
DAFair is a novel approach to address social bias in language models.
We leverage prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias.
arXiv Detail & Related papers (2024-03-14T15:58:36Z)
- Cognitive bias in large language models: Cautious optimism meets anti-Panglossian meliorism [0.0]
Traditional discussions of bias in large language models focus on a conception of bias closely tied to unfairness.
Recent work raises the novel possibility of assessing the outputs of large language models for a range of cognitive biases.
I draw out philosophical implications of this discussion for the rationality of human cognitive biases as well as the role of unrepresentative data in driving model biases.
arXiv Detail & Related papers (2023-11-18T01:58:23Z)
- Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z)
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
- Measuring Intersectional Biases in Historical Documents [37.03904311548859]
We investigate the continuities and transformations of bias in historical newspapers published in the Caribbean during the colonial era (18th to 19th centuries).
Our analyses are performed along the axes of gender, race, and their intersection.
We find that there is a trade-off between the stability of the word embeddings and their compatibility with the historical dataset.
arXiv Detail & Related papers (2023-05-21T07:10:31Z)
- NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines spanning the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes for social bias in downstream tasks.
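To give a flavor of cosine-based bias scoring, here is a simplified sketch, not the exact SAME formulation (see the paper for that): a word vector is compared against the "bias direction" between the centroids of two attribute word sets. All vectors below are hypothetical 2-d toy values.

```python
import numpy as np

def cosine_bias(word_vec, attrs_a, attrs_b):
    """Cosine of a word vector with the difference of the two
    attribute-set centroids; positive means closer to set A."""
    direction = np.mean(attrs_a, axis=0) - np.mean(attrs_b, axis=0)
    return float(np.dot(word_vec, direction)
                 / (np.linalg.norm(word_vec) * np.linalg.norm(direction)))

w = np.array([1.0, 0.0])                  # word aligned with set A
a = np.array([[1.0, 0.0], [0.9, 0.1]])    # attribute set A
b = np.array([[0.0, 1.0], [0.1, 0.9]])    # attribute set B
print(round(cosine_bias(w, a, b), 3))     # positive: w leans toward A
```

A word aligned with set B instead would receive the mirrored negative score.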
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
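A minimal illustration of instance reweighting (a generic balancing scheme, not necessarily the paper's exact method; the labels and groups below are made up): each training instance is weighted inversely to the frequency of its (class label, demographic group) pair, so rare combinations contribute equally to the loss.

```python
from collections import Counter

def balancing_weights(labels, groups):
    """Weight each instance by n / (k * count of its (label, group)
    pair), where k is the number of distinct pairs; weights sum to n."""
    counts = Counter(zip(labels, groups))
    n, k = len(labels), len(counts)
    return [n / (k * counts[(y, g)]) for y, g in zip(labels, groups)]

# The rarer (label, group) combinations receive larger weights.
w = balancing_weights(["pos", "pos", "neg", "neg"],
                      ["m",   "m",   "m",   "f"])
print(w)
```

Passing these weights to a weighted loss (e.g. `sample_weight` in many libraries) upweights under-represented author demographics during training.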
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models [16.279854003220418]
We investigate the impact of political ideology biases in training data.
Our work highlights the susceptibility of large, complex models to propagating the biases from human-selected input.
As a way to mitigate the bias, we propose to learn a text representation that is invariant to political ideology while still judging topic relevance.
arXiv Detail & Related papers (2020-11-29T05:54:03Z)
- Towards Controllable Biases in Language Generation [87.89632038677912]
We develop a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups.
We analyze two scenarios: 1) inducing negative biases for one demographic and positive biases for another demographic, and 2) equalizing biases between demographics.
arXiv Detail & Related papers (2020-05-01T08:25:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.