Entity-Based Evaluation of Political Bias in Automatic Summarization
- URL: http://arxiv.org/abs/2305.02321v2
- Date: Thu, 19 Oct 2023 18:15:34 GMT
- Title: Entity-Based Evaluation of Political Bias in Automatic Summarization
- Authors: Karen Zhou and Chenhao Tan
- Abstract summary: We use an entity replacement method to investigate the portrayal of politicians in automatically generated summaries of news articles.
We develop an entity-based computational framework to assess the sensitivities of several extractive and abstractive summarizers to the politicians Donald Trump and Joe Biden.
- Score: 27.68439481274954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Growing literature has shown that NLP systems may encode social biases;
however, the political bias of summarization models remains relatively unknown.
In this work, we use an entity replacement method to investigate the portrayal
of politicians in automatically generated summaries of news articles. We
develop an entity-based computational framework to assess the sensitivities of
several extractive and abstractive summarizers to the politicians Donald Trump
and Joe Biden. We find consistent differences in these summaries upon entity
replacement, such as reduced emphasis on Trump's presence in the context of the
same article and a more individualistic representation of Trump with respect to
the collective US government (i.e., administration). These summary
dissimilarities are most prominent when the entity is heavily featured in the
source article. Our characterization provides a foundation for future studies
of bias in summarization and for normative discussions on the ideal qualities
of automatic summaries.
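As a rough illustration of the entity replacement method described above, the sketch below swaps mentions of the two politicians in an article, summarizes both versions with a caller-supplied summarizer, and scores how much the two summaries diverge. The `summarize` callable and the string-overlap dissimilarity score are illustrative assumptions, not the paper's actual models or entity-based metrics.

```python
# Minimal sketch of the entity replacement probe. `summarize` is assumed to be
# any extractive or abstractive summarizer (hypothetical here); the
# string-overlap score below stands in for the paper's entity-based measures.
from difflib import SequenceMatcher

def swap_entities(text: str, a: str = "Donald Trump", b: str = "Joe Biden") -> str:
    """Replace every mention of politician `a` with `b` and vice versa."""
    placeholder = "\x00ENTITY\x00"
    return text.replace(a, placeholder).replace(b, a).replace(placeholder, b)

def summary_dissimilarity(article: str, summarize) -> float:
    """Summarize the original and the entity-swapped article, then compare.

    Returns a value in [0, 1]: 0 means the summarizer treats both entities
    identically; larger values indicate sensitivity to the replacement.
    """
    original = summarize(article)
    # Swap names back in the counterfactual summary so that only differences
    # in treatment, not the literal name change, drive the score.
    counterfactual = swap_entities(summarize(swap_entities(article)))
    return 1.0 - SequenceMatcher(None, original, counterfactual).ratio()
```

In practice, `summarize` could wrap any off-the-shelf summarization model, and dissimilarities would be aggregated over many articles, particularly those in which the entity is heavily featured, mirroring the finding above.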
Related papers
- Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models [72.89977583150748]
We propose a novel methodology to assess how Large Language Models align with broader geopolitical value systems.
We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin.
arXiv Detail & Related papers (2025-06-15T07:52:07Z)
- Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models [52.00270888041742]
We introduce a novel dataset with neutral event descriptions and contrasting viewpoints from different countries.
Our findings show significant geopolitical biases, with models favoring specific national narratives.
Simple debiasing prompts had a limited effect on reducing these biases.
arXiv Detail & Related papers (2025-06-07T10:45:17Z)
- BiasLab: Toward Explainable Political Bias Detection with Dual-Axis Annotations and Rationale Indicators [0.0]
BiasLab is a dataset of 300 political news articles annotated for perceived ideological bias.
Each article is labeled by crowdworkers along two independent scales, assessing sentiment toward the Democratic and Republican parties.
We quantify inter-annotator agreement, analyze misalignment with source-level outlet bias, and organize the resulting labels into interpretable subsets.
arXiv Detail & Related papers (2025-05-21T23:50:42Z)
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Understanding Position Bias Effects on Fairness in Social Multi-Document Summarization [1.9950682531209158]
We investigate the effect of group ordering in input documents when summarizing tweets from three linguistic communities.
Our results suggest that position bias manifests differently in social multi-document summarization.
arXiv Detail & Related papers (2024-05-03T00:19:31Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- P^3SUM: Preserving Author's Perspective in News Summarization with Diffusion Language Models [57.571395694391654]
We find that existing approaches alter the political opinions and stances of news articles in more than 50% of summaries.
We propose P^3SUM, a diffusion model-based summarization approach controlled by political perspective classifiers.
Experiments on three news summarization datasets demonstrate that P^3SUM outperforms state-of-the-art summarization systems.
arXiv Detail & Related papers (2023-11-16T10:14:28Z)
- Fair Abstractive Summarization of Diverse Perspectives [103.08300574459783]
A fair summary should provide a comprehensive coverage of diverse perspectives without underrepresenting certain groups.
We first formally define fairness in abstractive summarization as not underrepresenting perspectives of any groups of people.
We propose four reference-free automatic metrics by measuring the differences between target and source perspectives.
arXiv Detail & Related papers (2023-11-14T03:38:55Z)
- NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines that span the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z)
- Mitigating Media Bias through Neutral Article Generation [39.29914845102368]
Existing mitigation work displays articles from multiple news outlets to provide diverse news coverage, but without neutralizing the bias inherent in each of the displayed articles.
We propose a new task, generating a single neutralized article from multiple biased articles, to facilitate more efficient access to balanced and unbiased information.
arXiv Detail & Related papers (2021-04-01T08:37:26Z)
- Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models [16.279854003220418]
We investigate the impact of political ideology biases in training data.
Our work highlights the susceptibility of large, complex models to propagating the biases from human-selected input.
As a way to mitigate the bias, we propose to learn a text representation that is invariant to political ideology while still judging topic relevance.
arXiv Detail & Related papers (2020-11-29T05:54:03Z)
- Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity [35.19976910093135]
The research presented in this paper addresses not only the automatic detection of bias but also explores how political bias and unfairness are manifested linguistically.
We utilize a new corpus of 6964 news articles with labels derived from adfontesmedia.com and develop a neural model for bias assessment.
arXiv Detail & Related papers (2020-10-20T22:25:00Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.