The Table of Media Bias Elements: A sentence-level taxonomy of media bias types and propaganda techniques
- URL: http://arxiv.org/abs/2601.05358v1
- Date: Thu, 08 Jan 2026 20:18:55 GMT
- Title: The Table of Media Bias Elements: A sentence-level taxonomy of media bias types and propaganda techniques
- Authors: Tim Menzner, Jochen L. Leidner
- Abstract summary: We aim to shift the focus from where an outlet allegedly stands to how partiality is expressed in individual sentences. We iteratively combine close reading, interdisciplinary theory and pilot annotation to derive a fine-grained, sentence-level taxonomy of media bias and propaganda. The result is a two-tier schema comprising 38 elementary bias types, arranged in six functional families and visualised as a "table of media-bias elements".
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Public debates about "left-" or "right-wing" news overlook the fact that bias is usually conveyed by concrete linguistic manoeuvres that transcend any single political spectrum. We therefore shift the focus from where an outlet allegedly stands to how partiality is expressed in individual sentences. Drawing on 26,464 sentences collected from newsroom corpora, user submissions and our own browsing, we iteratively combine close-reading, interdisciplinary theory and pilot annotation to derive a fine-grained, sentence-level taxonomy of media bias and propaganda. The result is a two-tier schema comprising 38 elementary bias types, arranged in six functional families and visualised as a "table of media-bias elements". For each type we supply a definition, real-world examples, cognitive and societal drivers, and guidance for recognition. A quantitative survey of a random 155-sentence sample illustrates prevalence differences, while a cross-walk to the best-known NLP and communication-science taxonomies reveals substantial coverage gains and reduced ambiguity.
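The two-tier schema described in the abstract (elementary bias types grouped into functional families, each with a definition and examples) can be represented as a simple nested data structure. The family and type names below are hypothetical placeholders, not the paper's actual 38 types and six families:

```python
# Minimal sketch of a two-tier bias taxonomy: families -> elementary types.
# All family/type names and definitions here are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class BiasType:
    name: str
    definition: str
    examples: list = field(default_factory=list)

TAXONOMY = {
    "lexical-choice": [
        BiasType("loaded-language", "Emotionally charged wording"),
        BiasType("labeling", "Attaching evaluative labels to actors"),
    ],
    "source-selection": [
        BiasType("one-sided-sourcing", "Quoting only one side of a debate"),
    ],
    # ... remaining hypothetical families omitted
}

def lookup(type_name):
    """Return (family, BiasType) for a given elementary type name, or None."""
    for family, types in TAXONOMY.items():
        for t in types:
            if t.name == type_name:
                return family, t
    return None

family, t = lookup("loaded-language")
print(family, "->", t.definition)
```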
Related papers
- Tracing Partisan Bias to Its Emotional Fingerprints: A Computational Approach to Mitigation [15.247769531485426]
This study introduces a novel framework for analysing and mitigating media bias by tracing partisan stances to their linguistic roots in emotional language. We posit that partisan bias is not merely an abstract stance but materialises as quantifiable 'emotional fingerprints' within news texts. Our analysis of the Allsides dataset confirms this hypothesis, revealing distinct and statistically significant emotional fingerprints for left, centre, and right-leaning media. We then propose a computational approach to mitigation through NeutraSum, a model designed to neutralise these identified emotional patterns.
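An "emotional fingerprint" of the kind described above can be sketched as a normalised emotion-word frequency vector per outlet leaning. The tiny lexicon and sentences below are toy assumptions; the paper presumably uses a full emotion lexicon over the Allsides data:

```python
# Sketch: per-leaning "emotional fingerprints" as normalised emotion-word
# frequencies. Lexicon and sentences are toy examples, not the paper's data.
from collections import Counter

EMOTION_LEXICON = {            # hypothetical, tiny lexicon
    "outrage": "anger", "furious": "anger",
    "hope": "joy", "celebrate": "joy",
    "fear": "fear", "threat": "fear",
}

def fingerprint(sentences):
    """Map a list of sentences to a dict of emotion -> relative frequency."""
    counts, total = Counter(), 0
    for s in sentences:
        for tok in s.lower().split():
            emo = EMOTION_LEXICON.get(tok)
            if emo:
                counts[emo] += 1
                total += 1
    return {e: c / total for e, c in counts.items()} if total else {}

left = fingerprint(["They celebrate this hope for reform"])
right = fingerprint(["A furious response to the threat"])
print(left, right)
```

Comparing such vectors across leanings (e.g. with a significance test) is one way the "distinct fingerprints" claim could be operationalised.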
arXiv Detail & Related papers (2025-01-02T14:48:07Z)
- Discovering and Mitigating Visual Biases through Keyword Explanation [66.71792624377069]
We propose the Bias-to-Text (B2T) framework, which interprets visual biases as keywords.
B2T can identify known biases, such as gender bias in CelebA, background bias in Waterbirds, and distribution shifts in ImageNet-R/C.
B2T uncovers novel biases in larger datasets, such as Dollar Street and ImageNet.
arXiv Detail & Related papers (2023-01-26T13:58:46Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Neural Media Bias Detection Using Distant Supervision With BABE -- Bias Annotations By Experts [24.51774048437496]
This paper presents BABE, a robust and diverse data set for media bias research.
It consists of 3,700 sentences balanced among topics and outlets, containing media bias labels on the word and sentence level.
Based on our data, we also introduce a way to detect bias-inducing sentences in news articles automatically.
arXiv Detail & Related papers (2022-09-29T05:32:55Z)
- NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines spanning the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes for social bias in downstream tasks.
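The abstract does not state the SAME formula, but the family of cosine-based bias scores it improves on can be sketched as a signed association between a word vector and two attribute sets. The 3-d vectors below are toy examples, and this is explicitly not the SAME definition itself:

```python
# Generic cosine-based bias score, in the spirit of the scores SAME builds
# on. NOT the SAME formula (the abstract does not give it); toy 3-d vectors.
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bias_score(w, attr_a, attr_b):
    """Signed association of word vector w with attribute sets A vs B."""
    mean = lambda vs: [sum(x) / len(vs) for x in zip(*vs)]
    return cos(w, mean(attr_a)) - cos(w, mean(attr_b))

w = [1.0, 0.2, 0.0]            # toy target-word vector
A = [[1.0, 0.0, 0.0]]          # e.g. vectors for one attribute group
B = [[0.0, 1.0, 0.0]]          # e.g. vectors for the other group
print(round(bias_score(w, A, B), 3))
```

A score near zero indicates no measured association; the sign indicates which attribute set the word leans towards.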
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- Identification of Biased Terms in News Articles by Comparison of Outlet-specific Word Embeddings [9.379650501033465]
We train two word embedding models, one on texts of left-wing, the other on right-wing news outlets.
Our hypothesis is that a word's representations in both word embedding spaces are more similar for non-biased words than biased words.
This paper presents the first in-depth look at the context of bias words measured by word embeddings.
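The comparison above requires the two outlet-specific embedding spaces to be aligned before word vectors can be compared. One standard way to do this (my assumption, not necessarily the paper's exact procedure) is an orthogonal Procrustes alignment over shared anchor words, followed by cross-space cosine similarity; words with low similarity are candidate bias terms:

```python
# Sketch: align left-outlet and right-outlet embedding spaces via
# orthogonal Procrustes, then score words by cross-space cosine similarity.
# Toy 2-d vectors; the right space is the left space rotated 90 degrees.
import numpy as np

def procrustes(X, Y):
    """Orthogonal map W minimising ||XW - Y||_F (X, Y: n_words x dim)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def cross_space_similarity(vec_left, vec_right, W):
    a, b = vec_left @ W, vec_right
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

anchors_left = np.array([[1.0, 0.0], [0.0, 1.0]])
anchors_right = np.array([[0.0, 1.0], [-1.0, 0.0]])
W = procrustes(anchors_left, anchors_right)

neutral = cross_space_similarity(np.array([1.0, 1.0]),
                                 np.array([-1.0, 1.0]), W)
print(round(neutral, 3))   # close to 1.0: word used alike in both spaces
```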
arXiv Detail & Related papers (2021-12-14T13:23:49Z)
- Machine-Learning media bias [0.0]
Inferring which newspaper published a given article leads to a conditional probability distribution whose analysis lets us automatically map newspapers into a bias space.
By analyzing roughly a million articles from roughly a hundred newspapers for bias in dozens of news topics, our method maps newspapers into a two-dimensional bias landscape that agrees well with previous bias classifications based on human judgement.
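A heavily simplified stand-in for this mapping idea: represent each newspaper by its phrase-usage distribution and project the distributions into a low-dimensional "bias space" via SVD/PCA. The paper's actual method works from the conditional distribution over newspapers given an article; the counts below are toy data:

```python
# Sketch: project newspapers into a 2-d space from phrase-usage profiles.
# A simplified stand-in for the paper's conditional-probability method.
import numpy as np

# toy counts: rows = newspapers, cols = phrases
counts = np.array([
    [30.0,  2.0, 10.0],   # outlet A
    [28.0,  3.0, 11.0],   # outlet B (similar vocabulary to A)
    [ 2.0, 31.0,  9.0],   # outlet C (different vocabulary)
])
profiles = counts / counts.sum(axis=1, keepdims=True)   # P(phrase | paper)
centered = profiles - profiles.mean(axis=0)

U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = U[:, :2] * S[:2]          # 2-d coordinates per newspaper

# similar outlets should land close together in the projected space
d_ab = np.linalg.norm(coords[0] - coords[1])
d_ac = np.linalg.norm(coords[0] - coords[2])
print(d_ab < d_ac)
```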
arXiv Detail & Related papers (2021-08-31T18:06:32Z)
- Enabling News Consumers to View and Understand Biased News Coverage: A Study on the Perception and Visualization of Media Bias [7.092487352312782]
We create three manually annotated datasets and test varying visualization strategies.
Results show no strong effect of bias awareness in the treatment groups compared to the control group.
Using a multilevel model, we find that perceived journalist bias is significantly related to perceived political extremeness and impartiality of the article.
arXiv Detail & Related papers (2021-05-20T10:16:54Z)
- Detecting Media Bias in News Articles using Gaussian Bias Distributions [35.19976910093135]
We study how second-order information about biased statements in an article helps to improve detection effectiveness.
On an existing media bias dataset, we find that the frequency and positions of biased statements strongly impact article-level bias.
Using a standard model for sentence-level bias detection, we provide empirical evidence that article-level bias detectors that use second-order information clearly outperform those without.
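The "second-order information" described above (frequency and positions of biased sentences) can be sketched as article-level features aggregated from sentence-level bias probabilities. The feature names and the 0.5 threshold are my assumptions, not the paper's exact design:

```python
# Sketch: aggregate sentence-level bias probabilities into article-level
# second-order features (frequency and position of biased sentences).
def second_order_features(sentence_probs, threshold=0.5):
    n = len(sentence_probs)
    biased_idx = [i for i, p in enumerate(sentence_probs) if p >= threshold]
    freq = len(biased_idx) / n                    # fraction of biased sentences
    first = biased_idx[0] / (n - 1) if biased_idx and n > 1 else 1.0
    mean_pos = (sum(biased_idx) / len(biased_idx) / (n - 1)
                if biased_idx and n > 1 else 1.0)
    return {"bias_frequency": freq,
            "first_biased_position": first,
            "mean_biased_position": mean_pos}

# e.g. an article whose opening sentences are biased
feats = second_order_features([0.9, 0.8, 0.2, 0.1, 0.3])
print(feats)
```

An article-level detector would feed such features to a classifier alongside (or instead of) raw text.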
arXiv Detail & Related papers (2020-10-20T22:20:49Z)
- Towards Debiasing Sentence Representations [109.70181221796469]
We show that Sent-Debias is effective in removing biases, and at the same time, preserves performance on sentence-level downstream tasks.
We hope that our work will inspire future research on characterizing and removing social biases from widely adopted sentence representations for fairer NLP.
arXiv Detail & Related papers (2020-07-16T04:22:30Z)
- Towards Controllable Biases in Language Generation [87.89632038677912]
We develop a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups.
We analyze two scenarios: 1) inducing negative biases for one demographic and positive biases for another demographic, and 2) equalizing biases between demographics.
arXiv Detail & Related papers (2020-05-01T08:25:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.