Exploiting Transformer-based Multitask Learning for the Detection of
Media Bias in News Articles
- URL: http://arxiv.org/abs/2211.03491v1
- Date: Mon, 7 Nov 2022 12:22:31 GMT
- Title: Exploiting Transformer-based Multitask Learning for the Detection of
Media Bias in News Articles
- Authors: Timo Spinde, Jan-David Krieger, Terry Ruas, Jelena Mitrović, Franz Götz-Hahn, Akiko Aizawa, and Bela Gipp
- Abstract summary: We propose a Transformer-based deep learning architecture trained via Multi-Task Learning to detect media bias.
Our best-performing implementation achieves a macro $F_1$ of 0.776, a performance boost of 3% compared to our baseline, outperforming existing methods.
- Score: 21.960154864540282
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Media has a substantial impact on the public perception of events. A
one-sided or polarizing perspective on any topic is usually described as media
bias. One way bias can be introduced into news articles is by altering word
choice. Biased word choices are not always obvious, nor do they
exhibit high context-dependency. Hence, detecting bias is often difficult. We
propose a Transformer-based deep learning architecture trained via Multi-Task
Learning using six bias-related data sets to tackle the media bias detection
problem. Our best-performing implementation achieves a macro $F_{1}$ of 0.776,
a performance boost of 3% compared to our baseline, outperforming existing
methods. Our results indicate that Multi-Task Learning is a promising
alternative for improving existing baseline models in identifying slanted
reporting.
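
To make the described setup concrete, the following minimal sketch shows a shared Transformer encoder with one classification head per bias-related task, trained jointly and evaluated with macro F1. The encoder name, the three placeholder tasks standing in for the six data sets, the equal loss weighting, and the fixed task order are illustrative assumptions, not the authors' exact configuration.

    # Minimal multi-task learning sketch: shared encoder, per-task heads.
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer
    from sklearn.metrics import f1_score


    class MultiTaskBiasModel(nn.Module):
        """Shared encoder with one linear classification head per task."""

        def __init__(self, encoder_name, task_num_labels):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size
            # Task-specific heads on top of the shared [CLS] representation.
            self.heads = nn.ModuleDict(
                {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
            )

        def forward(self, task, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]   # [CLS] token embedding
            return self.heads[task](cls)        # logits for the requested task


    # Three placeholder tasks standing in for the six bias-related data sets.
    tasks = {"media_bias": 2, "subjectivity": 2, "hate_speech": 2}
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = MultiTaskBiasModel("bert-base-uncased", tasks)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step per task; real training would iterate over
    # batches drawn from each data set (e.g. in round-robin order) for many epochs.
    for task in tasks:
        batch = tokenizer(["An example sentence."], return_tensors="pt",
                          padding=True, truncation=True)
        labels = torch.tensor([0])
        logits = model(task, batch["input_ids"], batch["attention_mask"])
        loss = loss_fn(logits, labels)          # task losses weighted equally here
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # The target task (sentence-level media bias) is evaluated with macro F1.
    gold, pred = [0, 1, 0, 0], [0, 1, 1, 0]     # placeholder labels/predictions
    print(f1_score(gold, pred, average="macro"))

In a multi-task setup of this kind, the choice of auxiliary tasks, the sampling schedule, and the loss weighting are the main design levers; the sketch fixes all of them to the simplest option.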
Related papers
- Mapping the Media Landscape: Predicting Factual Reporting and Political Bias Through Web Interactions [0.7249731529275342]
We propose an extension to a recently presented news media reliability estimation method.
We assess the classification performance of four reinforcement learning strategies on a large news media hyperlink graph.
Our experiments, targeting two challenging bias descriptors, factual reporting and political bias, showed a significant performance improvement at the source media level.
arXiv Detail & Related papers (2024-10-23T08:18:26Z)
- Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
arXiv Detail & Related papers (2024-03-15T02:04:35Z)
- Mitigating Bias for Question Answering Models by Tracking Bias Influence [84.66462028537475]
We propose BMBI, an approach to mitigate the bias of multiple-choice QA models.
Based on the intuition that a model would tend to be more biased if it learns from a biased example, we measure the bias level of a query instance.
We show that our method could be applied to multiple QA formulations across multiple bias categories.
arXiv Detail & Related papers (2023-10-13T00:49:09Z)
- Introducing MBIB -- the first Media Bias Identification Benchmark Task and Dataset Collection [24.35462897801079]
We introduce the Media Bias Identification Benchmark (MBIB) to group different types of media bias under a common framework.
After reviewing 115 datasets, we select nine tasks and carefully propose 22 associated datasets for evaluating media bias detection techniques.
Our results suggest that while hate speech, racial bias, and gender bias are easier to detect, models struggle to handle certain bias types, e.g., cognitive and political bias.
arXiv Detail & Related papers (2023-04-25T20:49:55Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Neural Media Bias Detection Using Distant Supervision With BABE -- Bias Annotations By Experts [24.51774048437496]
This paper presents BABE, a robust and diverse data set for media bias research.
It consists of 3,700 sentences balanced among topics and outlets, containing media bias labels on the word and sentence level.
Based on our data, we also introduce a way to detect bias-inducing sentences in news articles automatically.
arXiv Detail & Related papers (2022-09-29T05:32:55Z)
- Mitigating Representation Bias in Action Recognition: Algorithms and Benchmarks [76.35271072704384]
Deep learning models perform poorly when applied to videos with rare scenes or objects.
We tackle this problem from two different angles: algorithm and dataset.
We show that the debiased representation can generalize better when transferred to other datasets and tasks.
arXiv Detail & Related papers (2022-09-20T00:30:35Z)
- A Domain-adaptive Pre-training Approach for Language Bias Detection in News [3.7238620986236373]
We present DA-RoBERTa, a new state-of-the-art transformer-based model adapted to the media bias domain.
We also train DA-BERT and DA-BART, two more transformer models adapted to the bias domain.
Our proposed domain-adapted models outperform prior bias detection approaches on the same data.
arXiv Detail & Related papers (2022-05-22T08:18:19Z)
- NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines spanning the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z)
- An Interdisciplinary Approach for the Automated Detection and Visualization of Media Bias in News Articles [0.0]
I aim to devise data sets and methods to identify media bias.
My vision is to devise a system that helps news readers become aware of media coverage differences caused by bias.
arXiv Detail & Related papers (2021-12-26T10:46:32Z)
- Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures [62.562760228942054]
Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective.
We propose to augment the input sentences in the training data with their corresponding predicate-argument structures.
We show that without targeting a specific bias, our sentence augmentation improves the robustness of transformer models against multiple biases.
arXiv Detail & Related papers (2020-10-23T16:22:05Z)
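
The last entry's core idea, extending each training sentence with a textual rendering of its predicate-argument structure, can be sketched as a simple augmentation step. The extract_predicate_arguments helper below is a hypothetical stand-in for an off-the-shelf semantic role labeler, and the separator token and rendering format are assumptions, not the authors' exact setup.

    # Illustrative predicate-argument augmentation: each training sentence is
    # extended with a flattened rendering of its predicate-argument structure.
    # The SRL step is mocked; a real pipeline would call a semantic role labeler.
    from typing import List, Tuple


    def extract_predicate_arguments(sentence: str) -> List[Tuple[str, List[str]]]:
        """Hypothetical stand-in for a semantic role labeler.

        Returns (predicate, [argument, ...]) tuples found in the sentence.
        """
        # Hard-coded output for the running example below.
        return [("criticized", ["the senator", "the new policy"])]


    def augment_with_predicate_arguments(sentence: str, sep: str = " [SEP] ") -> str:
        """Append a flattened predicate-argument string to the input sentence."""
        frames = extract_predicate_arguments(sentence)
        rendered = "; ".join(f"{pred}({', '.join(args)})" for pred, args in frames)
        return sentence + sep + rendered


    print(augment_with_predicate_arguments("The senator criticized the new policy."))
    # -> The senator criticized the new policy. [SEP] criticized(the senator, the new policy)

The augmented sentences would then be tokenized and used for training in place of the originals.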