Do You Think It's Biased? How To Ask For The Perception Of Media Bias
- URL: http://arxiv.org/abs/2112.07392v1
- Date: Tue, 14 Dec 2021 13:33:57 GMT
- Title: Do You Think It's Biased? How To Ask For The Perception Of Media Bias
- Authors: Timo Spinde, Christina Kreuter, Wolfgang Gaissmaier, Felix Hamborg, Bela Gipp, and Helge Giese
- Abstract summary: This study aims to develop a scale that can be used as a reliable standard to evaluate article bias.
We conducted a literature search to find 824 relevant questions about text perception in previous research on the topic.
The final set consisted of 25 questions with varying answering formats, 17 questions using semantic differentials, and six ratings of feelings.
- Score: 8.00306605975813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Media coverage has a substantial effect on the public perception of events. The way media frame events can significantly alter the beliefs and perceptions of our society. Nevertheless, nearly all media outlets are known to report news in a biased way. While such bias can be introduced by altering word choice or omitting information, the perception of bias also varies greatly depending on a reader's personal background. Media bias is therefore a complex construct to identify and analyze. Even though media bias has been the subject of many studies, previous assessment strategies are oversimplified and lack overlap and empirical evaluation. This study thus aims to develop a scale that can serve as a reliable standard for evaluating article bias. For example: when measuring bias in a news article, should we ask, "How biased is the article?" or should we instead ask, "How did the article treat the American president?" We conducted a literature search and found 824 relevant questions about text perception in previous research on the topic. In a multi-iterative process, we summarized and condensed these questions semantically to arrive at a complete and representative set of possible question types about bias. The final set consisted of 25 questions with varying answering formats, 17 questions using semantic differentials, and six ratings of feelings. We tested each of the questions on 190 articles with a total of 663 participants to identify how well the questions measure an article's perceived bias. Our results show that 21 final items are suitable and reliable for measuring the perception of media bias. We publish the final set of questions at http://bias-question-tree.gipplab.org/.
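Since the paper's contribution is a question set validated for reliability, a quick illustration of the kind of item-reliability check involved may help. The sketch below computes Cronbach's alpha for a (participants x items) rating matrix; the simulated data (sized to match the study's 663 participants and 21 final items) is invented and this is not the authors' actual analysis pipeline.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) rating matrix."""
    item_variances = ratings.var(axis=0, ddof=1)       # variance of each item
    total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of participants' sum scores
    n_items = ratings.shape[1]
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Illustrative data matching the study's dimensions: 663 participants, 21 items, 1-7 scale.
# (Uncorrelated random ratings give alpha near 0; a coherent real scale scores much higher.)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(663, 21)).astype(float)
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")
```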
Related papers
- Intertwined Biases Across Social Media Spheres: Unpacking Correlations in Media Bias Dimensions [12.588239777597847]
Media bias significantly shapes public perception by reinforcing stereotypes and exacerbating societal divisions.
We introduce a novel dataset collected from YouTube and Reddit over the past five years.
Our dataset includes automated annotations for YouTube content across a broad spectrum of bias dimensions.
arXiv Detail & Related papers (2024-08-27T21:03:42Z)
- Mitigating Bias for Question Answering Models by Tracking Bias Influence [84.66462028537475]
We propose BMBI, an approach to mitigate the bias of multiple-choice QA models.
Based on the intuition that a model tends to become more biased if it learns from a biased example, we measure the bias level of a query instance.
We show that our method could be applied to multiple QA formulations across multiple bias categories.
arXiv Detail & Related papers (2023-10-13T00:49:09Z)
- Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines [63.52264764099532]
We use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022.
We quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs.
Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias.
arXiv Detail & Related papers (2023-03-28T03:31:37Z)
- Computational Assessment of Hyperpartisanship in News Titles [55.92100606666497]
We first adopt a human-guided machine learning framework to develop a new dataset for hyperpartisan news title detection.
Overall, right-leaning media tend to use proportionally more hyperpartisan titles.
We identify three major topics including foreign issues, political systems, and societal issues that are suggestive of hyperpartisanship in news titles.
arXiv Detail & Related papers (2023-01-16T05:56:58Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines from across the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and of identifying potential causes for social bias in downstream tasks.
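The SAME formula itself is not reproduced in this listing; as a rough illustration of the cosine-based bias-score family it improves on, the sketch below scores a word vector by the difference of its cosine similarities to the centroids of two attribute sets. All vectors and attribute sets are invented placeholders, and this is explicitly not the SAME formula.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_bias(word_vec, attr_set_a, attr_set_b):
    """Difference of cosine similarities to two attribute-set centroids.

    A simple member of the cosine-based bias-score family; NOT the SAME formula.
    """
    centroid_a = np.mean(attr_set_a, axis=0)
    centroid_b = np.mean(attr_set_b, axis=0)
    return cosine(word_vec, centroid_a) - cosine(word_vec, centroid_b)

# Toy 4-dimensional embeddings (illustrative only).
rng = np.random.default_rng(1)
word = rng.normal(size=4)
attr_a = rng.normal(size=(3, 4))  # e.g. vectors for one attribute word set
attr_b = rng.normal(size=(3, 4))  # e.g. vectors for the contrasting set
print(f"bias score: {cosine_bias(word, attr_a, attr_b):+.3f}")
```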
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- An Interdisciplinary Approach for the Automated Detection and Visualization of Media Bias in News Articles [0.0]
I aim to devise data sets and methods to identify media bias.
My vision is to devise a system that helps news readers become aware of media coverage differences caused by bias.
arXiv Detail & Related papers (2021-12-26T10:46:32Z)
- Machine-Learning media bias [0.0]
Inferring which newspaper published a given article leads to a conditional probability distribution whose analysis lets us automatically map newspapers into a bias space.
By analyzing roughly a million articles from roughly a hundred newspapers for bias in dozens of news topics, our method maps newspapers into a two-dimensional bias landscape that agrees well with previous bias classifications based on human judgement.
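A minimal sketch of the general recipe described above, assuming a scikit-learn pipeline: train a classifier to predict the outlet from article text, average the predicted distribution P(newspaper | article) over each outlet's own articles, and project the resulting profiles to two dimensions. The corpus, outlet names, and model choice are placeholders; the paper's scale (roughly a million articles and a hundred newspapers) and method details differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder corpus: (article text, outlet) pairs standing in for the real data.
articles = [
    "tax cuts will boost growth and jobs",
    "tax cuts widen inequality and deficits",
    "immigration strains public services",
    "immigration enriches culture and the economy",
]
outlets = ["OutletA", "OutletB", "OutletA", "OutletB"]

# Train a classifier for P(outlet | article).
X = TfidfVectorizer().fit_transform(articles)
clf = LogisticRegression(max_iter=1000).fit(X, outlets)

# Average the predicted distribution over each outlet's own articles,
# giving one probability "profile" row per outlet.
proba = clf.predict_proba(X)
labels = np.array(outlets)
profiles = np.vstack([proba[labels == o].mean(axis=0) for o in clf.classes_])

# Project the profiles into a 2-D "bias landscape".
coords = PCA(n_components=2).fit_transform(profiles)
for outlet, (x, y) in zip(clf.classes_, coords):
    print(f"{outlet}: ({x:+.3f}, {y:+.3f})")
```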
arXiv Detail & Related papers (2021-08-31T18:06:32Z)
- Enabling News Consumers to View and Understand Biased News Coverage: A Study on the Perception and Visualization of Media Bias [7.092487352312782]
We create three manually annotated datasets and test varying visualization strategies.
Results show no strong effects of bias awareness in the treatment groups compared to the control group.
Using a multilevel model, we find that perceived journalist bias is significantly related to perceived political extremeness and impartiality of the article.
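As a hedged illustration of the multilevel analysis mentioned above, the sketch below fits a mixed-effects regression with statsmodels, using a random intercept per article. The column names and simulated data are invented, not the study's ratings.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated ratings: several participants rate each of 20 articles (grouping factor).
rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "article_id": rng.integers(0, 20, size=n),  # random-intercept group
    "extremeness": rng.normal(size=n),          # perceived political extremeness
    "impartiality": rng.normal(size=n),         # perceived impartiality
})
# Simulated outcome so the toy model has signal to recover.
df["journalist_bias"] = (0.6 * df["extremeness"]
                         - 0.4 * df["impartiality"]
                         + rng.normal(scale=0.5, size=n))

# Mixed-effects model: fixed effects for the predictors, random intercept per article.
model = smf.mixedlm("journalist_bias ~ extremeness + impartiality",
                    data=df, groups=df["article_id"])
print(model.fit().summary())
```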
arXiv Detail & Related papers (2021-05-20T10:16:54Z)
- Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity [35.19976910093135]
The research presented in this paper not only addresses the automatic detection of bias but goes a step further, exploring how political bias and unfairness are manifested linguistically.
We utilize a new corpus of 6,964 news articles with labels derived from adfontesmedia.com and develop a neural model for bias assessment.
arXiv Detail & Related papers (2020-10-20T22:25:00Z)
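As a lightweight stand-in for the bias-assessment model in the last entry, the sketch below trains a small feed-forward network on bag-of-words features with PyTorch. The texts, labels, and architecture are illustrative assumptions; the paper's 6,964-article corpus and actual neural model are not specified in this listing.

```python
import torch
import torch.nn as nn

# Toy corpus with binary bias labels (1 = biased); stand-ins for the real labeled corpus.
texts = [
    "the senator bravely defended our freedom",
    "the senator discussed the proposal",
    "radical activists stormed the debate",
    "activists attended the debate",
]
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])

# Simple bag-of-words featurization.
vocab = {w: i for i, w in enumerate(sorted({w for t in texts for w in t.split()}))}
X = torch.zeros(len(texts), len(vocab))
for row, text in enumerate(texts):
    for word in text.split():
        X[row, vocab[word]] += 1.0

# A small feed-forward net stands in for the paper's (unspecified here) architecture.
model = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), labels)
    loss.backward()
    optimizer.step()

print(torch.sigmoid(model(X)).squeeze(1))  # per-article bias probabilities
```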