You're Not Gonna Believe This: A Computational Analysis of Factual Appeals and Sourcing in Partisan News
- URL: http://arxiv.org/abs/2510.10658v1
- Date: Sun, 12 Oct 2025 15:30:39 GMT
- Title: You're Not Gonna Believe This: A Computational Analysis of Factual Appeals and Sourcing in Partisan News
- Authors: Guy Mor-Lan, Tamir Sheafer, Shaul R. Shenhav
- Abstract summary: This paper analyzes the strategies behind factual reporting through a large-scale comparison of CNN and Fox News. We find that CNN's reporting contains more factual statements and is more likely to ground them in external sources. The outlets also exhibit sharply divergent sourcing patterns.
- Score: 1.6822770693792826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While media bias is widely studied, the epistemic strategies behind factual reporting remain computationally underexplored. This paper analyzes these strategies through a large-scale comparison of CNN and Fox News. To isolate reporting style from topic selection, we employ an article matching strategy to compare reports on the same events and apply the FactAppeal framework to a corpus of over 470K articles covering two highly politicized periods: the COVID-19 pandemic and the Israel-Hamas war. We find that CNN's reporting contains more factual statements and is more likely to ground them in external sources. The outlets also exhibit sharply divergent sourcing patterns: CNN builds credibility by citing Experts and Expert Documents, constructing an appeal to formal authority, whereas Fox News favors News Reports and direct quotations. This work quantifies how partisan outlets use systematically different epistemic strategies to construct reality, adding a new dimension to the study of media bias.
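The abstract's article-matching step (pairing CNN and Fox News reports on the same event so that style, not topic selection, is compared) can be sketched as follows. This is a minimal illustration, not the authors' released code: the bag-of-words cosine similarity, the one-day date window, and the similarity threshold are all assumptions chosen for demonstration.

```python
# Hypothetical sketch of cross-outlet article matching: greedily pair
# articles from two outlets when they fall within a date window and
# exceed a cosine-similarity threshold on bag-of-words token counts.
from collections import Counter
from datetime import date
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def match_articles(outlet_a, outlet_b, max_days=1, min_sim=0.3):
    """Greedy one-to-one matching; each article is a (date, text) tuple.

    Returns a list of ((date, text), (date, text), similarity) triples.
    """
    pairs, used = [], set()
    for d1, t1 in outlet_a:
        v1 = Counter(t1.lower().split())
        best, best_sim = None, min_sim
        for j, (d2, t2) in enumerate(outlet_b):
            if j in used or abs((d1 - d2).days) > max_days:
                continue  # already paired, or outside the date window
            sim = cosine(v1, Counter(t2.lower().split()))
            if sim > best_sim:
                best, best_sim = j, sim
        if best is not None:
            used.add(best)
            pairs.append(((d1, t1), outlet_b[best], best_sim))
    return pairs


cnn = [(date(2020, 3, 1), "covid vaccine trial begins in us")]
fox = [(date(2020, 3, 1), "us covid vaccine trial begins today"),
       (date(2020, 3, 1), "stock market falls on oil news")]
matched = match_articles(cnn, fox)
```

In practice a system at this scale would likely use dense embeddings or TF-IDF rather than raw token counts, but the structure — date windowing plus a similarity threshold — is the same.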
Related papers
- Mapping the Media Landscape: Predicting Factual Reporting and Political Bias Through Web Interactions [0.7249731529275342]
We propose an extension to a recently presented news media reliability estimation method.
We assess the classification performance of four reinforcement learning strategies on a large news media hyperlink graph.
Our experiments, targeting two challenging bias descriptors, factual reporting and political bias, showed a significant performance improvement at the source media level.
arXiv Detail & Related papers (2024-10-23T08:18:26Z) - DocNet: Semantic Structure in Inductive Bias Detection Models [0.4779196219827508]
We present DocNet, a novel, inductive, and low-resource document embedding and political bias detection model. We demonstrate that the semantic structure of news articles from opposing political sides, as represented in document-level graph embeddings, has significant similarities.
arXiv Detail & Related papers (2024-06-16T14:51:12Z) - Tracking the Newsworthiness of Public Documents [107.12303391111014]
This work focuses on news coverage of local public policy in the San Francisco Bay Area by the San Francisco Chronicle.
First, we gather news articles, public policy documents and meeting recordings and link them using probabilistic relational modeling.
Second, we define a new task: newsworthiness prediction, to predict if a policy item will get covered.
arXiv Detail & Related papers (2023-11-16T10:05:26Z) - All Things Considered: Detecting Partisan Events from News Media with Cross-Article Comparison [19.328425822355378]
We develop a latent variable-based framework to predict the ideology of news articles.
Our results reveal the high-level form of media bias, which is present even among mainstream media with strong norms of objectivity and nonpartisanship.
arXiv Detail & Related papers (2023-10-28T21:53:23Z) - Towards Corpus-Scale Discovery of Selection Biases in News Coverage: Comparing What Sources Say About Entities as a Start [65.28355014154549]
This paper investigates the challenges of building scalable NLP systems for discovering patterns of media selection biases directly from news content in massive-scale news corpora.
We show the capabilities of the framework through a case study on NELA-2020, a corpus of 1.8M news articles in English from 519 news sources worldwide.
arXiv Detail & Related papers (2023-04-06T23:36:45Z) - Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines [63.52264764099532]
We use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022.
We quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs.
Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias.
arXiv Detail & Related papers (2023-03-28T03:31:37Z) - Predicting Sentence-Level Factuality of News and Bias of Media Outlets [10.925648034990306]
This paper introduces a large sentence-level dataset, titled "FactNews", composed of 6,191 sentences expertly annotated according to factuality and media bias definitions proposed by AllSides.
We use FactNews to assess the overall reliability of news sources, by formulating two text classification problems for predicting sentence-level factuality of news reporting and bias of media outlets.
arXiv Detail & Related papers (2023-01-27T16:56:24Z) - Computational Assessment of Hyperpartisanship in News Titles [55.92100606666497]
We first adopt a human-guided machine learning framework to develop a new dataset for hyperpartisan news title detection.
Overall the Right media tends to use proportionally more hyperpartisan titles.
We identify three major topics including foreign issues, political systems, and societal issues that are suggestive of hyperpartisanship in news titles.
arXiv Detail & Related papers (2023-01-16T05:56:58Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task, a neutral summary generation from multiple news headlines of the varying political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z) - Newsalyze: Enabling News Consumers to Understand Media Bias [7.652448987187803]
Knowing a news article's slant and authenticity is of crucial importance in times of "fake news".
We introduce Newsalyze, a bias-aware news reader focusing on a subtle yet powerful form of media bias, named bias by word choice and labeling (WCL).
WCL bias can alter the assessment of entities reported in the news, e.g., "freedom fighters" vs. "terrorists".
arXiv Detail & Related papers (2021-05-20T11:20:37Z) - Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity [35.19976910093135]
The research presented in this paper addresses not only the automatic detection of bias but goes one step further in that it explores how political bias and unfairness are manifested linguistically.
We utilize a new corpus of 6964 news articles with labels derived from adfontesmedia.com and develop a neural model for bias assessment.
arXiv Detail & Related papers (2020-10-20T22:25:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.