Auditing LLM Editorial Bias in News Media Exposure
- URL: http://arxiv.org/abs/2510.27489v1
- Date: Fri, 31 Oct 2025 14:07:42 GMT
- Title: Auditing LLM Editorial Bias in News Media Exposure
- Authors: Marco Minici, Cristian Consonni, Federico Cinus, Giuseppe Manco
- Abstract summary: We compare three leading agents, GPT-4o-Mini, Claude-3.7-Sonnet, and Gemini-2.0-Flash, against Google News. We find that, compared to Google News, LLMs surface significantly fewer unique outlets and allocate attention more unevenly.
- Score: 4.460107561389793
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large Language Models (LLMs) increasingly act as gateways to web content, shaping how millions of users encounter online information. Unlike traditional search engines, whose retrieval and ranking mechanisms are well studied, the selection processes of web-connected LLMs add layers of opacity to how answers are generated. By determining which news outlets users see, these systems can influence public opinion, reinforce echo chambers, and pose risks to civic discourse and public trust. This work extends two decades of research in algorithmic auditing to examine how LLMs function as news engines. We present the first audit comparing three leading agents, GPT-4o-Mini, Claude-3.7-Sonnet, and Gemini-2.0-Flash, against Google News, asking: How do LLMs differ from traditional aggregators in the diversity, ideology, and reliability of the media they expose to users? Across 24 global topics, we find that, compared to Google News, LLMs surface significantly fewer unique outlets and allocate attention more unevenly. At the same time, GPT-4o-Mini emphasizes more factual and right-leaning sources; Claude-3.7-Sonnet favors institutional and civil-society domains and slightly amplifies right-leaning exposure; and Gemini-2.0-Flash exhibits a modest left-leaning tilt without significant changes in factuality. These patterns remain robust under prompt variations and alternative reliability benchmarks. Together, our findings show that LLMs already enact agentic editorial policies, curating information in ways that diverge from conventional aggregators. Understanding and governing their emerging editorial power will be critical for ensuring transparency, pluralism, and trust in digital information ecosystems.
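The abstract reports that LLMs "allocate attention more unevenly" across outlets than Google News, but does not name the concentration measure used. As an illustrative sketch only (the metric choice and the function name are assumptions, not the paper's method), an entropy-based "effective number of outlets" captures the idea: it equals the raw outlet count when attention is spread evenly, and shrinks toward 1 as citations concentrate on a few domains.

```python
from collections import Counter
import math

def effective_outlets(citations):
    """Entropy-based effective number of unique outlets.

    `citations` is a list of outlet domains cited across responses.
    exp(H) equals the number of outlets when attention is uniform,
    and approaches 1 as attention concentrates on a single outlet.
    """
    counts = Counter(citations)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return math.exp(entropy)

# Even exposure: 4 outlets cited equally -> effective number is 4.0
even = ["a.com", "b.com", "c.com", "d.com"] * 5
# Skewed exposure: 4 unique outlets, but one dominates -> fewer effective outlets
skewed = ["a.com"] * 17 + ["b.com", "c.com", "d.com"]
```

Under this sketch, two systems citing the same number of unique outlets can still differ sharply in effective diversity, which is one way the abstract's distinction between "fewer unique outlets" and "more uneven attention" could be operationalized.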
Related papers
- ZoFia: Zero-Shot Fake News Detection with Entity-Guided Retrieval and Multi-LLM Interaction [14.012874564599272]
ZoFia is a novel two-stage zero-shot fake news detection framework. First, we introduce Hierarchical Salience to quantify the importance of entities in the news content. We then propose the SC-MMR algorithm to effectively select an informative and diverse set of keywords.
arXiv Detail & Related papers (2025-11-03T03:29:42Z) - Artificial Intelligence and Civil Discourse: How LLMs Moderate Climate Change Conversations [2.570568710751949]
Large language models (LLMs) are increasingly integrated into online platforms and digital communication spaces. This study examines how LLMs naturally moderate climate change conversations through their distinct communicative behaviors.
arXiv Detail & Related papers (2025-06-07T03:32:47Z) - How LLMs Fail to Support Fact-Checking [4.918358353535447]
Large Language Models (LLMs) can amplify online misinformation, but also show promise in tackling it. We empirically study the capabilities of three LLMs -- ChatGPT, Gemini, and Claude -- in countering political misinformation. Our findings suggest that models struggle to ground their responses in real news sources, and tend to prefer citing left-leaning sources.
arXiv Detail & Related papers (2025-02-28T07:12:03Z) - A Multi-LLM Debiasing Framework [85.17156744155915]
Large Language Models (LLMs) are powerful tools with the potential to benefit society immensely, yet, they have demonstrated biases that perpetuate societal inequalities.
Recent research has shown a growing interest in multi-LLM approaches, which have been demonstrated to be effective in improving the quality of reasoning.
We propose a novel multi-LLM debiasing framework aimed at reducing bias in LLMs.
arXiv Detail & Related papers (2024-09-20T20:24:50Z) - Detect, Investigate, Judge and Determine: A Knowledge-guided Framework for Few-shot Fake News Detection [53.41813030290324]
Few-Shot Fake News Detection (FS-FND) aims to distinguish inaccurate news from real news in extremely low-resource scenarios. This task has garnered increased attention due to the widespread dissemination and harmful impact of fake news on social media. We propose a Dual-perspective Knowledge-guided Fake News Detection (DKFND) model, designed to enhance LLMs from both inside and outside perspectives.
arXiv Detail & Related papers (2024-07-12T03:15:01Z) - Seeing Through AI's Lens: Enhancing Human Skepticism Towards LLM-Generated Fake News [0.38233569758620056]
This paper aims to elucidate simple markers that help individuals distinguish between articles penned by humans and those created by LLMs.
We then devise a metric named Entropy-Shift Authorship Signature (ESAS) based on information-theoretic and entropy principles.
The proposed ESAS ranks terms or entities (e.g., by POS tag) within news articles based on their relevance for discerning article authorship.
arXiv Detail & Related papers (2024-06-20T06:02:04Z) - Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking [49.02867094432589]
Large language models (LLMs) powered conversational search systems have already been used by hundreds of millions of people.
We investigate whether and how LLMs with opinion biases that either reinforce or challenge the user's view modulate this effect.
arXiv Detail & Related papers (2024-02-08T18:14:33Z) - Accuracy and Political Bias of News Source Credibility Ratings by Large Language Models [8.367075755850983]
This paper audits nine widely used large language models (LLMs) from three leading providers to evaluate their ability to discern credible and high-quality information sources. We find that larger models more frequently refuse to provide ratings due to insufficient information, whereas smaller models are more prone to making errors in their ratings.
arXiv Detail & Related papers (2023-04-01T05:04:06Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - A Structured Analysis of Journalistic Evaluations for News Source Reliability [0.456877715768796]
We evaluate two procedures for assessing the risk of online media exposing their readers to mis/disinformation.
Our analysis shows a good degree of agreement between the two procedures, which in our opinion has twofold value.
arXiv Detail & Related papers (2022-05-05T16:16:03Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.