Journalists' Perceptions of Artificial Intelligence and Disinformation Risks
- URL: http://arxiv.org/abs/2509.01824v1
- Date: Mon, 01 Sep 2025 23:06:15 GMT
- Title: Journalists' Perceptions of Artificial Intelligence and Disinformation Risks
- Authors: Urko Peña-Alonso, Simón Peña-Fernández, Koldobika Meso-Ayerdi
- Abstract summary: This study examines journalists' perceptions of the impact of artificial intelligence (AI) on disinformation. A structured survey was administered to 504 journalists in the Basque Country.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examines journalists' perceptions of the impact of artificial intelligence (AI) on disinformation, a growing concern in journalism due to the rapid expansion of generative AI and its influence on news production and media organizations. Using a quantitative approach, a structured survey was administered to 504 journalists in the Basque Country, identified through official media directories and with the support of the Basque Association of Journalists. This survey, conducted online and via telephone between May and June 2024, included questions on sociodemographic and professional variables, as well as attitudes toward AI's impact on journalism. The results indicate that a large majority of journalists (89.88%) believe AI will considerably or significantly increase the risks of disinformation, and this perception is consistent across genders and media types, but more pronounced among those with greater professional experience. Statistical analyses reveal a significant association between years of experience and perceived risk, and between AI use and risk perception. The main risks identified are the difficulty in detecting false content and deepfakes, and the risk of obtaining inaccurate or erroneous data. Co-occurrence analysis shows that these risks are often perceived as interconnected. These findings highlight the complex and multifaceted concerns of journalists regarding AI's role in the information ecosystem.
Related papers
- They Think AI Can Do More Than It Actually Can: Practices, Challenges, & Opportunities of AI-Supported Reporting In Local Journalism [13.52144719653642]
Findings: Local journalists do not fully leverage AI's potential to support data-related work. Despite local journalists' limited awareness of AI's capabilities, they are willing to use it to process data and discover stories.
arXiv Detail & Related papers (2026-02-26T11:25:31Z) - Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems [47.03825808787752]
This paper transitions from literature review to practical countermeasures. We report on advances in AI-generated content produced by Large Language Models (LLMs) and multimodal systems. We discuss mitigation strategies including LLM-based detection, inoculation approaches, and the dual-use nature of generative AI.
arXiv Detail & Related papers (2026-01-29T16:42:22Z) - "We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe [56.1653658714305]
We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses. We find that there is little consensus among AI developers on the relative ranking of privacy risks. While AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption.
arXiv Detail & Related papers (2025-10-01T13:51:33Z) - Informing AI Risk Assessment with News Media: Analyzing National and Political Variation in the Coverage of AI Risks [3.2566808526538873]
This work presents a comparative analysis of a cross-national sample of news media spanning 6 countries. Our findings show that AI risks are prioritized differently across nations and shed light on how left- vs. right-leaning U.S.-based outlets differ in the prioritization of AI risks in their coverage. These findings can inform risk assessors and policy-makers about the nuances they should account for when considering news media as a supplementary source for risk-based governance approaches.
arXiv Detail & Related papers (2025-07-31T16:52:21Z) - Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools. We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics. Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z) - Artificial Intelligence in Brazilian News: A Mixed-Methods Analysis [0.0]
This study analyzes 3,560 news articles from Brazilian media published between July 1, 2023, and February 29, 2024, from 13 popular online news outlets.
The findings reveal that Brazilian news coverage of AI is dominated by topics related to applications in the workplace and product launches.
The analysis also highlights a significant presence of industry-related entities, indicating a strong influence of corporate agendas in the country's news.
arXiv Detail & Related papers (2024-10-22T20:52:51Z) - Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in large language models (LLMs) on political opinions and decision-making. We found that participants exposed to partisan-biased models were significantly more likely to adopt opinions and make decisions that matched the LLM's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - The Impact of Knowledge Silos on Responsible AI Practices in Journalism [0.0]
This study explores whether, and if so how, knowledge silos affect the adoption of responsible AI practices in journalism.
We conducted 14 semi-structured interviews with editors, managers, and journalists at de Telegraaf, de Volkskrant, the Nederlandse Omroep Stichting, and RTL Nederland.
Our results emphasize the importance of creating better structures for sharing information on AI across all layers of news organizations.
arXiv Detail & Related papers (2024-10-02T00:27:01Z) - Willingness to Read AI-Generated News Is Not Driven by Their Perceived Quality [3.036383058306671]
This study investigates the perceived quality of AI-assisted and AI-generated versus human-generated news articles. It also investigates whether disclosure of AI's involvement in generating these news articles influences engagement with them.
arXiv Detail & Related papers (2024-09-05T13:12:16Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines [63.52264764099532]
We use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022.
We quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs.
Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias.
arXiv Detail & Related papers (2023-03-28T03:31:37Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.