Full Disclosure, Less Trust? How the Level of Detail about AI Use in News Writing Affects Readers' Trust
- URL: http://arxiv.org/abs/2601.09620v1
- Date: Wed, 14 Jan 2026 16:45:45 GMT
- Title: Full Disclosure, Less Trust? How the Level of Detail about AI Use in News Writing Affects Readers' Trust
- Authors: Pooja Prajod, Hannes Cools, Thomas Röggla, Karthikeya Puttur Venkatraj, Amber Kusters, Alia ElKattan, Pablo Cesar, Abdallah El Ali
- Abstract summary: Findings show that not all AI disclosures lead to a transparency dilemma, but instead reflect a trade-off between readers' desire for more transparency and their trust in AI-assisted news content.
- Score: 10.22272389430846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence (AI) is increasingly integrated into news production, calls for transparency about the use of AI have gained considerable traction. Recent studies suggest that AI disclosures can lead to a "transparency dilemma", where disclosure reduces readers' trust. However, little is known about how the level of detail in AI disclosures influences trust and contributes to this dilemma within the news context. In this 3×2×2 mixed factorial study with 40 participants, we investigate how three levels of AI disclosures (none, one-line, detailed) across two types of news (politics and lifestyle) and two levels of AI involvement (low and high) affect news readers' trust. We measured trust using the News Media Trust questionnaire, along with two decision behaviors: source-checking and subscription decisions. Questionnaire responses and subscription rates showed a decline in trust only for detailed AI disclosures, whereas source-checking behavior increased for both one-line and detailed disclosures, with the effect being more pronounced for detailed disclosures. Insights from semi-structured interviews suggest that source-checking behavior was primarily driven by interest in the topic, followed by trust, whereas trust was the main factor influencing subscription decisions. Around two-thirds of participants expressed a preference for detailed disclosures, while most participants who preferred one-line disclosures indicated a need for detail-on-demand disclosure formats. Our findings show that not all AI disclosures lead to a transparency dilemma, but instead reflect a trade-off between readers' desire for more transparency and their trust in AI-assisted news content.
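To make the design concrete, the sketch below enumerates the twelve cells of the 3×2×2 factorial using the factor labels from the abstract; the enumeration itself is only an illustration, not the authors' actual counterbalancing or assignment scheme.

```python
from itertools import product

# Factor labels are taken from the abstract; everything else here is an
# illustrative assumption, not the paper's experimental protocol.
DISCLOSURE = ["none", "one-line", "detailed"]
NEWS_TYPE = ["politics", "lifestyle"]
AI_INVOLVEMENT = ["low", "high"]

# The 3x2x2 design yields 12 cells in total.
conditions = list(product(DISCLOSURE, NEWS_TYPE, AI_INVOLVEMENT))
assert len(conditions) == 12

for disclosure, news_type, involvement in conditions:
    print(f"disclosure={disclosure:<9} news={news_type:<10} ai={involvement}")
```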
Related papers
- How to Disclose? Strategic AI Disclosure in Crowdfunding [10.090562206470329]
We find that mandatory AI disclosure significantly reduces crowdfunding performance. Funds raised decline by 39.8% and backer counts by 23.9% for AI-involved projects. This adverse effect is systematically moderated by disclosure strategy.
arXiv Detail & Related papers (2026-02-17T16:26:03Z)
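As a quick back-of-the-envelope illustration of what those percentages mean in practice, the snippet below applies them to a hypothetical baseline project; only the 39.8% and 23.9% declines come from the summary, while the baseline figures are invented.

```python
# Only the decline percentages come from the paper summary above;
# the baseline figures are hypothetical, chosen for illustration.
FUNDS_DECLINE = 0.398
BACKERS_DECLINE = 0.239

baseline_funds = 10_000.0  # hypothetical comparable non-AI project (USD)
baseline_backers = 200     # hypothetical backer count

funds_after = baseline_funds * (1 - FUNDS_DECLINE)
backers_after = baseline_backers * (1 - BACKERS_DECLINE)

print(f"funds:   ${baseline_funds:,.0f} -> ${funds_after:,.0f}")
print(f"backers: {baseline_backers} -> {backers_after:.0f}")
```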
- Hide or Highlight: Understanding the Impact of Factuality Expression on User Trust [1.2478643689100954]
We tested four different ways of disclosing an AI-generated output with factuality assessments. We found that the opaque and ambiguity strategies led to higher trust while maintaining perceived answer quality.
arXiv Detail & Related papers (2025-08-09T20:45:21Z)
- Willingness to Read AI-Generated News Is Not Driven by Their Perceived Quality [3.036383058306671]
This study investigates the perceived quality of AI-assisted and AI-generated versus human-generated news articles. It also investigates whether disclosure of AI's involvement in generating these news articles influences engagement with them.
arXiv Detail & Related papers (2024-09-05T13:12:16Z)
- Detect, Investigate, Judge and Determine: A Knowledge-guided Framework for Few-shot Fake News Detection [53.41813030290324]
Few-Shot Fake News Detection (FS-FND) aims to distinguish inaccurate news from real ones in extremely low-resource scenarios. This task has garnered increased attention due to the widespread dissemination and harmful impact of fake news on social media. We propose a Dual-perspective Knowledge-guided Fake News Detection (DKFND) model, designed to enhance LLMs from both inside and outside perspectives.
arXiv Detail & Related papers (2024-07-12T03:15:01Z)
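The summary names the framework's stages but not their implementation. The skeleton below is one plausible reading of a detect-investigate-judge-determine loop with "inside" (model-knowledge) and "outside" (retrieval) evidence; every function body is a placeholder assumption, not DKFND's actual method.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str       # "real" or "fake"
    rationale: str

def detect(article: str) -> list[str]:
    # Placeholder claim extraction: one "claim" per sentence.
    return [s.strip() for s in article.split(".") if s.strip()]

def investigate(claim: str) -> dict:
    # "Inside" = the model's own knowledge; "outside" = retrieved evidence.
    # Both are stubs here.
    return {"inside": f"model prior on: {claim}",
            "outside": f"retrieved documents for: {claim}"}

def judge(claim: str, evidence: dict) -> bool:
    # Placeholder plausibility check against both evidence sources.
    return "fabricated" not in claim.lower()

def determine(judgments: list[bool]) -> Verdict:
    # Aggregate per-claim judgments into an article-level verdict.
    label = "real" if all(judgments) else "fake"
    return Verdict(label, f"{sum(judgments)}/{len(judgments)} claims supported")

article = "The report cites official statistics. A fabricated quote follows."
claims = detect(article)
print(determine([judge(c, investigate(c)) for c in claims]))
```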
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- A Structured Analysis of Journalistic Evaluations for News Source Reliability [0.456877715768796]
We evaluate two procedures for assessing the risk of online media exposing their readers to mis/disinformation.
The result of our analysis shows a good degree of agreement between the two procedures, which in our opinion is valuable in two ways.
arXiv Detail & Related papers (2022-05-05T16:16:03Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
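The summary does not say which bias-correction technique the authors use. As a stand-in, the sketch below shows inverse propensity weighting, a standard remedy when the selection mechanism (here, whether a user shares at all) depends on user attributes; it is an assumed illustration, not the paper's method.

```python
# Minimal inverse-propensity-weighting sketch for selection bias.
# IPW is assumed here for illustration; the paper's approach may differ.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
activity = rng.normal(size=n)                  # user attribute (confounder)
p_share = 1 / (1 + np.exp(-activity))          # sharing prob. rises with activity
shared = rng.random(n) < p_share               # we only observe users who share
outcome = 2.0 * activity + rng.normal(size=n)  # quantity whose mean we want (~0)

naive = outcome[shared].mean()                 # biased: sharers skew active
weights = 1 / p_share[shared]                  # reweight by inverse propensity
ipw = np.average(outcome[shared], weights=weights)

print(f"naive mean over sharers: {naive:.3f}, IPW estimate: {ipw:.3f}")
```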
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
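The four interpretable detectors trained in that paper are not named in this summary. The toy sketch below shows one generic instance of the idea, a bag-of-words logistic regression whose coefficients can be surfaced to users as the explanation; the data and labels are invented.

```python
# Toy interpretable detector: coefficients double as the explanation.
# Texts and labels are invented; this is not the paper's dataset or models.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "shocking miracle cure doctors hate this trick",
    "officials confirm budget figures in quarterly report",
    "you won't believe this one weird secret exposed",
    "study published in peer reviewed journal finds modest effect",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Explanation: the words pushing a prediction toward "fake" the hardest.
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                 key=lambda t: t[1], reverse=True)
print("top 'fake' indicators:", [word for word, coef in weights[:5]])
```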
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.