Trust in Disinformation Narratives: a Trust in the News Experiment
- URL: http://arxiv.org/abs/2503.11116v1
- Date: Fri, 14 Mar 2025 06:28:22 GMT
- Title: Trust in Disinformation Narratives: a Trust in the News Experiment
- Authors: Hanbyul Song, Miguel F. Santos Silva, Jaume Suau, Luis Espinosa-Anke,
- Abstract summary: The purpose of this study was to examine the extent to which people trust a set of fake news articles based on gender, climate change, and COVID-19. The online experiment participants were asked to read three fake news items and rate their level of trust on a scale from 1 (not true) to 8 (true). The results show that the topic of news articles, stance, people's age, gender, and political ideologies significantly affected their levels of trust in the news, while the authorship (humans or ChatGPT) did not have a significant impact.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding why people trust or distrust one another, institutions, or information is a complex task that has led scholars from various fields of study to employ diverse epistemological and methodological approaches. Despite the challenges, it is generally agreed that the antecedents of trust (and distrust) encompass a multitude of emotional and cognitive factors, including a general disposition to trust and an assessment of trustworthiness factors. In an era marked by increasing political polarization, cultural backlash, widespread disinformation and fake news, and the use of AI software to produce news content, the need to study trust in the news has gained significant traction. This study presents the findings of a trust in the news experiment designed in collaboration with Spanish and UK journalists, fact-checkers, and the CardiffNLP Natural Language Processing research group. The purpose of this experiment, conducted in June 2023, was to examine the extent to which people trust a set of fake news articles based on previously identified disinformation narratives related to gender, climate change, and COVID-19. The online experiment participants (801 in Spain and 800 in the UK) were asked to read three fake news items and rate their level of trust on a scale from 1 (not true) to 8 (true). The pieces used a combination of factors, including stance (favourable, neutral, or against the narrative), presence of toxic expressions, clickbait titles, and sources of information to test which elements influenced people's responses the most. Half of the pieces were produced by humans and the other half by ChatGPT. The results show that the topic of news articles, stance, people's age, gender, and political ideologies significantly affected their levels of trust in the news, while the authorship (humans or ChatGPT) did not have a significant impact.
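The factorial design above (topic, stance, authorship as factors; trust on a 1-8 scale as the outcome) can be illustrated with a minimal sketch of how per-factor mean trust ratings might be compared. This is not the authors' analysis code; the records, function name, and values below are invented for illustration only.

```python
# Illustrative sketch (not the study's code): comparing mean trust ratings
# on a 1-8 scale across levels of one experimental factor. Data are invented.
from statistics import mean

# Each record: (topic, stance, authorship, trust_rating on 1-8 scale)
ratings = [
    ("climate", "favourable", "human",   6),
    ("climate", "against",    "chatgpt", 3),
    ("gender",  "neutral",    "human",   4),
    ("gender",  "favourable", "chatgpt", 5),
    ("covid",   "against",    "human",   2),
    ("covid",   "neutral",    "chatgpt", 4),
]

def mean_trust_by(factor_index, data):
    """Average trust rating for each level of one experimental factor."""
    levels = {}
    for record in data:
        levels.setdefault(record[factor_index], []).append(record[3])
    return {level: mean(vals) for level, vals in levels.items()}

by_authorship = mean_trust_by(2, ratings)  # {"human": 4, "chatgpt": 4}
by_stance = mean_trust_by(1, ratings)      # stance levels differ markedly
```

In these invented data the authorship means are identical while the stance means diverge, mirroring the study's reported pattern; the actual paper would rely on significance testing (e.g., regression or ANOVA) rather than raw group means.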
Related papers
- "I don't trust them": Exploring Perceptions of Fact-checking Entities for Flagging Online Misinformation [3.6754294738197264]
We conducted an online study with 655 US participants to explore user perceptions of eight categories of fact-checking entities across two misinformation topics.
Our results hint at the need for further exploring fact-checking entities that may be perceived as neutral, as well as the potential for incorporating multiple assessments in such labels.
arXiv Detail & Related papers (2024-10-01T17:01:09Z) - News and Misinformation Consumption in Europe: A Longitudinal Cross-Country Perspective [49.1574468325115]
This study investigated information consumption in four European countries.
It analyzed three years of Twitter activity from news outlet accounts in France, Germany, Italy, and the UK.
Results indicate that reliable sources dominate the information landscape, although unreliable content is still present across all countries.
arXiv Detail & Related papers (2023-11-09T16:22:10Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - FakeNewsLab: Experimental Study on Biases and Pitfalls Preventing us from Distinguishing True from False News [0.2741266294612776]
This work highlights a series of pitfalls that can influence human annotators when building false news datasets.
It also challenges the common rationale of AI tools that suggest users read the full article before re-sharing.
arXiv Detail & Related papers (2021-10-22T12:02:16Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content that may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets the content with automatic feature extraction and the relevance of the text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
arXiv Detail & Related papers (2021-05-21T10:46:43Z) - Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z) - A Survey on Predicting the Factuality and the Bias of News Media [29.032850263311342]
"The state of the art on media profiling for factuality and bias"
"Political bias detection, which in the Western political landscape is about predicting left-center-right bias"
"Recent advances in using different information sources and modalities"
arXiv Detail & Related papers (2021-03-16T11:11:54Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights into coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.