In Generative AI We (Dis)Trust? Computational Analysis of Trust and Distrust in Reddit Discussions
- URL: http://arxiv.org/abs/2510.16173v1
- Date: Fri, 17 Oct 2025 19:33:57 GMT
- Title: In Generative AI We (Dis)Trust? Computational Analysis of Trust and Distrust in Reddit Discussions
- Authors: Aria Pessianzadeh, Naima Sultana, Hildegarde Van den Bulck, David Gefen, Shahin Jabari, Rezvaneh Rezapour
- Abstract summary: This paper presents the first computational study of Trust and Distrust in GenAI. Crowd-sourced annotations of a representative sample were combined with classification models to scale analysis. We find that Trust and Distrust are nearly balanced over time, with shifts around major model releases.
- Score: 1.2991144814543598
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of generative AI (GenAI) has impacted many aspects of human life. As these systems become embedded in everyday practices, understanding public trust in them also becomes essential for responsible adoption and governance. Prior work on trust in AI has largely drawn from psychology and human-computer interaction, but there is a lack of computational, large-scale, and longitudinal approaches to measuring trust and distrust in GenAI and large language models (LLMs). This paper presents the first computational study of Trust and Distrust in GenAI, using a multi-year Reddit dataset (2022--2025) spanning 39 subreddits and 197,618 posts. Crowd-sourced annotations of a representative sample were combined with classification models to scale analysis. We find that Trust and Distrust are nearly balanced over time, with shifts around major model releases. Technical performance and usability dominate as dimensions, while personal experience is the most frequent reason shaping attitudes. Distinct patterns also emerge across trustors (e.g., experts, ethicists, general users). Our results provide a methodological framework for large-scale Trust analysis and insights into evolving public perceptions of GenAI.
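The scaling step the abstract describes (a small crowd-annotated sample used to train a classifier that labels the full corpus) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the toy posts, the Trust/Distrust label set, and the TF-IDF plus logistic regression model are all assumptions made for the example.

```python
# Sketch: train a text classifier on a small annotated sample,
# then use it to label the (much larger) unannotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowd-sourced annotations of Reddit-style posts.
annotated_posts = [
    ("GPT-4 nailed my code review, I rely on it daily", "trust"),
    ("The model hallucinated citations again, can't believe anything it says", "distrust"),
    ("Honestly the answers are accurate and save me hours", "trust"),
    ("It confidently made up an API that doesn't exist", "distrust"),
]
texts, labels = zip(*annotated_posts)

# A simple TF-IDF + logistic regression classifier fit on the sample.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Apply the fitted model to unannotated posts to scale the analysis.
unlabeled = ["I rely on it for accurate answers every day"]
predictions = clf.predict(unlabeled)
print(predictions[0])
```

In practice the annotated sample would need to be representative of the corpus (here, 39 subreddits over 2022-2025), and a stronger model would likely be used; the pipeline shape, however, is the same: annotate a sample, fit, predict at scale.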
Related papers
- Eliciting Trustworthiness Priors of Large Language Models via Economic Games [2.2940141855172036]
We propose a novel elicitation method based on iterated in-context learning. We find that GPT-4.1's trustworthiness priors closely track those observed in humans. We show that variation in elicited trustworthiness can be well predicted by a stereotype-based model.
arXiv Detail & Related papers (2026-01-31T15:23:03Z) - Revisiting Trust in the Era of Generative AI: Factorial Structure and Latent Profiles [5.109743403025609]
Trust is one of the most important factors shaping whether and how people adopt and rely on artificial intelligence (AI). Most existing studies measure trust in terms of functionality, focusing on whether a system is reliable, accurate, or easy to use. In this study, we introduce and validate the Human-AI Trust Scale (HAITS), a new measure designed to capture both the rational and relational aspects of trust in GenAI.
arXiv Detail & Related papers (2025-10-11T12:39:53Z) - Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios. Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z) - Human Trust in AI Search: A Large-Scale Experiment [0.07589017023705934]
Generative artificial intelligence (GenAI) can influence what we buy, how we vote, and our health. No prior work establishes the causal effect of generative search designs on human trust. We execute 12,000 search queries across seven countries, generating 80,000 real-time GenAI and traditional search results.
arXiv Detail & Related papers (2025-04-08T21:12:41Z) - GenAI vs. Human Fact-Checkers: Accurate Ratings, Flawed Rationales [2.3475022003300055]
GPT-4o, one of the most widely used AI models in consumer applications, outperforms other models, but all models exhibit only moderate agreement with human coders. We also assess the effectiveness of summarized versus full content inputs, finding that summarized content holds promise for improving efficiency without sacrificing accuracy.
arXiv Detail & Related papers (2025-02-20T17:47:40Z) - Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Trust in Human-AI Interaction: Scoping Out Models, Measures, and Methods [12.641141743223377]
Trust has emerged as a key factor in people's interactions with AI-infused systems.
Little is known about what models of trust have been used and for what systems.
There is yet no known standard approach to measuring trust in AI.
arXiv Detail & Related papers (2022-04-30T07:34:19Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people)
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.