Generative Confidants: How do People Experience Trust in Emotional Support from Generative AI?
- URL: http://arxiv.org/abs/2601.16656v1
- Date: Fri, 23 Jan 2026 11:19:41 GMT
- Title: Generative Confidants: How do People Experience Trust in Emotional Support from Generative AI?
- Authors: Riccardo Volpato, Simone Stumpf, Lisa DeBruine
- Abstract summary: People are increasingly turning to generative AI (e.g., ChatGPT, Gemini, Copilot) for emotional support and companionship. While trust is likely to play a central role in enabling these informal and unsupervised interactions, we still lack an understanding of how people develop and experience it in this context. We conducted a qualitative study consisting of diary entries about interactions, transcripts of chats with AI, and in-depth interviews. Our results suggest important novel drivers of trust in this context: familiarity emerging from personalisation, nuanced mental models of generative AI, and awareness of people's control over conversations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People are increasingly turning to generative AI (e.g., ChatGPT, Gemini, Copilot) for emotional support and companionship. While trust is likely to play a central role in enabling these informal and unsupervised interactions, we still lack an understanding of how people develop and experience it in this context. Seeking to fill this gap, we recruited 24 frequent users of generative AI for emotional support and conducted a qualitative study consisting of diary entries about interactions, transcripts of chats with AI, and in-depth interviews. Our results suggest important novel drivers of trust in this context: familiarity emerging from personalisation, nuanced mental models of generative AI, and awareness of people's control over conversations. Notably, generative AI's homogeneous use of personalised, positive, and persuasive language appears to promote some of these trust-building factors. However, this also seems to discourage other trust-related behaviours, such as remembering that generative AI is a machine trained to converse in human language. We present implications for future research that are likely to become critical as the use of generative AI for emotional support increasingly overlaps with therapeutic work.
Related papers
- AI-exhibited Personality Traits Can Shape Human Self-concept through Conversations [26.039188521284974]
We show how AI personality traits can shape users' self-concepts through human-AI conversation. We provide important design implications for developing more responsible and ethical AI systems.
arXiv Detail & Related papers (2026-01-19T05:16:57Z) - Psychological Factors Influencing University Students Trust in AI-Based Learning Assistants [0.0]
This paper adopts a psychology-oriented perspective to examine how university students form trust in AI-based learning assistants. We propose a conceptual framework that organizes psychological predictors of trust into four groups: cognitive appraisals, affective reactions, social-relational factors, and contextual moderators. The paper highlights that trust in AI is a psychological process shaped by individual differences and learning environments.
arXiv Detail & Related papers (2025-12-19T09:44:43Z) - Human Bias in the Face of AI: Examining Human Judgment Against Text Labeled as AI Generated [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content. We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust [16.140485357046707]
We developed and validated a 27-item set of semantic differential scales for affective and cognitive trust.
Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents.
arXiv Detail & Related papers (2024-07-25T18:55:33Z) - Distributed agency in second language learning and teaching through generative AI [0.0]
ChatGPT can provide informal second language practice through chats in written or voice forms.
Instructors can use AI to build learning and assessment materials in a variety of media.
arXiv Detail & Related papers (2024-03-29T14:55:40Z) - Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Bootstrapping Developmental AIs: From Simple Competences to Intelligent Human-Compatible AIs [0.0]
The mainstream AI approaches are generative deep learning with large language models (LLMs) and the manually constructed symbolic approach.
This position paper lays out the prospects, gaps, and challenges for extending the practice of developmental AIs to create resilient, intelligent, and human-compatible AIs.
arXiv Detail & Related papers (2023-08-08T21:14:21Z) - Public Perception of Generative AI on Twitter: An Empirical Study Based on Occupation and Usage [7.18819534653348]
This paper investigates users' perceptions of generative AI using 3M posts on Twitter from January 2019 to March 2023.
We find that people across various occupations, not just IT-related ones, show a strong interest in generative AI.
After the release of ChatGPT, people's interest in AI in general has increased dramatically.
arXiv Detail & Related papers (2023-05-16T15:30:12Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.