The Impact of a Chatbot's Ephemerality-Framing on Self-Disclosure Perceptions
- URL: http://arxiv.org/abs/2505.20464v1
- Date: Mon, 26 May 2025 19:00:49 GMT
- Title: The Impact of a Chatbot's Ephemerality-Framing on Self-Disclosure Perceptions
- Authors: Samuel Rhys Cox, Rune Møberg Jacobsen, Niels van Berkel
- Abstract summary: We investigated how a chatbot's description of its relationship with users affects self-disclosure. We compared a Familiar chatbot, which presented itself as remembering past interactions, with a Stranger chatbot, which presented itself as an unacquainted entity in each conversation. When Emotional-disclosure was sought in the first chatting session, Stranger-condition participants felt more comfortable self-disclosing. But when Factual-disclosure was sought first, these differences were replaced by more enjoyment among Familiar-condition participants.
- Score: 17.836384420199316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-disclosure, the sharing of one's thoughts and feelings, is affected by the perceived relationship between individuals. While chatbots are increasingly used for self-disclosure, the impact of a chatbot's framing on users' self-disclosure remains under-explored. We investigated how a chatbot's description of its relationship with users, particularly in terms of ephemerality, affects self-disclosure. Specifically, we compared a Familiar chatbot, presenting itself as a companion remembering past interactions, with a Stranger chatbot, presenting itself as a new, unacquainted entity in each conversation. In a mixed factorial design, participants engaged with either the Familiar or Stranger chatbot in two sessions across two days, with one conversation focusing on Emotional- and another Factual-disclosure. When Emotional-disclosure was sought in the first chatting session, Stranger-condition participants felt more comfortable self-disclosing. However, when Factual-disclosure was sought first, these differences were replaced by more enjoyment among Familiar-condition participants. Qualitative findings showed Stranger afforded anonymity and reduced judgement, whereas Familiar sometimes felt intrusive unless rapport was built via low-risk Factual-disclosure.
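The mixed factorial design described in the abstract (framing as a between-subjects factor: Familiar vs. Stranger; disclosure topic as a within-subjects factor: Emotional vs. Factual, with session order varied) can be illustrated as a participant-assignment routine. This is a minimal sketch under the assumption of balanced, counterbalanced assignment; it is not the authors' code, and all function and field names are hypothetical.

```python
import random

def assign_participants(n, seed=0):
    """Assign n participants to a 2 (framing, between-subjects) x 2
    (disclosure topic, within-subjects) mixed factorial design,
    counterbalancing which topic comes first."""
    rng = random.Random(seed)
    framings = ["Familiar", "Stranger"]           # between-subjects factor
    orders = [("Emotional", "Factual"),           # within-subjects factor,
              ("Factual", "Emotional")]           # session order counterbalanced
    # Enumerate the four design cells, then cycle through them so that
    # cell sizes stay balanced; shuffle so the sequence is random.
    cells = [(f, o) for f in framings for o in orders]
    plan = []
    for i in range(n):
        framing, order = cells[i % len(cells)]
        plan.append({"id": i, "framing": framing,
                     "session1": order[0], "session2": order[1]})
    rng.shuffle(plan)
    return plan

plan = assign_participants(40)
```

With 40 participants this yields 20 per framing condition and 20 per topic order, so framing effects can be compared between groups while each participant still contributes both an Emotional- and a Factual-disclosure session.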
Related papers
- Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health [5.3052849646510225]
Large language model (LLM)-enabled conversational agents for emotional support are increasingly being used by individuals. Little empirical research measures users' privacy and security concerns, attitudes, and expectations. We identify critical misconceptions and a general lack of risk awareness. We propose recommendations to safeguard user mental health disclosures.
arXiv Detail & Related papers (2025-07-14T18:10:21Z)
- Investigating Affective Use and Emotional Well-being on ChatGPT [32.797983866308755]
We investigate the extent to which interactions with ChatGPT may impact users' emotional well-being, behaviors, and experiences. We analyze over 3 million conversations for affective cues and survey over 4,000 users on their perceptions of ChatGPT. We conduct an Institutional Review Board (IRB)-approved randomized controlled trial (RCT) with close to 1,000 participants over 28 days.
arXiv Detail & Related papers (2025-04-04T19:22:10Z)
- Empathetic Response in Audio-Visual Conversations Using Emotion Preference Optimization and MambaCompressor [44.499778745131046]
Our study introduces a dual approach: first, we employ Emotional Preference Optimization (EPO) to train chatbots. This training enables the model to discern fine distinctions between correct and counter-emotional responses. Second, we introduce MambaCompressor to effectively compress and manage extensive conversation histories. Our comprehensive experiments across multiple datasets demonstrate that our model significantly outperforms existing models in generating empathetic responses and managing lengthy dialogues.
arXiv Detail & Related papers (2024-12-23T13:44:51Z)
- The Illusion of Empathy: How AI Chatbots Shape Conversation Perception [10.061399479158903]
We found that GPT-based chatbots were perceived as less empathetic than human conversational partners. Our findings underscore the critical role of perceived empathy in shaping conversation quality.
arXiv Detail & Related papers (2024-11-19T21:47:08Z)
- Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews [23.443181324643017]
This study examines the impact of AI on human false memories.
It explores false memory induction through suggestive questioning in Human-AI interactions, simulating crime witness interviews.
arXiv Detail & Related papers (2024-08-08T04:55:03Z)
- Measuring and Controlling Instruction (In)Stability in Language Model Dialogs [72.38330196290119]
System-prompting is a tool for customizing language-model chatbots, enabling them to follow a specific instruction.
We propose a benchmark to test this assumption, evaluating instruction stability via self-chats.
We reveal significant instruction drift within eight rounds of conversation.
We propose a lightweight method called split-softmax, which compares favorably against two strong baselines.
arXiv Detail & Related papers (2024-02-13T20:10:29Z)
- Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z)
- EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a transformer pretrained language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically across multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were fine-tuned by deep reinforcement learning.
To respond empathetically, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by considering future changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Discovering Chatbot's Self-Disclosure's Impact on User Trust, Affinity, and Recommendation Effectiveness [39.240553429989674]
We designed a social bot with three self-disclosure levels that conducted small talks and provided relevant recommendations to people.
372 MTurk participants were randomized to one of four groups with different self-disclosure levels to converse with the bot on two topics: movies and COVID-19.
arXiv Detail & Related papers (2021-06-03T08:16:25Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.