Longitudinal Study on Social and Emotional Use of AI Conversational Agent
- URL: http://arxiv.org/abs/2504.14112v1
- Date: Sat, 19 Apr 2025 00:03:48 GMT
- Title: Longitudinal Study on Social and Emotional Use of AI Conversational Agent
- Authors: Mohit Chandra, Javier Hernandez, Gonzalo Ramos, Mahsa Ershadi, Ananya Bhattacharjee, Judith Amores, Ebele Okoli, Ann Paradiso, Shahed Warreth, Jina Suh
- Abstract summary: We studied the impact of four commercially available AI tools on users' perceived attachment towards AI and AI empathy. Our findings underscore the importance of developing consumer-facing AI tools that support emotional well-being responsibly.
- Score: 12.951074799361994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developments in digital technologies have continuously reshaped how individuals seek and receive social and emotional support. While online platforms and communities have long served this need, the increased integration of general-purpose conversational AI into daily lives has introduced new dynamics in how support is provided and experienced. Existing research has highlighted both benefits (e.g., wider access to well-being resources) and potential risks (e.g., over-reliance) of using AI for support seeking. In this five-week, exploratory study, we recruited 149 participants divided into two usage groups: a baseline usage group (BU, n=60) that used the internet and AI as usual, and an active usage group (AU, n=89) encouraged to use one of four commercially available AI tools (Microsoft Copilot, Google Gemini, PI AI, ChatGPT) for social and emotional interactions. Our analysis revealed significant increases in perceived attachment towards AI (32.99 percentage points), perceived AI empathy (25.8 p.p.), and motivation to use AI for entertainment (22.90 p.p.) among the AU group. We also observed that individual differences (e.g., gender identity, prior AI usage) influenced perceptions of AI empathy and attachment. Lastly, the AU group expressed higher comfort in seeking personal help, managing stress, obtaining social support, and talking about health with AI, indicating potential for broader emotional support while highlighting the need for safeguards against problematic usage. Overall, our exploratory findings underscore the importance of developing consumer-facing AI tools that support emotional well-being responsibly, while empowering users to understand the limitations of these tools.
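As a point of reference for the percentage-point (p.p.) figures above, the sketch below shows one way a pre/post change in a survey measure could be computed on a 0-100 scale for a usage group. It is a minimal, hypothetical illustration: the Likert scale bounds, function names, and example scores are assumptions for demonstration, not the study's actual instruments, data, or analysis code.

```python
# Hypothetical sketch of a percentage-point (p.p.) change calculation.
# Scale bounds and example values are illustrative assumptions only.

def to_percent(score, scale_min=1, scale_max=7):
    """Normalize a Likert-style score to a 0-100 scale."""
    return 100 * (score - scale_min) / (scale_max - scale_min)

def pp_change(pre_scores, post_scores):
    """Mean percentage-point change from pre-study to post-study."""
    pre = sum(map(to_percent, pre_scores)) / len(pre_scores)
    post = sum(map(to_percent, post_scores)) / len(post_scores)
    return post - pre

# Illustrative values only (not actual participant data).
au_pre = [3.0, 2.5, 4.0]    # e.g., perceived attachment ratings at week 1
au_post = [5.0, 4.5, 5.5]   # e.g., perceived attachment ratings at week 5
print(f"AU group change: {pp_change(au_pre, au_post):.2f} p.p.")
```

A difference computed this way is expressed in percentage points rather than as a relative percentage change, which matches how the increases above are reported.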
Related papers
- Linguistic Comparison of AI- and Human-Written Responses to Online Mental Health Queries [17.133059430285627]
Generative AI and large language models (LLMs) have introduced new possibilities for scalable, around-the-clock mental health assistance.
In this study, we harnessed 24,114 posts and 138,758 online community (OC) responses from 55 online support communities on Reddit.
Our findings revealed that AI responses are more verbose, readable, and analytically structured, but lack the linguistic diversity and personal narratives inherent in human-human interactions.
arXiv Detail & Related papers (2025-04-12T16:20:02Z)
- Envisioning an AI-Enhanced Mental Health Ecosystem [1.534667887016089]
We explore various AI applications in peer support, self-help interventions, proactive monitoring, and data-driven insights. We propose a hybrid ecosystem where AI assists but does not replace human providers, emphasising responsible deployment and evaluation.
arXiv Detail & Related papers (2025-03-19T04:21:38Z) - How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z) - Public Perception of Generative AI on Twitter: An Empirical Study Based
on Occupation and Usage [7.18819534653348]
This paper investigates users' perceptions of generative AI using 3M posts on Twitter from January 2019 to March 2023.
We find that people across various occupations, not just IT-related ones, show a strong interest in generative AI.
After the release of ChatGPT, people's interest in AI in general has increased dramatically.
arXiv Detail & Related papers (2023-05-16T15:30:12Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Human-AI Collaboration Enables More Empathic Conversations in Text-based
Peer-to-Peer Mental Health Support [10.743204843534512]
We develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).
We show that our Human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.
We find a larger 38.88% increase in empathy within the subsample of peer supporters who self-identify as experiencing difficulty providing support.
arXiv Detail & Related papers (2022-03-28T23:37:08Z) - AI agents for facilitating social interactions and wellbeing [0.0]
We provide an overview of the mediative role of AI-augmented agents for social interactions.
We discuss opportunities and challenges of the relational approach with wellbeing AI to promote wellbeing in our societies.
arXiv Detail & Related papers (2022-02-26T04:05:23Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)