Artificial Empathy: AI based Mental Health
- URL: http://arxiv.org/abs/2506.00081v1
- Date: Fri, 30 May 2025 02:36:56 GMT
- Title: Artificial Empathy: AI based Mental Health
- Authors: Aditya Naik, Jovi Thomas, Teja Sree, Himavant Reddy,
- Abstract summary: Many people suffer from mental health problems but not everyone seeks professional help or has access to mental health care.<n> AI chatbots have increasingly become a go-to for individuals who either have mental disorders or simply want someone to talk to.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many people suffer from mental health problems but not everyone seeks professional help or has access to mental health care. AI chatbots have increasingly become a go-to for individuals who either have mental disorders or simply want someone to talk to. This paper presents a study on participants who have previously used chatbots and a scenario-based testing of large language model (LLM) chatbots. Our findings indicate that AI chatbots were primarily utilized as a "Five minute therapist" or as a non-judgmental companion. Participants appreciated the anonymity and lack of judgment from chatbots. However, there were concerns about privacy and the security of sensitive information. The scenario-based testing of LLM chatbots highlighted additional issues. Some chatbots were consistently reassuring, used emojis and names to add a personal touch, and were quick to suggest seeking professional help. However, there were limitations such as inconsistent tone, occasional inappropriate responses (e.g., casual or romantic), and a lack of crisis sensitivity, particularly in recognizing red flag language and escalating responses appropriately. These findings can inform both the technology and mental health care industries on how to better utilize AI chatbots to support individuals during challenging emotional periods.
Related papers
- Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health [5.3052849646510225]
Large language model (LLM)-enabled conversational agents for emotional support are increasingly being used by individuals.<n>Little empirical research measures users' privacy and security concerns, attitudes, and expectations.<n>We identify critical misconceptions and a general lack of risk awareness.<n>We propose recommendations to safeguard user mental health disclosures.
arXiv Detail & Related papers (2025-07-14T18:10:21Z) - Empathetic Response in Audio-Visual Conversations Using Emotion Preference Optimization and MambaCompressor [44.499778745131046]
Our study introduces a dual approach: firstly, we employ Emotional Preference Optimization (EPO) to train chatbots.<n>This training enables the model to discern fine distinctions between correct and counter-emotional responses.<n> Secondly, we introduce MambaCompressor to effectively compress and manage extensive conversation histories.<n>Our comprehensive experiments across multiple datasets demonstrate that our model significantly outperforms existing models in generating empathetic responses and managing lengthy dialogues.
arXiv Detail & Related papers (2024-12-23T13:44:51Z) - The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support [32.60242402941811]
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools.<n>This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles.<n>We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
arXiv Detail & Related papers (2024-01-25T18:08:53Z) - Evaluating Chatbots to Promote Users' Trust -- Practices and Open
Problems [11.427175278545517]
This paper reviews current practices for testing chatbots.
It identifies gaps as open problems in pursuit of user trust.
It outlines a path forward to mitigate issues of trust related to service or product performance, user satisfaction and long-term unintended consequences for society.
arXiv Detail & Related papers (2023-09-09T22:40:30Z) - Chatbots put to the test in math and logic problems: A preliminary
comparison and assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard [68.8204255655161]
We use 30 questions that are clear, without any ambiguities, fully described with plain text only, and have a unique, well defined correct answer.
The answers are recorded and discussed, highlighting their strengths and weaknesses.
It was found that ChatGPT-4 outperforms ChatGPT-3.5 in both sets of questions.
arXiv Detail & Related papers (2023-05-30T11:18:05Z) - Mental Health Assessment for the Chatbots [39.081479891611664]
We argue that it should have a healthy mental tendency in order to avoid the negative psychological impact on them.
We establish several mental health assessment dimensions for chatbots and introduce the questionnaire-based mental health assessment methods.
arXiv Detail & Related papers (2022-01-14T10:38:59Z) - A Deep Learning Approach to Integrate Human-Level Understanding in a
Chatbot [0.4632366780742501]
Unlike humans, chatbots can serve multiple customers at a time, are available 24/7 and reply in less than a fraction of a second.
We performed sentiment analysis, emotion detection, intent classification and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence.
arXiv Detail & Related papers (2021-12-31T22:26:41Z) - EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a transformer pretrained language model (T5)
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z) - CheerBots: Chatbots toward Empathy and Emotionusing Reinforcement
Learning [60.348822346249854]
This study presents a framework whereby several empathetic chatbots are based on understanding users' implied feelings and replying empathetically for multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond in an empathetic way, we develop a simulating agent, a Conceptual Human Model, as aids for CheerBots in training with considerations on changes in user's emotional states in the future to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework included a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluate the guiding robot with four different metrics to demonstrated flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.