Human-AI Collaboration Enables More Empathic Conversations in Text-based
Peer-to-Peer Mental Health Support
- URL: http://arxiv.org/abs/2203.15144v1
- Date: Mon, 28 Mar 2022 23:37:08 GMT
- Title: Human-AI Collaboration Enables More Empathic Conversations in Text-based
Peer-to-Peer Mental Health Support
- Authors: Ashish Sharma, Inna W. Lin, Adam S. Miner, David C. Atkins, Tim
Althoff
- Abstract summary: We develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).
We show that our Human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.
We find a larger 38.88% increase in empathy within the subsample of peer supporters who self-identify as experiencing difficulty providing support.
- Score: 10.743204843534512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in artificial intelligence (AI) are enabling systems that augment
and collaborate with humans to perform simple, mechanistic tasks like
scheduling meetings and grammar-checking text. However, such Human-AI
collaboration poses challenges for more complex, creative tasks, such as
carrying out empathic conversations, due to difficulties of AI systems in
understanding complex human emotions and the open-ended nature of these tasks.
Here, we focus on peer-to-peer mental health support, a setting in which
empathy is critical for success, and examine how AI can collaborate with humans
to facilitate peer empathy during textual, online supportive conversations. We
develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to
help participants who provide support (peer supporters) respond more
empathically to those seeking help (support seekers). We evaluate Hailey in a
non-clinical randomized controlled trial with real-world peer supporters on
TalkLife (N=300), a large online peer-to-peer support platform. We show that
our Human-AI collaboration approach leads to a 19.60% increase in
conversational empathy between peers overall. Furthermore, we find a larger
38.88% increase in empathy within the subsample of peer supporters who
self-identify as experiencing difficulty providing support. We systematically
analyze the Human-AI collaboration patterns and find that peer supporters are
able to use the AI feedback both directly and indirectly without becoming
overly reliant on AI while reporting improved self-efficacy post-feedback. Our
findings demonstrate the potential of feedback-driven, AI-in-the-loop writing
systems to empower humans in open-ended, social, creative tasks such as
empathic conversations.
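To make the feedback-driven, AI-in-the-loop workflow described above concrete, the sketch below shows one way a just-in-time feedback step could be structured: the system scores a peer supporter's draft reply for empathy and, when the score looks low, offers an optional suggestion that the supporter can accept, adapt, or ignore. This is an illustrative sketch only, not the paper's implementation; score_empathy, suggest_improvement, the 0-1 score scale, and the threshold are hypothetical placeholders for the trained models a system like Hailey would use.

```python
# Illustrative sketch of a just-in-time feedback loop for empathic replies.
# The scorer and suggestion generator are hypothetical stand-ins, not Hailey's models.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Feedback:
    """Feedback shown to a peer supporter while drafting a reply."""
    empathy_score: float                 # assumed scale: 0.0 (low) to 1.0 (high)
    suggested_insertion: Optional[str]   # optional sentence the supporter may add


def score_empathy(seeker_post: str, draft_reply: str) -> float:
    """Hypothetical empathy scorer; a trained classifier in a real system.

    Here a trivial keyword heuristic keeps the sketch runnable end to end.
    """
    cues = ("sounds", "feel", "understand", "here for you", "that must")
    hits = sum(cue in draft_reply.lower() for cue in cues)
    return min(1.0, hits / 3)


def suggest_improvement(seeker_post: str, draft_reply: str) -> Optional[str]:
    """Hypothetical suggestion generator; an LM-based rewriter in a real system."""
    if "feel" not in draft_reply.lower():
        return "It sounds like this has been really hard on you."
    return None


def just_in_time_feedback(seeker_post: str, draft_reply: str,
                          threshold: float = 0.6) -> Feedback:
    """Score the draft and, if empathy looks low, attach an optional suggestion.

    The supporter stays in control of the final text.
    """
    score = score_empathy(seeker_post, draft_reply)
    suggestion = None
    if score < threshold:
        suggestion = suggest_improvement(seeker_post, draft_reply)
    return Feedback(empathy_score=score, suggested_insertion=suggestion)


if __name__ == "__main__":
    post = "I failed my exams again and I don't know what to do."
    draft = "Don't worry, just study harder next time."
    fb = just_in_time_feedback(post, draft)
    print(f"empathy={fb.empathy_score:.2f}, suggestion={fb.suggested_insertion!r}")
```

Keeping the suggestion optional mirrors the paper's observation that supporters used the AI feedback both directly and indirectly without becoming overly reliant on it.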
Related papers
- APTNESS: Incorporating Appraisal Theory and Emotion Support Strategies for Empathetic Response Generation [71.26755736617478]
Empathetic response generation aims to comprehend the emotions of others and respond appropriately.
We develop a framework that combines retrieval augmentation and emotional support strategy integration.
Our framework can enhance the empathy ability of LLMs from both cognitive and affective empathy perspectives.
arXiv Detail & Related papers (2024-07-23T02:23:37Z)
- The Role of AI in Peer Support for Young People: A Study of Preferences for Human- and AI-Generated Responses [16.35125470386213]
As social media becomes young people's main method of peer support exchange, we need to understand when and how AI can facilitate and assist in such exchanges.
We asked 622 young people to complete an online survey and evaluate blinded human- and AI-generated responses to help-seeking messages.
We found that participants preferred the AI-generated responses for situations involving relationships, self-expression, and physical health.
We discuss the role of training in online peer support exchange and its implications for supporting young people's well-being.
arXiv Detail & Related papers (2024-05-04T16:53:19Z)
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z)
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z)
- Enhancing Human Capabilities through Symbiotic Artificial Intelligence with Shared Sensory Experiences [6.033393331015051]
We introduce a novel concept in Human-AI interaction called Symbiotic Artificial Intelligence with Shared Sensory Experiences (SAISSE)
SAISSE aims to establish a mutually beneficial relationship between AI systems and human users through shared sensory experiences.
We discuss the incorporation of memory storage units for long-term growth and development of both the AI system and its human user.
arXiv Detail & Related papers (2023-05-26T04:13:59Z)
- Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback [42.19685958922537]
We argue that human-AI collaboration should be interactive, with humans monitoring the work of AI agents and providing feedback that the agent can understand and utilize.
In this work, we explore these directions using the challenging task defined by the IGLU competition, an interactive grounded language understanding task in a Minecraft-like world.
arXiv Detail & Related papers (2023-04-21T05:37:59Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Towards Facilitating Empathic Conversations in Online Mental Health Support: A Reinforcement Learning Approach [10.19931220479239]
Psychologists have repeatedly demonstrated that empathy is a key component leading to positive outcomes in supportive conversations.
Recent studies have shown that highly empathic conversations are rare in online mental health platforms.
We introduce a new task of empathic rewriting, which aims to transform low-empathy conversational posts into higher-empathy ones.
arXiv Detail & Related papers (2021-01-19T16:37:58Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)