Coral: An Approach for Conversational Agents in Mental Health
Applications
- URL: http://arxiv.org/abs/2111.08545v1
- Date: Tue, 16 Nov 2021 15:15:58 GMT
- Title: Coral: An Approach for Conversational Agents in Mental Health
Applications
- Authors: Harsh Sakhrani, Saloni Parekh, Shubham Mahajan
- Abstract summary: We present an approach for creating a generative empathetic open-domain chatbot that can be used for mental health applications.
We leverage large scale pre-training and empathetic conversational data to make the responses more empathetic in nature.
Our models achieve state-of-the-art results on the Empathetic Dialogues test set.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: It may be difficult for some individuals to open up and share their thoughts
and feelings in front of a mental health expert. For those who are more at ease
with a virtual agent, conversational agents can serve as an intermediate step
in the right direction. The conversational agent must therefore be empathetic
and able to conduct free-flowing conversations. To this effect, we present an
approach for creating a generative empathetic open-domain chatbot that can be
used for mental health applications. We leverage large scale pre-training and
empathetic conversational data to make the responses more empathetic in nature
and a multi-turn dialogue arrangement to maintain context. Our models achieve
state-of-the-art results on the Empathetic Dialogues test set.
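As a rough, unofficial sketch of the multi-turn dialogue arrangement described above (not the authors' released code), the snippet below concatenates the most recent utterances into a single context string before generating a reply, using a publicly available pretrained conversational model as a stand-in. The model name, separator convention, and turn window are assumptions for illustration only, not the paper's settings.

```python
# Illustrative sketch only: multi-turn context handling with a pretrained
# conversational model. Model choice and separator are assumptions, not Coral's.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/blenderbot-400M-distill"  # assumed stand-in for the paper's pretrained model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def respond(history, max_turns=4):
    """Concatenate the last `max_turns` utterances so the model sees multi-turn context."""
    context = "</s> <s>".join(history[-max_turns:])  # separator convention is an assumption
    inputs = tokenizer(context, return_tensors="pt", truncation=True)
    reply_ids = model.generate(**inputs, max_new_tokens=60)
    return tokenizer.decode(reply_ids[0], skip_special_tokens=True)

# Example: a single user turn followed by a generated response.
history = ["I lost my job last week and I can't stop worrying about it."]
history.append(respond(history))
print(history[-1])
```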
Related papers
- GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment [72.96949760114575]
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA).
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between the parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments, Overcooked (a multiplayer game) and VirtualHome (a household simulator).
arXiv Detail & Related papers (2024-03-17T03:52:52Z) - Response-act Guided Reinforced Dialogue Generation for Mental Health
Counseling [25.524804770124145]
We present READER, a dialogue-act guided response generator for mental health counseling conversations.
READER is built on a transformer to jointly predict a potential dialogue-act d(t+1) for the next utterance (aka the response-act) and to generate an appropriate response u(t+1).
We evaluate READER on HOPE, a benchmark counseling conversation dataset.
arXiv Detail & Related papers (2023-01-30T08:53:35Z) - ProsocialDialog: A Prosocial Backbone for Conversational Agents [104.92776607564583]
We introduce ProsocialDialog, the first large-scale dialogue dataset to teach conversational agents to respond to problematic content following social norms.
Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K Rules-of-Thumb (RoTs), and 497K dialogue safety labels.
With this dataset, we introduce a dialogue safety detection module, Canary, capable of generating RoTs given conversational context, and a socially-informed dialogue agent, Prost.
arXiv Detail & Related papers (2022-05-25T11:48:47Z) - EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a pretrained transformer language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z) - Towards Emotion-Aware Agents For Negotiation Dialogues [2.1454205511807234]
Negotiation is a complex social interaction that encapsulates emotional encounters in human decision-making.
Virtual agents that can negotiate with humans are useful in pedagogy and conversational AI.
We analyze the extent to which emotion attributes extracted from the negotiation help in the prediction.
arXiv Detail & Related papers (2021-07-28T04:42:36Z) - Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z) - Towards Socially Intelligent Agents with Mental State Transition and
Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z) - Towards Facilitating Empathic Conversations in Online Mental Health
Support: A Reinforcement Learning Approach [10.19931220479239]
Psychologists have repeatedly demonstrated that empathy is a key component leading to positive outcomes in supportive conversations.
Recent studies have shown that highly empathic conversations are rare in online mental health platforms.
We introduce a new task of empathic rewriting which aims to transform low-empathy conversational posts to higher empathy.
arXiv Detail & Related papers (2021-01-19T16:37:58Z) - A Taxonomy of Empathetic Response Intents in Human Social Conversations [1.52292571922932]
Open-domain conversational agents are becoming increasingly popular in the natural language processing community.
One of the challenges is enabling them to converse in an empathetic manner.
Current neural response generation methods rely solely on end-to-end learning from large scale conversation data to generate dialogues.
Recent work has shown the promise of combining dialogue act/intent modelling and neural response generation.
arXiv Detail & Related papers (2020-12-07T21:56:45Z) - Will I Sound Like Me? Improving Persona Consistency in Dialogues through
Pragmatic Self-Consciousness [62.55060760615656]
Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, can enforce dialogue agents to refrain from uttering contradiction.
arXiv Detail & Related papers (2020-04-13T08:16:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all generated summaries) and is not responsible for any consequences arising from its use.