Empathetic Conversational Systems: A Review of Current Advances, Gaps,
and Opportunities
- URL: http://arxiv.org/abs/2206.05017v2
- Date: Wed, 9 Nov 2022 00:02:58 GMT
- Title: Empathetic Conversational Systems: A Review of Current Advances, Gaps,
and Opportunities
- Authors: Aravind Sesagiri Raamkumar, Yinping Yang
- Abstract summary: A growing number of studies have recognized the benefits of empathy and started to incorporate empathy in conversational systems.
This paper examines this rapidly growing field using five review dimensions.
- Score: 2.741266294612776
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Empathy is a vital factor that contributes to mutual understanding and joint
problem-solving. In recent years, a growing number of studies have recognized
the benefits of empathy and started to incorporate empathy in conversational
systems. We refer to this topic as empathetic conversational systems. To
identify the critical gaps and future opportunities in this topic, this paper
examines this rapidly growing field using five review dimensions: (i)
conceptual empathy models and frameworks, (ii) adopted empathy-related
concepts, (iii) datasets and algorithmic techniques developed, (iv) evaluation
strategies, and (v) state-of-the-art approaches. The findings show that most
studies have centered on the use of the EMPATHETICDIALOGUES dataset, and the
text-based modality dominates research in this field. Studies mainly focused on
extracting features from the messages of the users and the conversational
systems, with minimal emphasis on user modeling and profiling. Notably, studies
that have incorporated emotion causes, external knowledge, and affect matching
in their response generation models have obtained significantly better results.
For implementation in diverse real-world settings, we recommend that future
studies address key gaps in areas of detecting and authenticating
emotions at the entity level, handling multimodal inputs, displaying more
nuanced empathetic behaviors, and encompassing additional dialogue system
features.
Related papers
- Empathy Detection from Text, Audiovisual, Audio or Physiological Signals: Task Formulations and Machine Learning Methods [5.7306786636466995]
Detecting empathy has potential applications in society, healthcare and education.
Although empathy is a broad and overlapping topic, empathy detection leveraging Machine Learning remains underexplored.
We discuss challenges, research gaps and potential applications in the Affective Computing-based empathy domain.
arXiv Detail & Related papers (2023-10-30T08:34:12Z)
- Automatic Sensor-free Affect Detection: A Systematic Literature Review [0.0]
This paper provides a comprehensive literature review on sensor-free affect detection.
Despite the field's evident maturity, demonstrated by the consistent performance of the models, there is ample scope for future research.
There is also a need to refine model development practices and methods.
arXiv Detail & Related papers (2023-10-11T13:24:27Z)
- Re-mine, Learn and Reason: Exploring the Cross-modal Semantic
Correlations for Language-guided HOI detection [57.13665112065285]
Human-Object Interaction (HOI) detection is a challenging computer vision task.
We present a framework that enhances HOI detection by incorporating structured text knowledge.
arXiv Detail & Related papers (2023-07-25T14:20:52Z)
- Expanding the Role of Affective Phenomena in Multimodal Interaction
Research [57.069159905961214]
We examined over 16,000 papers from selected conferences in multimodal interaction, affective computing, and natural language processing.
We identify 910 affect-related papers and present our analysis of the role of affective phenomena in these papers.
We find limited research on how affect and emotion predictions might be used by AI systems to enhance machine understanding of human social behaviors and cognitive states.
arXiv Detail & Related papers (2023-05-18T09:08:39Z)
- Social Influence Dialogue Systems: A Scoping Survey of the Efforts
Towards Influence Capabilities of Dialogue Systems [50.57882213439553]
Social influence dialogue systems are capable of persuasion, negotiation, and therapy.
There exists no formal definition or category for dialogue systems with these skills.
This study serves as a comprehensive reference for social influence dialogue systems to inspire more dedicated research and discussion in this emerging area.
arXiv Detail & Related papers (2022-10-11T17:57:23Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A
Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue is speaking activity, the most common computational method is support vector machines, and the most frequent interaction environment is a meeting of 3-4 persons sensed with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought about by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units
and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel holistic, multi-task framework is presented that jointly learns affect recognition tasks and generalizes effectively.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Temporal aggregation of audio-visual modalities for emotion recognition [0.5352699766206808]
We propose a multimodal fusion technique for emotion recognition based on combining audio-visual modalities from a temporal window with different temporal offsets for each modality.
Our proposed method outperforms other methods from the literature as well as human accuracy ratings.
arXiv Detail & Related papers (2020-07-08T18:44:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.