How Can AI Recognize Pain and Express Empathy
- URL: http://arxiv.org/abs/2110.04249v1
- Date: Fri, 8 Oct 2021 16:58:57 GMT
- Title: How Can AI Recognize Pain and Express Empathy
- Authors: Siqi Cao, Di Fu, Xu Yang, Pablo Barros, Stefan Wermter, Xun Liu, Haiyan Wu
- Abstract summary: Sensory and emotional experiences such as pain and empathy are relevant to mental and physical health.
The current drive for automated pain recognition is motivated by a growing number of healthcare requirements.
We review the current developments for computational pain recognition and artificial empathy implementation.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sensory and emotional experiences such as pain and empathy are relevant to
mental and physical health. The current drive for automated pain recognition is
motivated by a growing number of healthcare requirements, and demands for social
interaction make it increasingly essential. Despite being a trending area, these
topics have not been explored in great detail. Over the past decades, behavioral
science and neuroscience have uncovered mechanisms that explain the
manifestations of pain. Recently, artificial intelligence research has also
made empathic machine learning methods approachable. Generally, the
purpose of this paper is to review the current developments for computational
pain recognition and artificial empathy implementation. Our discussion covers
the following topics: How can AI recognize pain from unimodality and
multimodality? Is it necessary for AI to be empathic? How can we create an AI
agent with proactive and reactive empathy? This article explores the challenges
and opportunities of real-world multimodal pain recognition from a
psychological, neuroscientific, and artificial intelligence perspective.
Finally, we identify possible future implementations of artificial empathy and
analyze how humans might benefit from an AI agent equipped with empathy.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users [1.4987128108059977]
We argue that different constellations of capabilities associated with empathy are important for different empathic AI applications.
We conclude by discussing why appreciation of the diverse capabilities under the empathy umbrella is important for both AI creators and users.
arXiv Detail & Related papers (2024-08-27T18:27:22Z)
- APTNESS: Incorporating Appraisal Theory and Emotion Support Strategies for Empathetic Response Generation [71.26755736617478]
Empathetic response generation is designed to comprehend the emotions of others.
We develop a framework that combines retrieval augmentation and emotional support strategy integration.
Our framework can enhance the empathy ability of LLMs from both cognitive and affective empathy perspectives.
arXiv Detail & Related papers (2024-07-23T02:23:37Z)
- Enablers and Barriers of Empathy in Software Developer and User Interaction: A Mixed Methods Case Study [11.260371501613994]
We studied how empathy is practised between developers and end users.
We identified the nature of awareness required to trigger empathy and enablers of empathy.
We discovered barriers to empathy and a set of potential strategies to overcome these barriers.
arXiv Detail & Related papers (2024-01-17T06:42:21Z)
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- Artificial Empathy Classification: A Survey of Deep Learning Techniques, Datasets, and Evaluation Scales [0.0]
This paper aims to investigate and evaluate existing works for measuring and evaluating empathy, as well as the datasets that have been collected and used so far.
Our goal is to highlight and facilitate the use of state-of-the-art methods in the area of AE by comparing their performance.
arXiv Detail & Related papers (2023-09-04T16:02:59Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support [10.743204843534512]
We develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).
We show that our Human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.
We find a larger 38.88% increase in empathy within the subsample of peer supporters who self-identify as experiencing difficulty providing support.
arXiv Detail & Related papers (2022-03-28T23:37:08Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Modeling User Empathy Elicited by a Robot Storyteller [2.309914459672557]
We present the first approach to modeling user empathy elicited during interactions with a robotic agent.
We conducted experiments with 8 classical machine learning models and 2 deep learning models to detect empathy.
Our highest-performing approach, based on XGBoost, achieved an accuracy of 69% and AUC of 72% when detecting empathy in videos.
arXiv Detail & Related papers (2021-07-29T21:56:19Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
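Several of the papers above report detection performance as accuracy and AUC (e.g., 69% accuracy and 72% AUC for the XGBoost-based empathy detector). As a minimal sketch of what those two metrics mean, the plain-Python functions below compute them from labels and classifier scores; the labels and scores shown are invented for illustration and do not come from any of the listed papers.

```python
def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def roc_auc(y_true, scores):
    """Probability that a randomly chosen positive example receives a
    higher score than a randomly chosen negative one (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-video empathy labels and classifier scores.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.7, 0.3, 0.2, 0.6, 0.8, 0.1]
y_pred = [int(s >= 0.5) for s in scores]  # threshold at 0.5

print(accuracy(y_true, y_pred))  # 0.75
print(roc_auc(y_true, scores))   # 0.875
```

Note that AUC is threshold-free (it ranks scores), while accuracy depends on the chosen decision threshold, which is why papers often report both.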
This list is automatically generated from the titles and abstracts of the papers in this site.