How Can AI Recognize Pain and Express Empathy
- URL: http://arxiv.org/abs/2110.04249v1
- Date: Fri, 8 Oct 2021 16:58:57 GMT
- Title: How Can AI Recognize Pain and Express Empathy
- Authors: Siqi Cao, Di Fu, Xu Yang, Pablo Barros, Stefan Wermter, Xun Liu,
Haiyan Wu
- Abstract summary: Sensory and emotional experiences such as pain and empathy are relevant to mental and physical health.
The current drive for automated pain recognition is motivated by a growing number of healthcare requirements.
We review the current developments for computational pain recognition and artificial empathy implementation.
- Score: 18.71528144336154
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sensory and emotional experiences such as pain and empathy are relevant to
mental and physical health. The current drive for automated pain recognition is
motivated by a growing number of healthcare requirements, and rising demands for
social interaction make it increasingly essential. Despite being a trending area,
these topics have not been explored in great detail. Over the past decades, behavioral
science and neuroscience have uncovered mechanisms that explain the
manifestations of pain. Recently, artificial intelligence research has also
made empathic machine learning methods approachable. Generally, the
purpose of this paper is to review the current developments for computational
pain recognition and artificial empathy implementation. Our discussion covers
the following topics: How can AI recognize pain from unimodality and
multimodality? Is it necessary for AI to be empathic? How can we create an AI
agent with proactive and reactive empathy? This article explores the challenges
and opportunities of real-world multimodal pain recognition from a
psychological, neuroscientific, and artificial intelligence perspective.
Finally, we identify possible future implementations of artificial empathy and
analyze how humans might benefit from an AI agent equipped with empathy.
Related papers
- Enablers and Barriers of Empathy in Software Developer and User Interaction: A Mixed Methods Case Study [11.260371501613994]
We studied how empathy is practised between developers and end users.
We identified the nature of awareness required to trigger empathy and enablers of empathy.
We discovered barriers to empathy and a set of potential strategies to overcome these barriers.
arXiv Detail & Related papers (2024-01-17T06:42:21Z)
- What should I say? -- Interacting with AI and Natural Language Interfaces [0.0]
The Human-AI Interaction (HAI) sub-field has emerged from the Human-Computer Interaction (HCI) field and aims to examine this very notion.
Prior research suggests that theory of mind representations are crucial to successful and effortless communication, however very little is understood when it comes to how theory of mind representations are established when interacting with AI.
arXiv Detail & Related papers (2024-01-12T05:10:23Z)
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- Artificial Empathy Classification: A Survey of Deep Learning Techniques, Datasets, and Evaluation Scales [0.0]
This paper aims to investigate and evaluate existing works for measuring and evaluating empathy, as well as the datasets that have been collected and used so far.
Our goal is to highlight and facilitate the use of state-of-the-art methods in the area of AE by comparing their performance.
arXiv Detail & Related papers (2023-09-04T16:02:59Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support [10.743204843534512]
We develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).
We show that our Human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.
We find a larger 38.88% increase in empathy within the subsample of peer supporters who self-identify as experiencing difficulty providing support.
arXiv Detail & Related papers (2022-03-28T23:37:08Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Modeling User Empathy Elicited by a Robot Storyteller [2.309914459672557]
We present the first approach to modeling user empathy elicited during interactions with a robotic agent.
We conducted experiments with 8 classical machine learning models and 2 deep learning models to detect empathy.
Our highest-performing approach, based on XGBoost, achieved an accuracy of 69% and AUC of 72% when detecting empathy in videos.
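The entry above reports that a boosted-tree model (XGBoost) performed best at detecting user empathy. As a minimal illustration of the boosting idea behind such classifiers, here is a toy AdaBoost-style ensemble of decision stumps in pure Python; the "empathy features" and labels are synthetic placeholders, not the authors' dataset, and XGBoost itself uses gradient-boosted trees rather than this simplified scheme.

```python
# Toy sketch of boosting: weighted decision stumps (AdaBoost-style).
# Synthetic stand-in for the gradient-boosted classifier in the paper.
import math

def stump_predict(x, feature, threshold, polarity):
    """Predict +1/-1 from a single thresholded feature."""
    return polarity if x[feature] >= threshold else -polarity

def train_boosted_stumps(X, y, n_rounds=5):
    """Fit a small ensemble of weighted decision stumps."""
    n = len(X)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # Exhaustive search for the stump with lowest weighted error.
        for f in range(len(X[0])):
            for t in sorted({row[f] for row in X}):
                for pol in (1, -1):
                    err = sum(w for w, row, label in zip(weights, X, y)
                              if stump_predict(row, f, t, pol) != label)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        if err >= 0.5:       # no stump beats chance; stop early
            break
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        # Re-weight: up-weight the examples this stump got wrong.
        weights = [w * math.exp(-alpha * label * stump_predict(row, f, t, pol))
                   for w, row, label in zip(weights, X, y)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote over all stumps."""
    score = sum(alpha * stump_predict(x, f, t, pol)
                for alpha, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Hypothetical per-video features (e.g. gaze, lean-in), labels +1 = empathic.
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3], [0.7, 0.2], [0.3, 0.8]]
y = [1, 1, -1, -1, 1, -1]
model = train_boosted_stumps(X, y)
accuracy = sum(predict(model, x) == label for x, label in zip(X, y)) / len(X)
```

In the paper's setting, the feature vectors would come from video-derived behavioral signals and the reported metrics (69% accuracy, 72% AUC) from held-out evaluation, not training accuracy as in this toy.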
arXiv Detail & Related papers (2021-07-29T21:56:19Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.