What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users
- URL: http://arxiv.org/abs/2408.15354v1
- Date: Tue, 27 Aug 2024 18:27:22 GMT
- Title: What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users
- Authors: Jana Schaich Borg, Hannah Read
- Abstract summary: We argue that different constellations of capabilities associated with empathy are important for different empathic AI applications.
We conclude by discussing why appreciation of the diverse capabilities under the empathy umbrella is important for both AI creators and users.
- Score: 1.4987128108059977
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Interest is growing in artificial empathy, but so is confusion about what artificial empathy is or needs to be. This confusion makes it challenging to navigate the technical and ethical issues that accompany empathic AI development. Here, we outline a framework for thinking about empathic AI based on the premise that different constellations of capabilities associated with empathy are important for different empathic AI applications. We describe distinctions of capabilities that we argue belong under the empathy umbrella, and show how three medical empathic AI use cases require different sets of these capabilities. We conclude by discussing why appreciation of the diverse capabilities under the empathy umbrella is important for both AI creators and users.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that the shortcomings of current AI systems stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping from human cognitive biases to AI biases and identify the relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z)
- APTNESS: Incorporating Appraisal Theory and Emotion Support Strategies for Empathetic Response Generation [71.26755736617478]
Empathetic response generation aims to comprehend the emotions of others and respond appropriately.
We develop a framework that combines retrieval augmentation and emotional support strategy integration.
Our framework can enhance the empathy ability of LLMs from both cognitive and affective empathy perspectives.
arXiv Detail & Related papers (2024-07-23T02:23:37Z)
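The APTNESS entry above describes a framework that pairs retrieval augmentation with emotion-support strategies. As a rough, hypothetical illustration of that general pattern only (not the paper's actual method), the sketch below retrieves similar empathetic exemplars with a toy bag-of-words similarity and conditions an LLM prompt on a chosen support strategy; the exemplar corpus, strategy names, and helper functions are all assumptions made for illustration.

```python
# Illustrative sketch only: a toy retrieval-augmented, strategy-conditioned
# prompt builder for empathetic replies. Nothing here comes from APTNESS itself.
from collections import Counter
import math

# Tiny stand-in for a real retrieval index of (situation, empathetic reply) pairs.
EXEMPLARS = [
    ("I failed my exam and feel worthless.",
     "That sounds really discouraging; one exam doesn't define your worth."),
    ("My dog passed away last week.",
     "I'm so sorry for your loss; losing a companion like that is very painful."),
]

# Simplified support strategies, loosely reflecting affective vs. cognitive empathy.
STRATEGIES = {
    "comfort": "Acknowledge and validate the feeling before anything else.",
    "reframe": "Gently offer a different perspective on the situation.",
}

def _bow(text):
    """Bag-of-words counts for a crude similarity measure."""
    return Counter(text.lower().split())

def _cosine(a, b):
    overlap = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0

def retrieve_exemplars(query, k=1):
    """Return the k exemplars most similar to the user's message."""
    scored = sorted(EXEMPLARS, key=lambda ex: _cosine(_bow(query), _bow(ex[0])), reverse=True)
    return scored[:k]

def build_prompt(user_message, strategy="comfort"):
    """Compose a prompt from retrieved exemplars and a chosen support strategy."""
    examples = "\n".join(f"Situation: {s}\nReply: {r}" for s, r in retrieve_exemplars(user_message))
    return (
        f"Strategy: {STRATEGIES[strategy]}\n"
        f"{examples}\n"
        f"Situation: {user_message}\nReply:"
    )

if __name__ == "__main__":
    # In a real system this prompt would be sent to an LLM; here we only print it.
    print(build_prompt("I just lost my cat and can't stop crying.", strategy="comfort"))
```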
- Enablers and Barriers of Empathy in Software Developer and User Interaction: A Mixed Methods Case Study [11.260371501613994]
We studied how empathy is practised between developers and end users.
We identified the nature of awareness required to trigger empathy and enablers of empathy.
We discovered barriers to empathy and a set of potential strategies to overcome these barriers.
arXiv Detail & Related papers (2024-01-17T06:42:21Z)
- Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI [0.0]
We argue that the way data is labeled plays an essential role in the way AI behaves.
We propose an alternative path that allows for the plurality of values and the freedom of individual expression.
arXiv Detail & Related papers (2023-02-23T16:33:40Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support [10.743204843534512]
We develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).
We show that our Human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.
We find a larger 38.88% increase in empathy within the subsample of peer supporters who self-identify as experiencing difficulty providing support.
arXiv Detail & Related papers (2022-03-28T23:37:08Z)
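The Hailey entry above describes just-in-time AI suggestions that a human peer supporter can accept or ignore. Below is a minimal, hypothetical sketch of that human-in-the-loop pattern under stated assumptions: the function names are invented, and a trivial heuristic stands in for a real trained model, so this is not the authors' system.

```python
# Illustrative sketch only: a generic AI-in-the-loop feedback step of the kind
# the Hailey study describes. Names and the suggestion heuristic are hypothetical.

def suggest_more_empathic_reply(seeker_message: str, supporter_draft: str) -> str:
    """Stand-in for a model call that proposes a more empathic rewording of the draft."""
    # A real system would condition on both the seeker's message and the draft;
    # this placeholder only prepends an acknowledgement if the draft lacks one.
    acknowledgements = ("i'm sorry", "that sounds", "i hear")
    if not supporter_draft.lower().startswith(acknowledgements):
        return f"That sounds really hard. {supporter_draft}"
    return supporter_draft

def human_in_the_loop_reply(seeker_message: str, supporter_draft: str, accept_suggestion) -> str:
    """Show the AI suggestion to the peer supporter, who decides whether to use it."""
    suggestion = suggest_more_empathic_reply(seeker_message, supporter_draft)
    return suggestion if accept_suggestion(supporter_draft, suggestion) else supporter_draft

if __name__ == "__main__":
    draft = "Have you tried making a study schedule?"
    final = human_in_the_loop_reply(
        "I'm overwhelmed by my coursework.",
        draft,
        accept_suggestion=lambda old, new: new != old,  # the human's choice, simulated here
    )
    print(final)
```

The key design point the study reports is that the human stays in control: the AI only drafts a suggestion, and the supporter chooses the final message.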
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- How Can AI Recognize Pain and Express Empathy [18.71528144336154]
Sensory and emotional experiences such as pain and empathy are relevant to mental and physical health.
The current drive for automated pain recognition is motivated by a growing number of healthcare requirements.
We review the current developments for computational pain recognition and artificial empathy implementation.
arXiv Detail & Related papers (2021-10-08T16:58:57Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.