Toward Affective XAI: Facial Affect Analysis for Understanding
Explainable Human-AI Interactions
- URL: http://arxiv.org/abs/2106.08761v1
- Date: Wed, 16 Jun 2021 13:14:21 GMT
- Title: Toward Affective XAI: Facial Affect Analysis for Understanding
Explainable Human-AI Interactions
- Authors: Luke Guerdan, Alex Raymond, and Hatice Gunes
- Abstract summary: This work aims to identify which facial affect features are pronounced when people interact with XAI interfaces.
We also develop a multitask feature embedding for linking facial affect signals with participants' use of explanations.
- Score: 4.874780144224057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning approaches are increasingly used to augment human
decision-making, eXplainable Artificial Intelligence (XAI) research has
explored methods for communicating system behavior to humans. However, these
approaches often fail to account for the emotional responses of humans as they
interact with explanations. Facial affect analysis, which examines human facial
expressions of emotions, is one promising lens for understanding how users
engage with explanations. Therefore, in this work, we aim to (1) identify which
facial affect features are pronounced when people interact with XAI interfaces,
and (2) develop a multitask feature embedding for linking facial affect signals
with participants' use of explanations. Our analyses and results show that the
occurrence and values of facial AU1 and AU4, and Arousal are heightened when
participants fail to use explanations effectively. This suggests that facial
affect analysis should be incorporated into XAI to personalize explanations to
individuals' interaction styles and to adapt explanations based on the
difficulty of the task performed.
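The abstract's second aim, a multitask feature embedding linking facial affect signals with explanation use, can be illustrated with a minimal sketch: a shared encoder maps per-frame affect features (e.g. AU1 and AU4 intensities plus arousal, the signals the paper highlights) to a latent vector, and separate task heads read out explanation-use and affect predictions. All layer sizes, feature choices, and function names below are assumptions for illustration, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 3   # assumed input: [AU1 intensity, AU4 intensity, arousal]
EMBED_DIM = 8    # assumed latent dimensionality

# Shared encoder weights (randomly initialised; training is omitted)
W_shared = rng.normal(size=(N_FEATURES, EMBED_DIM))
# Task head 1: scalar logit for "explanation used effectively"
w_use = rng.normal(size=EMBED_DIM)
# Task head 2: 2-d valence/arousal regression
W_affect = rng.normal(size=(EMBED_DIM, 2))

def embed(x):
    """Shared multitask embedding with a tanh non-linearity."""
    return np.tanh(x @ W_shared)

def predict(x):
    """Return (P(explanation used effectively), predicted valence/arousal)."""
    z = embed(x)
    p_use = 1.0 / (1.0 + np.exp(-(z @ w_use)))  # sigmoid over the use logit
    affect = z @ W_affect
    return p_use, affect

# One frame where AU1 and AU4 are pronounced and arousal is high --
# the pattern the paper associates with ineffective explanation use
frame = np.array([0.9, 0.8, 0.7])
p_use, affect = predict(frame)
```

Because both heads share the encoder, gradients from the affect task would shape the same latent space used for the explanation-use task, which is the point of a multitask embedding.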
Related papers
- Bridging Human Concepts and Computer Vision for Explainable Face Verification [2.9602845959184454]
We present an approach to combine computer and human vision to increase the explanation's interpretability of a face verification algorithm.
In particular, we are inspired by the human perceptual process to understand how machines perceive the human-semantic areas of faces.
arXiv Detail & Related papers (2024-01-30T09:13:49Z)
- I am Only Happy When There is Light: The Impact of Environmental Changes on Affective Facial Expressions Recognition [65.69256728493015]
We study the impact of different image conditions on the recognition of arousal from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
arXiv Detail & Related papers (2022-10-28T16:28:26Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptati On (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Learning Graph Representation of Person-specific Cognitive Processes from Audio-visual Behaviours for Automatic Personality Recognition [17.428626029689653]
We propose to represent the target subject's person-specific cognition in the form of a person-specific CNN architecture.
Each person-specific CNN is explored by the Neural Architecture Search (NAS) and a novel adaptive loss function.
Experimental results show that the produced graph representations are well associated with target subjects' personality traits.
arXiv Detail & Related papers (2021-10-26T11:04:23Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine if there are time-related differences on expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
- Introducing Representations of Facial Affect in Automated Multimodal Deception Detection [18.16596562087374]
Automated deception detection systems can enhance health, justice, and security in society.
This paper presents a novel analysis of the power of dimensional representations of facial affect for automated deception detection.
We used a video dataset of people communicating truthfully or deceptively in real-world, high-stakes courtroom situations.
arXiv Detail & Related papers (2020-08-31T05:12:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.