Bridging Human Concepts and Computer Vision for Explainable Face Verification
- URL: http://arxiv.org/abs/2403.08789v1
- Date: Tue, 30 Jan 2024 09:13:49 GMT
- Title: Bridging Human Concepts and Computer Vision for Explainable Face Verification
- Authors: Miriam Doh, Caroline Mazini Rodrigues, Nicolas Boutry, Laurent Najman, Matei Mancas, Hugues Bersini
- Abstract summary: We present an approach that combines computer and human vision to increase the interpretability of the explanations produced for a face verification algorithm.
In particular, we draw inspiration from the human perceptual process to understand how machines perceive the human-semantic areas of the face.
- Score: 2.9602845959184454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With Artificial Intelligence (AI) influencing the decision-making process of sensitive applications such as Face Verification, it is fundamental to ensure the transparency, fairness, and accountability of decisions. Although Explainable Artificial Intelligence (XAI) techniques exist to clarify AI decisions, it is equally important to make these decisions interpretable to humans. In this paper, we present an approach that combines computer and human vision to increase the interpretability of the explanations produced for a face verification algorithm. In particular, we draw inspiration from the human perceptual process to understand how machines perceive the human-semantic areas of the face during face comparison tasks. We use Mediapipe, which provides a segmentation technique that identifies distinct human-semantic facial regions, enabling analysis of the machine's perception. Additionally, we adapt two model-agnostic algorithms to provide human-interpretable insights into the decision-making process.
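As a rough, hedged illustration of how such a pipeline can be probed, the sketch below uses MediaPipe Face Mesh to locate human-semantic facial regions and measures how occluding each region changes the similarity score of a generic embedding-based verifier. The landmark index groups, the `embed` function, and the occlusion strategy are illustrative assumptions only; they are not the two model-agnostic algorithms adapted in the paper.

```python
# A minimal sketch, NOT the paper's exact method: probe which human-semantic
# facial regions a face-verification model relies on, by occluding regions
# located with MediaPipe Face Mesh and measuring the change in similarity.
# `embed` (image -> feature vector) and the landmark groups are placeholders.
import cv2
import numpy as np
import mediapipe as mp

# Illustrative subsets of the 468 Face Mesh landmark indices; the region
# definitions used in the paper may differ.
REGIONS = {
    "left_eye":  [33, 133, 159, 145, 157, 154],
    "right_eye": [362, 263, 386, 374, 384, 381],
    "nose":      [1, 2, 98, 327, 168],
    "mouth":     [61, 291, 13, 14, 78, 308],
}

def landmarks(img_bgr):
    """Return an (N, 2) array of landmark pixel coordinates, or None."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as fm:
        res = fm.process(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return None
    h, w = img_bgr.shape[:2]
    pts = res.multi_face_landmarks[0].landmark
    return np.array([(p.x * w, p.y * h) for p in pts])

def occlude(img_bgr, pts, region, pad=15):
    """Gray out the padded bounding box of one semantic region."""
    out = img_bgr.copy()
    xs, ys = pts[REGIONS[region]].T
    x0, y0 = max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0)
    x1, y1 = int(xs.max()) + pad, int(ys.max()) + pad
    out[y0:y1, x0:x1] = 127
    return out

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def region_importance(img_a, img_b, embed):
    """Similarity drop for the pair when each region of img_a is hidden."""
    base = cosine(embed(img_a), embed(img_b))
    pts = landmarks(img_a)
    return {r: base - cosine(embed(occlude(img_a, pts, r)), embed(img_b))
            for r in REGIONS}
```

In this sketch, a large drop in similarity after hiding, say, the eye region would suggest the verifier relies heavily on that region for the given pair, which is the kind of region-level, human-interpretable signal the paper aims to provide.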
Related papers
- From Pixels to Words: Leveraging Explainability in Face Recognition through Interactive Natural Language Processing [2.7568948557193287]
Face Recognition (FR) has advanced significantly with the development of deep learning, achieving high accuracy in several applications.
However, the lack of interpretability of these systems raises concerns about their accountability, fairness, and reliability.
We propose an interactive framework to enhance the explainability of FR models by combining model-agnostic Explainable Artificial Intelligence (XAI) and Natural Language Processing (NLP) techniques.
arXiv Detail & Related papers (2024-09-24T13:40:39Z)
- Explaining Deep Face Algorithms through Visualization: A Survey [57.60696799018538]
This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain.
We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks.
arXiv Detail & Related papers (2023-09-26T07:16:39Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z)
- Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions [4.874780144224057]
This work aims to identify which facial affect features are pronounced when people interact with XAI interfaces.
We also develop a multitask feature embedding for linking facial affect signals with participants' use of explanations.
arXiv Detail & Related papers (2021-06-16T13:14:21Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts to recognizing facial expressions from people wearing masks.
We also perform specific feature-level visualizations to demonstrate how the FaceChannel's inherent capability to learn and combine facial features changes in this constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Projection: A Mechanism for Human-like Reasoning in Artificial Intelligence [6.218613353519724]
Methods of inference exploiting top-down information (from a model) have been shown to be effective for recognising entities in difficult conditions.
Projection is shown to be a key mechanism to solve the problem of applying knowledge to varied or challenging situations.
arXiv Detail & Related papers (2021-03-24T22:33:51Z)
- Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Human Evaluation of Interpretability: The Case of AI-Generated Music Knowledge [19.508678969335882]
We focus on evaluating AI-discovered knowledge/rules in the arts and humanities.
We present an experimental procedure to collect and assess human-generated verbal interpretations of AI-generated music theory/rules rendered as sophisticated symbolic/numeric objects.
arXiv Detail & Related papers (2020-04-15T06:03:34Z)