Towards Human-Understandable Visual Explanations: Imperceptible High-frequency Cues Can Better Be Removed
- URL: http://arxiv.org/abs/2104.07954v1
- Date: Fri, 16 Apr 2021 08:11:30 GMT
- Title: Towards Human-Understandable Visual Explanations: Imperceptible High-frequency Cues Can Better Be Removed
- Authors: Kaili Wang, Jose Oramas, Tinne Tuytelaars
- Abstract summary: We argue that the capabilities of humans, constrained by the Human Visual System (HVS) and psychophysics, need to be taken into account.
We conduct a case study regarding the classification of real vs. fake face images, where many of the distinguishing features picked up by standard neural networks turn out not to be perceptible to humans.
- Score: 46.36600006968488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) methods focus on explaining what a neural network has
learned - in other words, identifying the features that are the most
influential to the prediction. In this paper, we call them "distinguishing
features". However, whether a human can make sense of the generated explanation
also depends on the perceptibility of these features to humans. To make sure an
explanation is human-understandable, we argue that the capabilities of humans,
constrained by the Human Visual System (HVS) and psychophysics, need to be
taken into account. We propose the "human perceptibility principle for
XAI", stating that, to generate human-understandable explanations, neural
networks should be steered towards focusing on human-understandable cues during
training. We conduct a case study regarding the classification of real vs. fake
face images, where many of the distinguishing features picked up by standard
neural networks turn out not to be perceptible to humans. By applying the
proposed principle, a neural network with human-understandable explanations is
trained which, in a user study, is shown to better align with human intuition.
This is likely to make the AI more trustworthy and opens the door to humans
learning from machines. In the case study, we specifically investigate and
analyze the behaviour of the human-imperceptible high spatial frequency
features in neural networks and XAI methods.
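The abstract argues that imperceptible high-spatial-frequency cues should be removed so that the network is steered towards human-perceptible features. Below is a minimal sketch, not the authors' implementation, of one way to do this: a Fourier-domain low-pass filter applied to images before training. The function name `low_pass_filter` and the `cutoff_cycles` parameter (a cutoff in cycles per image, which in practice would be chosen from viewing conditions and HVS contrast sensitivity) are illustrative assumptions.

```python
# Minimal sketch: remove high spatial frequencies from an image before training.
# Names and the cutoff value are illustrative assumptions, not the paper's code.
import numpy as np

def low_pass_filter(image: np.ndarray, cutoff_cycles: float) -> np.ndarray:
    """Zero out spatial-frequency components above `cutoff_cycles` (cycles per image).

    `image` is a 2-D grayscale array; colour images can be filtered per channel.
    """
    h, w = image.shape
    # Frequency coordinates in cycles per image along each axis.
    fy = np.fft.fftfreq(h) * h
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cycles          # keep only the low-frequency band
    spectrum = np.fft.fft2(image)
    filtered = np.fft.ifft2(spectrum * mask)
    return np.real(filtered)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((224, 224))            # stand-in for a face image
    blurred = low_pass_filter(img, cutoff_cycles=30.0)
```

A Gaussian blur would serve the same purpose; an explicit frequency cutoff simply makes the removed band easier to relate to psychophysical limits on what humans can resolve.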
Related papers
- HINT: Learning Complete Human Neural Representations from Limited Viewpoints [69.76947323932107]
We propose a NeRF-based algorithm able to learn a detailed and complete human model from limited viewing angles.
As a result, our method can reconstruct complete humans even from a few viewing angles, increasing performance by more than 15% PSNR.
arXiv Detail & Related papers (2024-05-30T05:43:09Z)
- The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations [3.7673721058583123]
We propose a shift from post-hoc explainability to designing interpretable neural network architectures.
We identify five needs of human-centric XAI and propose two schemes for interpretable-by-design neural networks.
arXiv Detail & Related papers (2023-07-01T15:24:47Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Matching Representations of Explainable Artificial Intelligence and Eye Gaze for Human-Machine Interaction [0.7742297876120561]
Rapid non-verbal communication of task-based stimuli is a challenge in human-machine teaming.
In this work, we examine the correlations between visual heatmap explanations of a neural network trained to predict driving behavior and eye gaze heatmaps of human drivers.
arXiv Detail & Related papers (2021-01-30T07:42:56Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The motivation for clarifying these particular principles is that they could help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
- Self-explaining AI as an alternative to interpretable AI [0.0]
Double descent indicates that deep neural networks operate by smoothly interpolating between data points.
Neural networks trained on complex real world data are inherently hard to interpret and prone to failure if asked to extrapolate.
Self-explaining AIs are capable of providing a human-understandable explanation along with confidence levels for both the decision and explanation.
arXiv Detail & Related papers (2020-02-12T18:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.