Debiased-CAM for bias-agnostic faithful visual explanations of deep
convolutional networks
- URL: http://arxiv.org/abs/2012.05567v1
- Date: Thu, 10 Dec 2020 10:28:47 GMT
- Authors: Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
- Abstract summary: Class activation maps (CAMs) explain convolutional neural network predictions by identifying salient pixels.
CAM explanations become more deviated and unfaithful with increased image bias.
We present Debiased-CAM to recover explanation faithfulness across various bias types and levels.
- Score: 10.403206672504664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class activation maps (CAMs) explain convolutional neural network predictions
by identifying salient pixels, but they become misaligned and misleading when
explaining predictions on images under bias, such as images blurred
accidentally or deliberately for privacy protection, or images with improper
white balance. Despite model fine-tuning to improve prediction performance on
these biased images, we demonstrate that CAM explanations become more deviated
and unfaithful with increased image bias. We present Debiased-CAM to recover
explanation faithfulness across various bias types and levels by training a
multi-input, multi-task model with auxiliary tasks for CAM and bias level
predictions. With CAM as a prediction task, explanations are made tunable by
retraining the main model layers and made faithful by self-supervised learning
from CAMs of unbiased images. The model provides representative, bias-agnostic
CAM explanations about the predictions on biased images as if generated from
their unbiased form. In four simulation studies with different biases and
prediction tasks, Debiased-CAM improved both CAM faithfulness and task
performance. We further conducted two controlled user studies to validate its
truthfulness and helpfulness, respectively. Quantitative and qualitative
analyses of participant responses confirmed Debiased-CAM as more truthful and
helpful. Debiased-CAM thus provides a basis to generate more faithful and
relevant explanations for a wide range of real-world applications with various
sources of bias.
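The mechanism the abstract describes — regressing the CAM predicted on a biased image toward the CAM of its unbiased counterpart, alongside an auxiliary bias-level prediction task — can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation; `grad_cam`, `debiased_cam_loss`, and the loss weights `lam_cam`/`lam_bias` are hypothetical names and values.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Standard Grad-CAM: channel weights are the global-average-pooled
    gradients; the map is a ReLU of the weighted sum of feature maps."""
    weights = gradients.mean(axis=(1, 2))              # (C,)
    cam = np.einsum("c,chw->hw", weights, activations)  # (H, W)
    return np.maximum(cam, 0.0)

def debiased_cam_loss(cam_pred_biased, cam_target_unbiased,
                      bias_pred, bias_true, lam_cam=1.0, lam_bias=0.1):
    """Multi-task auxiliary loss (hypothetical weighting): the CAM predicted
    on the biased image is regressed toward the CAM computed from the
    unbiased image (the self-supervised target), plus a bias-level
    regression term."""
    cam_loss = np.mean((cam_pred_biased - cam_target_unbiased) ** 2)
    bias_loss = (bias_pred - bias_true) ** 2
    return lam_cam * cam_loss + lam_bias * bias_loss
```

In training, `cam_target_unbiased` would come from a frozen forward pass on the unbiased image, so the biased-input branch learns to reproduce it.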
Related papers
- Images Speak Louder than Words: Understanding and Mitigating Bias in Vision-Language Model from a Causal Mediation Perspective [13.486497323758226]
Vision-language models pre-trained on extensive datasets can inadvertently learn biases by correlating gender information with objects or scenarios.
We propose a framework that incorporates causal mediation analysis to measure and map the pathways of bias generation and propagation.
arXiv Detail & Related papers (2024-07-03T05:19:45Z)
- Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z)
- FM-G-CAM: A Holistic Approach for Explainable AI in Computer Vision [0.6215404942415159]
We emphasise the need to understand predictions of Computer Vision models, specifically Convolutional Neural Network (CNN) based models.
Existing methods of explaining CNN predictions are mostly based on Gradient-weighted Class Activation Maps (Grad-CAM) and solely focus on a single target class.
We present an exhaustive methodology called Fused Multi-class Gradient-weighted Class Activation Map (FM-G-CAM) that considers multiple top predicted classes.
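One plausible reading of the multi-class idea — fusing per-class Grad-CAMs of the top predicted classes into a single map — can be sketched in numpy. The per-class normalization and pixel-wise fusion below are assumptions, not FM-G-CAM's exact recipe, and `fuse_topk_cams` is a hypothetical name.

```python
import numpy as np

def fuse_topk_cams(cams):
    """Fuse per-class saliency maps of shape (K, H, W) into one saliency map
    plus a per-pixel winning-class index. Each class map is ReLU'd and
    scaled to [0, 1] before fusion (an assumed normalization)."""
    cams = np.maximum(cams, 0.0)
    denom = cams.max(axis=(1, 2), keepdims=True)
    cams = cams / np.where(denom == 0, 1.0, denom)  # per-class [0, 1]
    winner = cams.argmax(axis=0)                    # (H, W) class index
    fused = cams.max(axis=0)                        # (H, W) saliency
    return fused, winner
```

The winning-class index is what would let a visualization assign a distinct color to each top class, rather than a single-class heatmap.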
arXiv Detail & Related papers (2023-12-10T19:33:40Z)
- Mitigating Bias for Question Answering Models by Tracking Bias Influence [84.66462028537475]
We propose BMBI, an approach to mitigate the bias of multiple-choice QA models.
Based on the intuition that a model tends to become more biased if it learns from a biased example, we measure the bias level of a query instance.
We show that our method could be applied to multiple QA formulations across multiple bias categories.
arXiv Detail & Related papers (2023-10-13T00:49:09Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning [10.819408603463426]
We present Debiased-CAM to recover explanation faithfulness across various bias types and levels.
In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations.
arXiv Detail & Related papers (2022-01-30T14:42:21Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- BiaSwap: Removing dataset bias with bias-tailored swapping augmentation [20.149645246997668]
Deep neural networks often make decisions based on the spurious correlations inherent in the dataset, failing to generalize in an unbiased data distribution.
This paper proposes a novel bias-tailored augmentation-based approach, BiaSwap, for learning debiased representation without requiring supervision on the bias type.
arXiv Detail & Related papers (2021-08-23T08:35:26Z)
- Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks [89.56292219019163]
Explanation methods facilitate the development of models that learn meaningful concepts and avoid exploiting spurious correlations.
We illustrate a previously unrecognized limitation of the popular neural network explanation method Grad-CAM.
We propose HiResCAM, a class-specific explanation method that is guaranteed to highlight only the locations the model used to make each prediction.
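The difference between the two attribution rules can be made concrete in a few lines of numpy: Grad-CAM pools the gradients per channel before weighting the feature maps, while HiResCAM keeps the elementwise gradient-activation product. The toy tensors in the usage below are illustrative only.

```python
import numpy as np

def grad_cam(acts, grads):
    # Grad-CAM: pool gradients per channel, then weight the feature maps.
    # Pooling can cancel or spread attribution across locations.
    weights = grads.mean(axis=(1, 2))
    return np.maximum(np.einsum("c,chw->hw", weights, acts), 0.0)

def hires_cam(acts, grads):
    # HiResCAM: elementwise gradient-activation product summed over
    # channels, preserving per-location attribution.
    return np.maximum((grads * acts).sum(axis=0), 0.0)
```

With uniform activations and gradients of `+1` and `-1` within one channel, the pooled Grad-CAM weights cancel to zero while HiResCAM still highlights the positive-gradient locations.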
arXiv Detail & Related papers (2020-11-17T19:26:14Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
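The failure-based scheme trains a deliberately biased network alongside the debiased one; one common way to turn "harder for the biased network" into a sample weight is a relative-difficulty score. The formula and the name `sample_weights` below are assumptions for illustration, not necessarily the paper's exact weighting.

```python
import numpy as np

def sample_weights(loss_biased, loss_debiased, eps=1e-8):
    """Relative-difficulty score per sample: close to 1 when the biased
    network struggles on a sample (it likely conflicts with the spurious
    correlation), close to 0 when the biased network finds it easy."""
    return loss_biased / (loss_biased + loss_debiased + eps)
```

Upweighting such bias-conflicting samples is what steers the second network away from the shortcut the first one exploits.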
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.