Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
- URL: http://arxiv.org/abs/2201.12835v1
- Date: Sun, 30 Jan 2022 14:42:21 GMT
- Title: Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
- Authors: Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
- Abstract summary: We present Debiased-CAM to recover explanation faithfulness across various bias types and levels.
In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations.
- Score: 10.819408603463426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model explanations such as saliency maps can improve user trust in AI by
highlighting important features for a prediction. However, these become
distorted and misleading when explaining predictions of images that are subject
to systematic error (bias). Furthermore, the distortions persist despite model
fine-tuning on images biased by different factors (blur, color temperature,
day/night). We present Debiased-CAM to recover explanation faithfulness across
various bias types and levels by training a multi-input, multi-task model with
auxiliary tasks for explanation and bias level predictions. In simulation
studies, the approach not only enhanced prediction accuracy, but also generated
highly faithful explanations about these predictions as if the images were
unbiased. In user studies, debiased explanations improved user task
performance, perceived truthfulness and perceived helpfulness. Debiased
training can provide a versatile platform for robust performance and
explanation faithfulness for a wide range of applications with data biases.
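
A minimal sketch of the training setup the abstract describes, assuming PyTorch; the names (DebiasedCNN, debiased_loss, lambda_cam, lambda_bias) and the tiny backbone are illustrative stand-ins, not the authors' released code:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DebiasedCNN(nn.Module):
        """Shared backbone with three heads: class prediction, CAM, and bias level."""
        def __init__(self, num_classes: int, num_bias_levels: int):
            super().__init__()
            self.backbone = nn.Sequential(  # stand-in for any CNN feature extractor
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            )
            self.gap = nn.AdaptiveAvgPool2d(1)
            self.classifier = nn.Linear(128, num_classes)
            self.bias_head = nn.Linear(128, num_bias_levels)

        def forward(self, x):
            feats = self.backbone(x)                      # (B, 128, H, W)
            pooled = self.gap(feats).flatten(1)           # (B, 128)
            logits = self.classifier(pooled)
            bias_logits = self.bias_head(pooled)
            # CAM: feature maps weighted by classifier weights of the predicted class
            w = self.classifier.weight[logits.argmax(1)]  # (B, 128)
            cam = torch.einsum("bc,bchw->bhw", w, feats)  # (B, H, W)
            return logits, cam, bias_logits

    def debiased_loss(model, x_biased, y, cam_teacher, bias_level,
                      lambda_cam=1.0, lambda_bias=0.1):
        """Primary task loss plus CAM-faithfulness and bias-level auxiliary losses."""
        logits, cam, bias_logits = model(x_biased)
        loss_task = F.cross_entropy(logits, y)
        # cam_teacher: CAM from a frozen model on the *unbiased* image (the target)
        loss_cam = F.mse_loss(cam, cam_teacher)
        # bias_level: integer index of the bias level applied to x_biased
        loss_bias = F.cross_entropy(bias_logits, bias_level)
        return loss_task + lambda_cam * loss_cam + lambda_bias * loss_bias

Training on (biased image, unbiased-image CAM, bias level) triples is what lets the model keep its explanations faithful to the unbiased scene even when the input is blurred, color-shifted, or darkened.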
Related papers
- Unintended Bias in 2D+ Image Segmentation and Its Effect on Attention Asymmetry [0.37240490024629924]
Supervised pretrained models have become widely used in deep learning, especially for image segmentation tasks.
However, when applied to specialized datasets such as biomedical imaging, pretrained weights often introduce unintended biases.
In this study, we investigate the effects of these biases and propose strategies to mitigate them.
arXiv Detail & Related papers (2025-05-20T09:11:53Z)
- Images Speak Louder than Words: Understanding and Mitigating Bias in Vision-Language Model from a Causal Mediation Perspective [13.486497323758226]
Vision-language models pre-trained on extensive datasets can inadvertently learn biases by correlating gender information with objects or scenarios.
We propose a framework that incorporates causal mediation analysis to measure and map the pathways of bias generation and propagation.
arXiv Detail & Related papers (2024-07-03T05:19:45Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Improving Bias Mitigation through Bias Experts in Natural Language Understanding [10.363406065066538]
We propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model.
Our proposed strategy improves the bias identification ability of the auxiliary model.
arXiv Detail & Related papers (2023-12-06T16:15:00Z)
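
A hedged sketch of the bias-experts idea summarized in the entry above: one-vs-rest binary classifiers are trained with a generalized cross-entropy (GCE) style loss so they latch onto dataset shortcuts, and their confident predictions then down-weight those examples in the main model's loss. The function names and the exact reweighting rule are illustrative assumptions, not the paper's code:

    import torch
    import torch.nn.functional as F

    def gce_binary(p, target, q=0.7):
        """GCE for one bias expert: emphasizes easy (likely biased) examples as q -> 1.
        p: (B,) predicted probability of the expert's class; target: (B,) in {0, 1}."""
        p_t = torch.where(target.bool(), p, 1 - p).clamp_min(1e-6)
        return ((1 - p_t.pow(q)) / q).mean()

    def debiased_main_loss(logits, y, expert_prob_true_class):
        """Down-weight examples the bias experts already solve confidently."""
        weights = (1 - expert_prob_true_class).detach()  # shortcut-easy => small weight
        per_example = F.cross_entropy(logits, y, reduction="none")
        return (weights * per_example).mean()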
- Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and remove bias from models with methods motivated by causal theory.
arXiv Detail & Related papers (2023-11-28T16:46:14Z)
- Counterfactual Augmentation for Multimodal Learning Under Presentation Bias [48.372326930638025]
In machine learning systems, feedback loops between users and models can bias future user behavior, inducing a presentation bias in labels.
We propose counterfactual augmentation, a novel causal method for correcting presentation bias using generated counterfactual labels.
arXiv Detail & Related papers (2023-05-23T14:09:47Z)
- Generalizability Analysis of Graph-based Trajectory Predictor with Vectorized Representation [29.623692599892365]
Trajectory prediction is one of the essential tasks for autonomous vehicles.
Recent progress in machine learning has produced a series of advanced trajectory prediction algorithms.
arXiv Detail & Related papers (2022-08-06T20:19:52Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
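
A rough sketch, assuming PyTorch, of the base-model stage of greedy de-bias learning as the entry above describes it: the greedily trained biased models are frozen and ensembled with the base model in logit space, so shortcut signal is credited to the biased models and the base model is pushed to explain the rest. All names are illustrative:

    import torch
    import torch.nn.functional as F

    def train_base_with_biased_ensemble(base_model, biased_models, loader, opt):
        for m in biased_models:
            m.eval()                                         # biased models stay frozen
        for x, y in loader:
            with torch.no_grad():
                biased_logits = sum(m(x) for m in biased_models)
            ensemble_logits = base_model(x) + biased_logits  # logit-space ensemble
            loss = F.cross_entropy(ensemble_logits, y)       # shortcuts absorbed by experts
            opt.zero_grad()
            loss.backward()
            opt.step()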
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how biasing models can be applied to remedy problems in the context of visual recognition.
Based on the (approximate) knowledge of the biasing mechanisms at work, our approach consists in reweighting the observations.
We propose to use a low dimensional image representation, shared across the image databases.
arXiv Detail & Related papers (2021-09-06T10:56:58Z)
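
The reweighting strategy in the entry above is classic importance weighting: with the biasing mechanism approximately known as a density ratio w(x) = p_target(x) / p_biased(x), the risk estimated on biased data can be corrected. A minimal sketch, where density_ratio is a hypothetical user-supplied function:

    import torch.nn.functional as F

    def reweighted_loss(model, x, y, density_ratio):
        """Importance-weighted risk on a biased sample."""
        w = density_ratio(x)                             # (B,) importance weights
        per_example = F.cross_entropy(model(x), y, reduction="none")
        return (w * per_example).sum() / w.sum()         # self-normalized estimate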
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
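
A hedged sketch of the mechanism in the entry above: optimize several perturbations of a latent code so that each flips the classifier's prediction while a diversity penalty keeps them mutually distinct. The decoder/classifier interface and every name here are hypothetical stand-ins, not the paper's method as released:

    import torch
    import torch.nn.functional as F

    def diverse_counterfactuals(z, decoder, classifier, target, k=4,
                                steps=200, lr=0.05, lambda_div=0.1):
        """z: (d,) latent code; target: 0-dim long tensor, the desired class."""
        deltas = torch.zeros(k, z.shape[0], requires_grad=True)
        opt = torch.optim.Adam([deltas], lr=lr)
        for _ in range(steps):
            imgs = decoder(z + deltas)                    # k candidate counterfactuals
            flip = F.cross_entropy(classifier(imgs), target.expand(k))
            diversity = F.pdist(deltas).mean()            # mean pairwise distance
            loss = flip - lambda_div * diversity          # flip label, stay distinct
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (z + deltas).detach()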
- Debiased-CAM for bias-agnostic faithful visual explanations of deep convolutional networks [10.403206672504664]
Class activation maps (CAMs) explain convolutional neural network predictions by identifying salient pixels.
CAM explanations become more deviated and unfaithful with increased image bias.
We present Debiased-CAM to recover explanation faithfulness across various bias types and levels.
arXiv Detail & Related papers (2020-12-10T10:28:47Z)
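
For reference, the standard CAM computation that this entry (and the main paper) builds on: a class activation map is the classifier-weighted sum over the last convolutional feature maps (Zhou et al., 2016). A minimal sketch:

    import torch

    def class_activation_map(feature_maps, fc_weight, class_idx):
        """feature_maps: (C, H, W) from the last conv layer;
        fc_weight: (num_classes, C) weights of the final linear layer."""
        w = fc_weight[class_idx]                          # (C,)
        cam = torch.einsum("c,chw->hw", w, feature_maps)  # weighted channel sum
        cam = torch.relu(cam)                             # keep positive evidence
        return cam / (cam.max() + 1e-8)                   # normalize to [0, 1]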
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)