Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector
- URL: http://arxiv.org/abs/2407.19308v2
- Date: Tue, 6 Aug 2024 17:22:17 GMT
- Title: Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector
- Authors: Xianren Zhang, Dongwon Lee, Suhang Wang
- Abstract summary: The inherently explainable attribution method aims to enhance the understanding of model behavior.
It is achieved by cooperatively training a selector (generating an attribution map to identify important features) and a predictor.
We introduce a new objective that discourages the presence of discriminative features in the masked-out regions.
Our model achieves higher prediction accuracy than the regular black-box model.
- Score: 30.23453108681447
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As deep vision models' popularity rapidly increases, there is a growing emphasis on explanations for model predictions. The inherently explainable attribution method aims to enhance the understanding of model behavior by identifying the important regions in images that significantly contribute to predictions. It is achieved by cooperatively training a selector (generating an attribution map to identify important features) and a predictor (making predictions using the identified features). Despite many advancements, existing methods suffer from the incompleteness problem, where discriminative features are masked out, and the interlocking problem, where the non-optimized selector initially selects noise, causing the predictor to fit on this noise and perpetuate the cycle. To address these problems, we introduce a new objective that discourages the presence of discriminative features in the masked-out regions, thus enhancing the comprehensiveness of feature selection. A pre-trained detector is introduced to detect discriminative features in the masked-out region. If the selector selects noise instead of discriminative features, the detector can observe this and break the interlocking situation by penalizing the selector. Extensive experiments show that our model achieves higher accuracy than the regular black-box model, and produces attribution maps with high feature coverage, localization ability, fidelity and robustness. Our code will be available at https://github.com/Zood123/COMET.
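To make the objective concrete, here is a minimal sketch of one cooperative training step, assuming generic selector/predictor backbones and a frozen pre-trained detector; the function name, the loss weight `lam`, and the soft-mask formulation are illustrative assumptions, not the authors' exact COMET implementation:

```python
import torch
import torch.nn.functional as F

def training_step(selector, predictor, detector, images, labels, lam=1.0):
    """One cooperative step; detector parameters are assumed frozen
    (requires_grad=False), so its penalty only shapes the selector."""
    mask = selector(images)                # (B, 1, H, W) attribution map in [0, 1]
    selected = images * mask               # regions kept for prediction
    masked_out = images * (1.0 - mask)     # the complement region

    # The predictor must classify using only the selected features.
    pred_loss = F.cross_entropy(predictor(selected), labels)

    # Completeness objective: if the frozen detector still recognizes the true
    # class in the masked-out region, the selector missed discriminative
    # features and is penalized (gradients flow back through the mask).
    det_prob = F.softmax(detector(masked_out), dim=1)
    completeness_loss = det_prob.gather(1, labels[:, None]).mean()

    return pred_loss + lam * completeness_loss
```

The key design point is that the detector is frozen: its confidence on the masked-out region acts purely as a training signal for the selector, which also gives it a way to break the interlocking cycle when the selector picks noise.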
Related papers
- Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), that fully utilizes the potential of a pre-trained Stable Diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
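The summary above gives no architectural detail, so the following is only a rough latent-diffusion sampler illustrating the general idea of generating an edge map conditioned on the image; `vae`, `unet`, and `scheduler` are assumed generic diffusion components, not GED's actual interfaces:

```python
import torch

@torch.no_grad()
def sample_edge_map(vae, unet, scheduler, image):
    """Denoise an edge-map latent conditioned on the image latent."""
    cond = vae.encode(image)               # condition: latent of the input image
    z = torch.randn_like(cond)             # start the edge latent from pure noise
    for t in scheduler.timesteps:
        noise_pred = unet(torch.cat([z, cond], dim=1), t)  # conditional denoiser
        z = scheduler.step(noise_pred, t, z)               # one reverse step
    return vae.decode(z)                   # decode the latent into an edge map
```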
arXiv Detail & Related papers (2024-10-04T01:52:23Z)
- Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on the forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
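As a rough illustration of scoring candidate features by transfer entropy, here is a histogram-based estimator with a simplified one-pass selection; the binning, the estimator, and the `forward_select` helper are assumptions, and the paper's full procedure also includes a backward elimination stage:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8):
    """Estimate TE(X -> Y) = I(Y_t ; X_{t-1} | Y_{t-1}) on 1-D series."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    trip = list(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_t, y_{t-1}, x_{t-1})
    n = len(trip)
    p_yyx = Counter(trip)                        # joint of all three
    p_yy = Counter((a, b) for a, b, _ in trip)   # (y_t, y_{t-1})
    p_yx = Counter((b, c) for _, b, c in trip)   # (y_{t-1}, x_{t-1})
    p_y = Counter(b for _, b, _ in trip)         # y_{t-1} marginal
    te = 0.0
    for (a, b, c), cnt in p_yyx.items():
        p_abc = cnt / n
        te += p_abc * np.log((p_abc * (p_y[b] / n)) /
                             ((p_yy[(a, b)] / n) * (p_yx[(b, c)] / n)))
    return te

def forward_select(X, y, k=5):
    """Simplified one-pass forward step: top-k features by TE toward the target."""
    scores = [transfer_entropy(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]
```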
arXiv Detail & Related papers (2023-10-17T08:04:45Z)
- Rationalizing Predictions by Adversarial Information Calibration [65.19407304154177]
We train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction.
We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
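A hedged sketch of the adversarial calibration idea: a discriminator tries to tell the black-box model's features from the rationale model's features, and the rationale side is trained to close that gap. Module and method names (`features`, `discriminator`) are placeholders, not the paper's architecture:

```python
import torch
import torch.nn.functional as F

def calibration_losses(blackbox, rationale_model, discriminator, inputs):
    f_full = blackbox.features(inputs)           # features from the full input
    f_rat = rationale_model.features(inputs)     # features from the rationale only

    # Discriminator: distinguish the two feature distributions.
    real = discriminator(f_full.detach())
    fake = discriminator(f_rat.detach())
    d_loss = F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) + \
             F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake))

    # Rationale side: fool the discriminator; the remaining gap between the two
    # feature sets indicates missed or over-selected features.
    g_loss = F.binary_cross_entropy_with_logits(discriminator(f_rat),
                                                torch.ones_like(fake))
    return d_loss, g_loss
```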
arXiv Detail & Related papers (2023-01-15T03:13:09Z)
- Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations [67.40641255908443]
We identify limitations of model-randomization-based sanity checks for the purpose of evaluating explanations.
Top-down model randomization preserves scales of forward pass activations with high probability.
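One way to probe that claim is cascading (top-down) randomization with a hook that records activation scales; this sketch assumes a torchvision-style model with named blocks and is not the authors' evaluation code:

```python
import copy
import torch

def cascading_randomization(model, layers_top_down):
    """Yield copies of the model with progressively more top layers re-drawn."""
    randomized = copy.deepcopy(model)
    for name in layers_top_down:               # e.g. ["fc", "layer4", "layer3"]
        block = dict(randomized.named_modules())[name]
        for m in block.modules():
            if hasattr(m, "reset_parameters"):
                m.reset_parameters()           # randomize this block's weights
        yield name, copy.deepcopy(randomized)

def activation_norm(model, x, layer_name):
    """Probe the forward-pass activation scale at a given layer."""
    norms = []
    layer = dict(model.named_modules())[layer_name]
    hook = layer.register_forward_hook(lambda m, i, o: norms.append(o.norm().item()))
    with torch.no_grad():
        model(x)
    hook.remove()
    return norms[0]
```

Comparing `activation_norm` across the yielded models shows whether randomizing top layers actually changes activation scales at lower layers, which is the property the paper questions.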
arXiv Detail & Related papers (2022-11-22T18:52:38Z)
- Understanding Interlocking Dynamics of Cooperative Rationalization [90.6863969334526]
Selective rationalization explains the prediction of complex neural networks by finding a small subset of the input that is sufficient to predict the neural model output.
We reveal a major problem with such cooperative rationalization paradigm -- model interlocking.
We propose a new rationalization framework, called A2R, which introduces a third component into the architecture, a predictor driven by soft attention as opposed to selection.
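A minimal sketch of the A2R-style setup, assuming token inputs of shape (B, T, D) and a selector that outputs per-token scores; the straight-through trick and the 0.5 threshold are illustrative choices, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def a2r_losses(selector, hard_predictor, soft_predictor, tokens, labels):
    attn = selector(tokens)                    # (B, T) soft scores in [0, 1]
    hard = (attn > 0.5).float()                # discrete rationale selection
    # Straight-through: hard mask on the forward pass, soft gradients backward.
    mask = hard + attn - attn.detach()

    loss_hard = F.cross_entropy(hard_predictor(tokens * mask[..., None]), labels)
    loss_soft = F.cross_entropy(soft_predictor(tokens * attn[..., None]), labels)
    # Training both predictors prevents the interlocking degenerate case where
    # the selector picks noise and the selection-based predictor overfits to it.
    return loss_hard + loss_soft
```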
arXiv Detail & Related papers (2021-10-26T17:39:18Z)
- Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks [7.5320132424481505]
We propose a novel approach to identify the important features by employing counter-adversarial attacks.
We show that there exists a subset of features such that classification based on them bridges the gap between clean and robust accuracy.
We then select features by observing the consistency of the activation values at the penultimate layer.
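That consistency criterion might be sketched as follows, with `penultimate` standing for the network up to its penultimate layer and `x_adv` produced by an attack of choice; the keep ratio is an assumed hyperparameter:

```python
import torch

def select_consistent_features(penultimate, x_clean, x_adv, keep=0.5):
    """penultimate(x) -> (B, D) activations; return indices of stable features."""
    with torch.no_grad():
        a_clean = penultimate(x_clean)
        a_adv = penultimate(x_adv)
    # Per-feature inconsistency: mean absolute activation shift under attack.
    shift = (a_clean - a_adv).abs().mean(dim=0)   # (D,)
    k = int(keep * shift.numel())
    return torch.topk(-shift, k).indices          # the k most consistent features
```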
arXiv Detail & Related papers (2021-06-18T17:39:05Z)
- Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art attention in accuracy, uncertainty estimation, generalization across domains, and adversarial attacks.
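In the spirit of that construction, here is a sketch of stochastic attention with reparameterizable Weibull samples standing in for unnormalized attention weights; the shape parameter `k` and the mean-matching choice are illustrative assumptions:

```python
import torch

def weibull_attention(scores, k=5.0):
    """scores: (B, H, T, T) unnormalized logits -> stochastic attention weights.

    Sample w ~ Weibull(k, lam) via the inverse CDF, w = lam * (-log(1-u))^(1/k),
    with lam chosen so that E[w] = lam * Gamma(1 + 1/k) matches exp(scores).
    """
    lam = scores.exp() / torch.exp(torch.lgamma(torch.tensor(1.0 + 1.0 / k)))
    u = torch.rand_like(scores).clamp(1e-6, 1 - 1e-6)
    w = lam * (-torch.log1p(-u)) ** (1.0 / k)     # reparameterized sample
    return w / w.sum(dim=-1, keepdim=True)        # normalize across keys
```

Because the sample is a differentiable function of `scores`, uncertainty over attention can be trained end to end, which is what enables the reported gains in uncertainty estimation.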
arXiv Detail & Related papers (2021-06-09T17:46:22Z)
- Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration [39.685626118667074]
We train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction.
We use an adversarial-based technique to calibrate the information extracted by the two models.
For natural language tasks, we propose to use a language-model-based regularizer to encourage the extraction of fluent rationales.
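One plausible form of such a regularizer scores a candidate rationale's fluency by its negative log-likelihood under a pre-trained LM (GPT-2 here as an example); this text-level version is non-differentiable with respect to the selector, so in practice it would be combined with a score-function estimator or applied over token probabilities:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def fluency_penalty(rationale_text):
    """Higher values mean a less fluent (higher-perplexity) rationale."""
    ids = tokenizer(rationale_text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)   # the LM computes the shifted token NLL
    return out.loss                 # mean token negative log-likelihood
```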
arXiv Detail & Related papers (2020-12-16T11:54:15Z)
- Dynamic Bayesian Approach for decision-making in Ego-Things [8.577234269009042]
This paper presents a novel approach to detect abnormalities in dynamic systems based on multisensory data and feature selection.
Growing neural gas (GNG) is employed for clustering multisensory data into a set of nodes.
Our method uses a Markov Jump particle filter (MJPF) for state estimation and abnormality detection.
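For context, a single GNG update step looks roughly like this textbook (Fritzke-style) version, simplified to omit node insertion and isolated-node removal; the parameters are illustrative, not the paper's configuration:

```python
import numpy as np

def gng_step(nodes, errors, edges, ages, x, eps_w=0.05, eps_n=0.005, max_age=50):
    """nodes: (N, D) prototypes; edges: set of index pairs; ages: dict per edge."""
    d = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                 # winner and runner-up
    errors[s1] += d[s1] ** 2                   # accumulate the winner's local error
    nodes[s1] += eps_w * (x - nodes[s1])       # move the winner toward the sample
    for (a, b) in list(edges):
        if s1 in (a, b):
            other = b if a == s1 else a
            nodes[other] += eps_n * (x - nodes[other])  # drag topological neighbors
            ages[(a, b)] += 1
            if ages[(a, b)] > max_age:         # prune stale edges
                edges.discard((a, b))
                ages.pop((a, b))
    key = (min(s1, s2), max(s1, s2))
    edges.add(key)                             # (re)connect the two winners
    ages[key] = 0
    return nodes, errors, edges, ages
```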
arXiv Detail & Related papers (2020-10-28T11:38:51Z)
- Differentiable Unsupervised Feature Selection based on a Gated Laplacian [7.970954821067042]
We propose a differentiable loss function that combines the Laplacian score, which favors low-frequency features, with a gating mechanism for feature selection.
We mathematically motivate the proposed approach and demonstrate that in the high noise regime, it is crucial to compute the Laplacian on the gated inputs, rather than on the full feature set.
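That observation can be made concrete in a few lines: the gates are applied before the affinity graph is built, so the Laplacian score is computed on the gated inputs rather than the full feature set. The RBF bandwidth and sparsity weight here are illustrative choices, not the paper's exact loss:

```python
import torch

def gated_laplacian_loss(X, gates, sigma=1.0, lam=0.1):
    """X: (n, d) data; gates: (d,) differentiable values in [0, 1]."""
    Xg = X * gates                              # gate features BEFORE the graph
    d2 = torch.cdist(Xg, Xg) ** 2               # pairwise squared distances
    W = torch.exp(-d2 / (2 * sigma ** 2))       # RBF affinity on gated inputs
    L = torch.diag(W.sum(dim=1)) - W            # unnormalized graph Laplacian
    # Smoothness of each gated feature over the graph: sum_i f_i^T L f_i;
    # minimizing it favors low-frequency (structure-preserving) features.
    smooth = torch.einsum('ni,nm,mi->', Xg, L, Xg) / X.shape[0]
    return smooth + lam * gates.sum()           # sparsity-regularized score
```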
arXiv Detail & Related papers (2020-07-09T11:58:16Z)