Assessment of the Reliability of a Model's Decision by Generalizing
Attribution to the Wavelet Domain
- URL: http://arxiv.org/abs/2305.14979v5
- Date: Thu, 9 Nov 2023 13:07:22 GMT
- Title: Assessment of the Reliability of a Model's Decision by Generalizing
Attribution to the Wavelet Domain
- Authors: Gabriel Kasmi and Laurent Dubus and Yves-Marie Saint Drenan and
Philippe Blanc
- Abstract summary: We introduce the Wavelet sCale Attribution Method (WCAM), a generalization of attribution from the pixel domain to the space-scale domain using wavelet transforms.
Our code is publicly available.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have shown remarkable performance in computer vision, but
their deployment in numerous scientific and technical fields is challenging due
to their black-box nature. Scientists and practitioners need to evaluate the
reliability of a decision, i.e., to know simultaneously if a model relies on
the relevant features and whether these features are robust to image
corruptions. Existing attribution methods aim to provide human-understandable
explanations by highlighting important regions in the image domain, but fail to
fully characterize a decision process's reliability. To bridge this gap, we
introduce the Wavelet sCale Attribution Method (WCAM), a generalization of
attribution from the pixel domain to the space-scale domain using wavelet
transforms. Attribution in the wavelet domain reveals where and on what scales
the model focuses, thus enabling us to assess whether a decision is reliable.
Our code is accessible here:
\url{https://github.com/gabrielkasmi/spectral-attribution}.
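The abstract's core idea, generalizing gradient-based attribution from the pixel domain to the space-scale (wavelet) domain, can be illustrated with a minimal sketch. The code below is not the authors' WCAM implementation (see their repository for that); it is a hedged toy example assuming an orthonormal single-level 2D Haar transform and a generic pixel-space gradient map. Because the transform is linear and orthonormal, the gradient of a model score with respect to the wavelet coefficients is simply the transform of the pixel-space gradient, which lets a gradient-times-input attribution be read off per subband, showing both where (position within a subband) and at what scale (which subband) the model focuses.

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar wavelet transform (orthonormal).

    Returns the four subbands (LL, LH, HL, HH), each half the
    input resolution. Assumes x is square with even side length.
    """
    # Transform along rows: average (a) and difference (d) of column pairs.
    a = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    d = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Transform along columns of each intermediate band.
    ll = (a[0::2] + a[1::2]) / np.sqrt(2)
    lh = (a[0::2] - a[1::2]) / np.sqrt(2)
    hl = (d[0::2] + d[1::2]) / np.sqrt(2)
    hh = (d[0::2] - d[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (exact, since the transform is orthonormal)."""
    n = ll.shape[0] * 2
    a = np.empty((n, ll.shape[1]))
    d = np.empty((n, ll.shape[1]))
    a[0::2] = (ll + lh) / np.sqrt(2)
    a[1::2] = (ll - lh) / np.sqrt(2)
    d[0::2] = (hl + hh) / np.sqrt(2)
    d[1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((n, n))
    x[:, 0::2] = (a + d) / np.sqrt(2)
    x[:, 1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_attribution(image, pixel_grad):
    """Map a pixel-space saliency map into the wavelet domain.

    For an orthonormal linear transform W, the chain rule gives
    d(score)/d(coeff) = W(d(score)/d(pixel)): the gradient map
    transforms exactly as the image does. Attribution per subband
    is then |gradient * coefficient| (gradient-times-input).
    """
    coeffs = haar2d(image)
    grads = haar2d(pixel_grad)
    names = ["LL", "LH", "HL", "HH"]
    return {name: np.abs(g * c)
            for name, (g, c) in zip(names, zip(grads, coeffs))}
```

A large mass in the LL subband indicates reliance on coarse-scale structure, while mass concentrated in LH/HL/HH flags dependence on fine details, which are typically the first to degrade under image corruptions; that is the reliability signal the abstract describes.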
Related papers
- One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability [6.151633954305939]
We propose leveraging the wavelet domain as a robust mathematical foundation for attribution.
Our approach extends the existing gradient-based feature attributions into the wavelet domain.
We show how our method explains not only the where -- the important parts of the input -- but also the what.
arXiv Detail & Related papers (2024-10-02T12:34:04Z) - Understanding the Dependence of Perception Model Competency on Regions in an Image [0.10923877073891446]
We show five methods for identifying regions in the input image contributing to low model competency.
We find that the competency gradients and reconstruction loss methods show great promise in identifying regions associated with low model competency.
arXiv Detail & Related papers (2024-07-15T08:50:13Z) - Domain Generalization for In-Orbit 6D Pose Estimation [14.624172952608653]
We introduce a novel, end-to-end, neural-based architecture for spacecraft pose estimation networks.
We demonstrate that our method effectively closes the domain gap, achieving state-of-the-art accuracy on the widespread SPEED+ dataset.
arXiv Detail & Related papers (2024-06-17T17:01:20Z) - Domain-Controlled Prompt Learning [49.45309818782329]
Existing prompt learning methods often lack domain-awareness or domain-transfer mechanisms.
We propose Domain-Controlled Prompt Learning for specific domains.
Our method achieves state-of-the-art performance in specific domain image recognition datasets.
arXiv Detail & Related papers (2023-09-30T02:59:49Z) - DARE: Towards Robust Text Explanations in Biomedical and Healthcare
Applications [54.93807822347193]
We show how to adapt attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility.
Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE.
Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
arXiv Detail & Related papers (2023-07-05T08:11:40Z) - Analyzing the Domain Shift Immunity of Deep Homography Estimation [1.4607247979144045]
CNN-driven homography estimation models show a distinctive immunity to domain shifts.
This study explores the resilience of a variety of deep homography estimation models to domain shifts.
arXiv Detail & Related papers (2023-04-19T21:28:31Z) - Beyond ImageNet Attack: Towards Crafting Adversarial Examples for
Black-box Domains [80.11169390071869]
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature.
We propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains.
Our methods outperform state-of-the-art approaches by up to 7.71% (towards coarse-grained domains) and 25.91% (towards fine-grained domains) on average.
arXiv Detail & Related papers (2022-01-27T14:04:27Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Explainable Deep Classification Models for Domain Generalization [94.43131722655617]
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
arXiv Detail & Related papers (2020-03-13T22:22:15Z) - CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.