Towards Explainable Land Cover Mapping: a Counterfactual-based Strategy
- URL: http://arxiv.org/abs/2301.01520v2
- Date: Sun, 20 Aug 2023 20:37:31 GMT
- Title: Towards Explainable Land Cover Mapping: a Counterfactual-based Strategy
- Authors: Cassio F. Dantas, Diego Marcos, Dino Ienco
- Abstract summary: We propose a generative adversarial counterfactual approach for satellite image time series in a multi-class setting for the land cover classification task.
One of the distinctive features of the proposed approach is the lack of prior assumption on the targeted class for a given counterfactual explanation.
- Score: 9.180712157534606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual explanations are an emerging tool to enhance interpretability
of deep learning models. Given a sample, these methods seek to find and display
to the user similar samples across the decision boundary. In this paper, we
propose a generative adversarial counterfactual approach for satellite image
time series in a multi-class setting for the land cover classification task.
One of the distinctive features of the proposed approach is the lack of prior
assumption on the targeted class for a given counterfactual explanation. This
inherent flexibility allows for the discovery of interesting information on the
relationship between land cover classes. The other feature consists of
encouraging the counterfactual to differ from the original sample only in a
small and compact temporal segment. These time-contiguous perturbations allow
for a much sparser and, thus, interpretable solution. Furthermore,
plausibility/realism of the generated counterfactual explanations is enforced
via the proposed adversarial learning strategy.
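The abstract's key structural constraint is that the counterfactual may differ from the original sample only inside one small, time-contiguous segment. As a minimal numpy sketch of that masking idea (function name, shapes, and the additive form of the perturbation are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def apply_contiguous_perturbation(x, delta, start, width):
    """Restrict a proposed perturbation to one contiguous temporal
    segment [start, start + width), leaving all other timestamps intact.

    x     : (T, C) time series (T timestamps, C spectral channels)
    delta : (T, C) raw perturbation, e.g. proposed by a generator
    """
    T = x.shape[0]
    mask = np.zeros(T)
    mask[start:start + width] = 1.0      # 1 inside the segment, 0 outside
    return x + mask[:, None] * delta     # counterfactual candidate

# toy example: 12 timestamps, 4 channels
rng = np.random.default_rng(0)
x = rng.normal(size=(12, 4))
delta = rng.normal(size=(12, 4))
x_cf = apply_contiguous_perturbation(x, delta, start=3, width=4)

# outside the segment the counterfactual equals the original sample
assert np.allclose(x_cf[:3], x[:3]) and np.allclose(x_cf[7:], x[7:])
```

In the full method, the perturbed series would additionally be pushed across the classifier's decision boundary and toward the data distribution by the adversarial objective; this sketch shows only the sparsity mechanism.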
Related papers
- From Visual Explanations to Counterfactual Explanations with Latent Diffusion [11.433402357922414]
We propose a new approach to tackle two key challenges in recent prominent works.
First, we determine which specific counterfactual features are crucial for distinguishing the "concept" of the target class from the original class.
Second, we provide valuable explanations for the non-robust classifier without relying on the support of an adversarially robust model.
arXiv Detail & Related papers (2025-04-12T13:04:00Z)
- Rethinking Distance Metrics for Counterfactual Explainability [53.436414009687]
We investigate a framing for counterfactual generation methods that considers counterfactuals not as independent draws from a region around the reference, but as jointly sampled with the reference from the underlying data distribution.
We derive a distance metric, tailored for counterfactual similarity, that can be applied to a broad range of settings.
arXiv Detail & Related papers (2024-10-18T15:06:50Z)
- Learning Discriminative Spatio-temporal Representations for Semi-supervised Action Recognition [23.44320273156057]
We propose an Adaptive Contrastive Learning (ACL) strategy and a Multi-scale Temporal Learning (MTL) strategy.
The ACL strategy assesses the confidence of all unlabeled samples using the class prototypes of the labeled data, and adaptively selects positive and negative samples from a pseudo-labeled sample bank for contrastive learning.
The MTL strategy highlights informative semantics from long-term clips and integrates them into the short-term clip while suppressing noisy information.
arXiv Detail & Related papers (2024-04-25T08:49:08Z)
- Adversarial Counterfactual Visual Explanations [0.7366405857677227]
This paper proposes an elegant method to turn adversarial attacks into semantically meaningful perturbations.
The proposed approach hypothesizes that Denoising Diffusion Probabilistic Models are excellent regularizers for avoiding high-frequency and out-of-distribution perturbations.
arXiv Detail & Related papers (2023-03-17T13:34:38Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Counterfactual Explanations via Latent Space Projection and Interpolation [0.0]
We introduce SharpShooter, a method for binary classification that starts by creating a projected version of the input that is classified as the target class.
We then demonstrate that our framework translates core characteristics of a sample to its counterfactual through the use of learned representations.
arXiv Detail & Related papers (2021-12-02T00:07:49Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all the samples belonging to the same instance class lie on the same subspace with small dimension.
Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z)
- Where and What? Examining Interpretable Disentangled Representations [96.32813624341833]
Capturing interpretable variations has long been one of the goals in disentanglement learning.
Unlike the independence assumption, interpretability has rarely been exploited to encourage disentanglement in the unsupervised setting.
In this paper, we examine the interpretability of disentangled representations by investigating two questions: where to interpret and what to interpret.
arXiv Detail & Related papers (2021-04-07T11:22:02Z)
- LIMEtree: Consistent and Faithful Surrogate Explanations of Multiple Classes [7.031336702345381]
We introduce the novel paradigm of multi-class explanations.
We propose a local surrogate model based on multi-output regression trees -- called LIMEtree.
On top of strong fidelity guarantees, our implementation delivers a range of diverse explanation types.
arXiv Detail & Related papers (2020-05-04T12:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.