Counterfactual Explanations of Concept Drift
- URL: http://arxiv.org/abs/2006.12822v1
- Date: Tue, 23 Jun 2020 08:27:57 GMT
- Title: Counterfactual Explanations of Concept Drift
- Authors: Fabian Hinder, Barbara Hammer
- Abstract summary: Concept drift refers to the phenomenon that the distribution underlying the observed data changes over time.
We present a novel technology that characterizes concept drift in terms of the characteristic change of spatial features represented by typical examples.
- Score: 11.53362411363005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion of concept drift refers to the phenomenon that the distribution underlying the observed data changes over time; as a consequence, machine learning models may become inaccurate and need adjustment. While methods exist to detect concept drift or to adjust models in the presence of observed drift, the question of explaining drift has hardly been considered so far. This problem is important because it enables an inspection of the most prominent features in which drift manifests itself; hence it supports human understanding of the necessity of change and increases the acceptance of life-long learning models. In this paper we present a novel technology that characterizes concept drift in terms of the characteristic change of spatial features represented by typical examples, based on counterfactual explanations. We establish a formal definition of this problem, derive an efficient algorithmic solution based on counterfactual explanations, and demonstrate its usefulness in several examples.
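As a rough, hedged illustration of the approach described in the abstract (not the authors' actual algorithm), the sketch below casts drift explanation as a two-window problem: a classifier is trained to separate samples observed before and after a suspected drift point, the most characteristic pre-drift examples are selected, and each is paired with a nearby post-drift point that serves as a simple counterfactual, so that the feature-wise differences indicate where the drift manifests itself. The window arrays `X_before` and `X_after`, the helper name `explain_drift`, and the nearest-neighbour stand-in for a proper counterfactual generator are assumptions of this sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestNeighbors


def explain_drift(X_before, X_after, n_examples=3):
    """Illustrative sketch: characterize drift between two observation
    windows via characteristic examples and simple counterfactuals."""
    X = np.vstack([X_before, X_after])
    y = np.concatenate([np.zeros(len(X_before)), np.ones(len(X_after))])

    # If a classifier can tell the two windows apart better than chance,
    # the underlying distribution has changed (drift is present).
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    drift_score = cross_val_score(clf, X, y, cv=5).mean()
    clf.fit(X, y)

    # "Typical" pre-drift examples: points the classifier assigns most
    # confidently to the pre-drift window.
    p_before = clf.predict_proba(X_before)[:, 0]
    typical = X_before[np.argsort(-p_before)[:n_examples]]

    # Naive counterfactual stand-in: the nearest post-drift point shows how
    # a typical pre-drift example would have to change to look post-drift;
    # the feature-wise difference highlights where drift manifests itself.
    nn = NearestNeighbors(n_neighbors=1).fit(X_after)
    _, idx = nn.kneighbors(typical)
    counterfactuals = X_after[idx[:, 0]]
    return drift_score, typical, counterfactuals, counterfactuals - typical
```

The paper derives its counterfactuals from counterfactual explanations of such a model rather than from a nearest-neighbour lookup; the sketch only conveys the overall shape of the procedure.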
Related papers
- Online Drift Detection with Maximum Concept Discrepancy [13.48123472458282]
We propose MCD-DD, a novel concept drift detection method based on maximum concept discrepancy.
Our method can adaptively identify varying forms of concept drift by contrastive learning of concept embeddings.
arXiv Detail & Related papers (2024-07-07T13:57:50Z)
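MCD-DD itself relies on contrastively learned concept embeddings and a maximum-concept-discrepancy statistic that the summary above does not spell out; the sketch below only shows the generic ingredient of comparing embeddings of two time windows, with an assumed embedding function `embed` and a plain mean-embedding distance in place of the paper's discrepancy measure.

```python
import numpy as np


def embedding_discrepancy(embed, window_a, window_b):
    """Generic drift check between two time windows: embed both windows and
    compare their mean embeddings (sketch only; not the MCD-DD statistic)."""
    za = np.asarray([embed(x) for x in window_a])
    zb = np.asarray([embed(x) for x in window_b])
    # A large distance between the windows' mean embeddings suggests that
    # the concept underlying the data has changed.
    return float(np.linalg.norm(za.mean(axis=0) - zb.mean(axis=0)))
```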
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Model Based Explanations of Concept Drift [8.686667049158476]
Concept drift refers to the phenomenon that the distribution generating the observed data changes over time.
If drift is present, machine learning models can become inaccurate and need adjustment.
We present a novel technology characterizing concept drift in terms of the characteristic change of spatial features.
arXiv Detail & Related papers (2023-03-16T14:03:56Z)
- Feature Relevance Analysis to Explain Concept Drift -- A Case Study in Human Activity Recognition [3.5569545396848437]
This article studies how to detect and explain concept drift.
Drift detection is based on identifying a set of features having the largest relevance difference between the drifting model and a model known to be accurate.
It is shown that feature relevance analysis can be used not only to detect concept drift but also to explain the reason for the drift.
arXiv Detail & Related papers (2023-01-20T07:34:27Z)
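A minimal sketch of the relevance-difference idea above, under assumptions of my own: scikit-learn's permutation importance stands in for whatever relevance measure the case study uses, and `model_drifting` / `model_reference` denote an already fitted model affected by drift and an already fitted model known to be accurate. Features are ranked by the absolute difference of their relevances.

```python
import numpy as np
from sklearn.inspection import permutation_importance


def drift_relevant_features(model_drifting, model_reference, X, y, top_k=5):
    """Rank features by how strongly their relevance differs between a model
    affected by drift and a model known to be accurate (sketch only)."""
    rel_drift = permutation_importance(
        model_drifting, X, y, n_repeats=10, random_state=0
    ).importances_mean
    rel_ref = permutation_importance(
        model_reference, X, y, n_repeats=10, random_state=0
    ).importances_mean

    # Features with the largest relevance difference are the ones in which
    # the drift manifests itself; they serve both to detect and to explain it.
    diff = np.abs(rel_drift - rel_ref)
    return np.argsort(-diff)[:top_k], diff
```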
- On the Change of Decision Boundaries and Loss in Learning with Concept Drift [8.686667049158476]
Concept drift refers to the phenomenon that the distribution generating the observed data changes over time.
Many technologies for learning with drift rely on the interleaved test-train error (ITTE) as a quantity that approximates the model generalization error.
arXiv Detail & Related papers (2022-12-02T14:58:13Z)
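Because the interleaved test-train error is a standard streaming quantity, a short sketch may help: every incoming sample is first used for testing and only then for training, and the running error rate approximates the model's generalization error over time. The reliance on scikit-learn's `partial_fit` interface and the function name `interleaved_test_train_error` are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np


def interleaved_test_train_error(model, stream, classes):
    """Interleaved test-then-train (prequential) evaluation: each sample is
    first used for prediction and only afterwards for updating the model."""
    errors = []
    fitted = False
    for x, y in stream:                                 # stream of (features, label)
        x = np.asarray(x, dtype=float).reshape(1, -1)
        if fitted:                                      # test first ...
            errors.append(float(model.predict(x)[0] != y))
        model.partial_fit(x, [y], classes=classes)      # ... then train
        fitted = True
    return float(np.mean(errors)) if errors else float("nan")
```

For instance, calling it with an untrained `sklearn.linear_model.SGDClassifier()` and a generator of labelled samples gives a running estimate of the ITTE.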
- Change Detection for Local Explainability in Evolving Data Streams [72.4816340552763]
Local feature attribution methods have become a popular technique for post-hoc and model-agnostic explanations.
It is often unclear how local attributions behave in realistic, constantly evolving settings such as streaming and online applications.
We present CDLEEDS, a flexible and model-agnostic framework for detecting local change and concept drift.
arXiv Detail & Related papers (2022-09-06T18:38:34Z)
- From Concept Drift to Model Degradation: An Overview on Performance-Aware Drift Detectors [1.757501664210825]
Changes in the system on which a predictive machine learning model has been trained may lead to performance degradation during the system's life cycle.
Different terms have been used in the literature to refer to the same type of concept drift, and the same term has been used for various types.
This lack of unified terminology creates confusion when distinguishing between different concept drift variants.
arXiv Detail & Related papers (2022-03-21T15:48:13Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
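The claim that Shapley explainers and minimal-sufficient-subset explainers target different ground truths can be made concrete on a trivial model; the toy OR model, the all-zeros Shapley baseline, and the brute-force minimality check below are choices of this illustration, not taken from the paper.

```python
from itertools import combinations, product

# Toy model: logical OR of two binary features; the instance to explain is (1, 1).
f = lambda x: int(x[0] or x[1])
instance = (1, 1)

# Exact Shapley values with an all-zeros baseline.
def v(subset):
    """Coalition value: features in `subset` take the instance's values,
    all other features are set to the baseline value 0."""
    return f(tuple(instance[i] if i in subset else 0 for i in range(2)))

phi = [0.5 * ((v({i}) - v(set())) + (v({0, 1}) - v({1 - i}))) for i in range(2)]
# phi == [0.5, 0.5]: Shapley values split the credit between both features.

# Minimal sufficient subsets: smallest feature sets that fix the prediction.
def is_sufficient(subset):
    """Fixing `subset` to the instance's values guarantees the prediction,
    however the remaining features are set."""
    return all(
        f(tuple(instance[i] if i in subset else rest[i] for i in range(2)))
        == f(instance)
        for rest in product((0, 1), repeat=2)
    )

subsets = [set(c) for k in range(3) for c in combinations(range(2), k)]
minimal = [s for s in subsets
           if is_sufficient(s) and not any(is_sufficient(s - {i}) for i in s)]
# minimal == [{0}, {1}]: either feature alone suffices, so the two explainer
# families point at genuinely different "ground-truth" explanations here.
```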
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article, a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
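The mechanism described in the last entry (observing how the prediction of a fitted model reacts when individual features of a real data point are slightly raised or lowered) can be sketched as follows; the step size of one tenth of a feature's standard deviation, the function name `local_prediction_shifts`, and the use of class probabilities are assumptions of this illustration rather than the article's actual quantile-shift procedure.

```python
import numpy as np


def local_prediction_shifts(model, X, point, step_fraction=0.1):
    """For one real data point, nudge each feature up and down by a small
    step and record how the predicted class probabilities change."""
    steps = step_fraction * X.std(axis=0)          # per-feature step size
    base = model.predict_proba(point.reshape(1, -1))[0]
    shifts = {}
    for j in range(X.shape[1]):
        for direction in (+1, -1):
            perturbed = point.astype(float)        # fresh copy of the point
            perturbed[j] += direction * steps[j]
            proba = model.predict_proba(perturbed.reshape(1, -1))[0]
            shifts[(j, direction)] = proba - base  # change per predicted class
    return shifts
```

Large probability shifts for a feature suggest that the point sits close to a decision boundary along that feature, which is roughly the kind of neighborhood information the article aims to expose.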