Model Based Explanations of Concept Drift
- URL: http://arxiv.org/abs/2303.09331v1
- Date: Thu, 16 Mar 2023 14:03:56 GMT
- Title: Model Based Explanations of Concept Drift
- Authors: Fabian Hinder, Valerie Vaquet, Johannes Brinkrolf, Barbara Hammer
- Abstract summary: Concept drift refers to the phenomenon that the distribution generating the observed data changes over time.
If drift is present, machine learning models can become inaccurate and need adjustment.
We present a novel technology characterizing concept drift in terms of the characteristic change of spatial features.
- Score: 8.686667049158476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion of concept drift refers to the phenomenon that the distribution
generating the observed data changes over time. If drift is present, machine
learning models can become inaccurate and need adjustment. While methods exist
to detect concept drift or to adjust models in the presence of observed drift,
the question of explaining drift, i.e., describing the potentially
complex and high dimensional change of distribution in a human-understandable
fashion, has hardly been considered so far. This problem is of importance since
it enables an inspection of the most prominent characteristics of how and where
drift manifests itself. Hence, it enables human understanding of the change and
it increases acceptance of life-long learning models. In this paper, we present
a novel technology characterizing concept drift in terms of the characteristic
change of spatial features based on various explanation techniques. To do so,
we propose a methodology to reduce the explanation of concept drift to an
explanation of models that are trained in a suitable way extracting relevant
information regarding the drift. In this way, a large variety of explanation
schemes becomes available, and a suitable method can be selected for the drift
explanation problem at hand. We outline the potential of this approach and
demonstrate its usefulness in several examples.
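Below is a minimal sketch of the reduction the abstract describes, under the assumption that "training a model in a suitable way" amounts to training a classifier to discriminate samples drawn before and after a suspected drift point, so that any model explanation scheme can then be applied to it. The synthetic data, the random forest, and permutation importance as the explanation scheme are illustrative choices, not the paper's prescribed setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stream: after the suspected drift point, feature 2 shifts.
X_before = rng.normal(size=(500, 5))
X_after = rng.normal(size=(500, 5))
X_after[:, 2] += 1.5

X = np.vstack([X_before, X_after])
y = np.array([0] * 500 + [1] * 500)  # label = time window, not a class label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Accuracy well above 0.5 indicates that the two windows differ, i.e., drift.
print("window-discrimination accuracy:", clf.score(X_te, y_te))

# Any model explanation scheme can now explain the drift; here,
# permutation importance highlights which features changed.
imp = permutation_importance(clf, X_te, y_te, random_state=0)
print("per-feature drift relevance:", imp.importances_mean.round(3))
```

Because the reduction only requires a fitted model, the permutation importance step could be swapped for any other explanation scheme, which is exactly the flexibility the abstract points to.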
Related papers
- Online Drift Detection with Maximum Concept Discrepancy [13.48123472458282] (2024-07-07)
We propose MCD-DD, a novel concept drift detection method based on maximum concept discrepancy.
Our method can adaptively identify varying forms of concept drift by contrastive learning of concept embeddings.
- An Axiomatic Approach to Model-Agnostic Concept Explanations [67.84000759813435] (2024-01-12)
We propose an approach to concept explanations that satisfy three natural axioms: linearity, recursivity, and similarity.
We then establish connections with previous concept explanation methods, offering insight into their varying semantic meanings.
- Learning with Explanation Constraints [91.23736536228485] (2023-03-25)
We provide a learning-theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
- Feature Relevance Analysis to Explain Concept Drift -- A Case Study in Human Activity Recognition [3.5569545396848437] (2023-01-20)
This article studies how to detect and explain concept drift.
Drift detection is based on identifying the set of features having the largest relevance difference between the drifting model and a model known to be accurate (a minimal sketch of this comparison follows the list).
It is shown that feature relevance analysis can be used not only to detect concept drift but also to explain the reason for the drift.
- On the Change of Decision Boundaries and Loss in Learning with Concept Drift [8.686667049158476] (2022-12-02)
Concept drift refers to the phenomenon that the distribution generating the observed data changes over time.
Many technologies for learning with drift rely on the interleaved test-train error (ITTE) as a quantity which approximates the model generalization error (a minimal sketch of ITTE follows the list).
- From Concept Drift to Model Degradation: An Overview on Performance-Aware Drift Detectors [1.757501664210825] (2022-03-21)
Changes in the system on which a predictive machine learning model has been trained may lead to performance degradation during the system's life cycle.
The literature has used different terms to refer to the same type of concept drift, and the same term for different types.
This lack of unified terminology creates confusion when distinguishing between concept drift variants.
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821] (2021-03-18)
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
- Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087] (2021-02-22)
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using a perceptual distance in the surrogate explainer creates more coherent explanations for the distorted and reference images.
- Deducing neighborhoods of classes from a fitted model [68.8204255655161] (2020-09-11)
In this article a new kind of interpretable machine learning method is presented, which can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used, and the changes of the prediction after slightly raising or lowering specific features are observed (a minimal sketch of this probing follows the list).
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325] (2020-08-21)
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model that improves the transparency of the representation learning process.
- Counterfactual Explanations of Concept Drift [11.53362411363005] (2020-06-23)
Concept drift refers to the phenomenon that the distribution underlying the observed data changes over time.
We present a novel technology which characterizes concept drift in terms of the characteristic change of spatial features represented by typical examples.
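As referenced in the Feature Relevance Analysis entry above, here is a minimal sketch of comparing feature relevances between a reference model known to be accurate and a model trained on recent, possibly drifted data. The synthetic task and the use of random-forest impurity importances as the relevance measure are assumptions, not that paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Before drift, feature 0 determines the label; afterwards, feature 3 does.
X_ref = rng.normal(size=(800, 5))
y_ref = (X_ref[:, 0] > 0).astype(int)
X_new = rng.normal(size=(800, 5))
y_new = (X_new[:, 3] > 0).astype(int)

ref_model = RandomForestClassifier(random_state=0).fit(X_ref, y_ref)
new_model = RandomForestClassifier(random_state=0).fit(X_new, y_new)

# Features with the largest relevance difference point to where drift occurred.
diff = np.abs(ref_model.feature_importances_ - new_model.feature_importances_)
print("relevance differences:", diff.round(3))
print("most drift-affected feature:", int(diff.argmax()))
```

As referenced in the entry on decision boundaries and loss, here is a minimal sketch of the interleaved test-train error (ITTE): each incoming sample is used for testing before it is used for training. The linear model and the synthetic stream are illustrative stand-ins.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] > 0).astype(int)
y[1000:] = (X[1000:, 1] > 0).astype(int)  # concept drift at t = 1000

model = SGDClassifier(random_state=0)
errors = []
for t, (x_t, y_t) in enumerate(zip(X, y)):
    if t > 0:  # test first (the model has seen t samples so far) ...
        errors.append(int(model.predict(x_t.reshape(1, -1))[0] != y_t))
    model.partial_fit(x_t.reshape(1, -1), [y_t], classes=[0, 1])  # ... then train

# The windowed ITTE rises after the drift point, signaling degradation.
print("ITTE before drift:", np.mean(errors[500:999]).round(3))
print("ITTE after drift: ", np.mean(errors[1000:1500]).round(3))
```

As referenced in the entry on deducing class neighborhoods, here is a minimal sketch of the perturbation idea: slightly raise or lower individual features of a real data point and observe whether the predicted class flips. The dataset, model, and step size are assumptions, and this sketch only approximates that paper's quantile-shift construction.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

x = X[60].copy()                      # a real point of interest
base = clf.predict(x.reshape(1, -1))[0]
step = 0.25 * X.std(axis=0)           # per-feature perturbation size (assumed)

# Probe each feature in both directions and report class changes.
for j in range(X.shape[1]):
    for sign in (-1, 1):
        x_pert = x.copy()
        x_pert[j] += sign * step[j]
        pred = clf.predict(x_pert.reshape(1, -1))[0]
        if pred != base:
            print(f"feature {j}, shift {sign:+d}: class {base} -> {pred}")
```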
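These sketches share one design point: each turns an abstract drift or explanation notion into a question about a fitted model, which is the same reduction the main paper advocates.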
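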