Feature Relevance Analysis to Explain Concept Drift -- A Case Study in
Human Activity Recognition
- URL: http://arxiv.org/abs/2301.08453v1
- Date: Fri, 20 Jan 2023 07:34:27 GMT
- Title: Feature Relevance Analysis to Explain Concept Drift -- A Case Study in
Human Activity Recognition
- Authors: Pekka Siirtola and Juha R\"oning
- Abstract summary: This article studies how to detect and explain concept drift.
Drift detection is based on identifying a set of features having the largest relevance difference between the drifting model and a model known to be accurate.
It is shown that feature relevance analysis can be used not only to detect the concept drift but also to explain the reason for the drift.
- Score: 3.5569545396848437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article studies how to detect and explain concept drift. Human activity
recognition is used as a case study together with an online batch learning
situation where the quality of the labels used in the model updating process
starts to decrease. Drift detection is based on identifying a set of features
having the largest relevance difference between the drifting model and a model
that is known to be accurate, and monitoring how the relevance of these features
changes over time. As the main result of this article, it is shown that feature
relevance analysis can be used not only to detect the concept drift but also to
explain the reason for the drift when a limited number of typical reasons for
the concept drift are predefined. To explain the reason for the concept drift,
it is studied how these predefined reasons affect feature relevance. In
fact, it is shown that each of them has a unique effect on feature relevance,
and these effects can be used to explain the reason for the concept drift.
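The detection scheme described in the abstract can be sketched in code. This is a minimal illustration only: the paper does not specify a relevance estimator, so the relevance scores are assumed to come from some external method (e.g. permutation importance), and all function names, parameters, and thresholds below are hypothetical, not taken from the paper.

```python
def top_relevance_diff_features(ref_relevance, cur_relevance, k=3):
    """Indices of the k features whose relevance differs most between
    an accurate reference model and a possibly drifting model."""
    diffs = [abs(r - c) for r, c in zip(ref_relevance, cur_relevance)]
    return sorted(range(len(diffs)), key=lambda i: diffs[i], reverse=True)[:k]

def drift_flag(relevance_history, watched, threshold=0.1):
    """Flag drift when the mean absolute relevance change of the watched
    features between the start and end of the window exceeds a threshold."""
    first, last = relevance_history[0], relevance_history[-1]
    score = sum(abs(last[i] - first[i]) for i in watched) / len(watched)
    return score > threshold, score

# Toy usage: feature 0's relevance collapses over three batches, so it is
# selected for monitoring and the change triggers the drift flag.
watched = top_relevance_diff_features([0.6, 0.2, 0.2], [0.1, 0.4, 0.2], k=2)
flagged, score = drift_flag([[0.6, 0.2, 0.2],
                             [0.3, 0.3, 0.2],
                             [0.1, 0.4, 0.2]], watched)
```

The key design point mirrored from the abstract is that monitoring is restricted to the features with the largest relevance gap, rather than tracking the full feature set.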
Related papers
- Methods for Generating Drift in Text Streams [49.3179290313959]
Concept drift is a frequent phenomenon in real-world datasets and corresponds to changes in data distribution over time.
This paper provides four textual drift generation methods to ease the production of datasets with labeled drifts.
Results show that all methods have their performance degraded right after the drifts, and the incremental SVM is the fastest to run and recover the previous performance levels.
arXiv Detail & Related papers (2024-03-18T23:48:33Z) - Explaining Drift using Shapley Values [0.0]
Machine learning models often deteriorate in their performance when they are used to predict the outcomes over data on which they were not trained.
There is no framework to identify the drivers behind the drift in model performance.
We propose a novel framework - DBShap that uses principled Shapley values to identify the main contributors of the drift.
arXiv Detail & Related papers (2024-01-18T07:07:42Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Model Based Explanations of Concept Drift [8.686667049158476]
Concept drift refers to the phenomenon that the distribution generating the observed data changes over time.
If drift is present, machine learning models can become inaccurate and need adjustment.
We present a novel technology characterizing concept drift in terms of the characteristic change of spatial features.
arXiv Detail & Related papers (2023-03-16T14:03:56Z) - DOMINO: Visual Causal Reasoning with Time-Dependent Phenomena [59.291745595756346]
We propose a set of visual analytics methods that allow humans to participate in the discovery of causal relations associated with windows of time delay.
Specifically, we leverage a well-established method, logic-based causality, to enable analysts to test the significance of potential causes.
Since an effect can be a cause of other effects, we allow users to aggregate different temporal cause-effect relations found with our method into a visual flow diagram.
arXiv Detail & Related papers (2023-03-12T03:40:21Z) - Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z) - Are Concept Drift Detectors Reliable Alarming Systems? -- A Comparative
Study [6.7961908135481615]
Concept drift impacts the performance of machine learning models.
In this study, we assess the reliability of concept drift detectors to identify drift in time.
Our findings aim to help practitioners understand which drift detector should be employed in different situations.
arXiv Detail & Related papers (2022-11-23T16:31:15Z) - Exploring Inconsistent Knowledge Distillation for Object Detection with
Data Augmentation [66.25738680429463]
Knowledge Distillation (KD) for object detection aims to train a compact detector by transferring knowledge from a teacher model.
We propose inconsistent knowledge distillation (IKD) which aims to distill knowledge inherent in the teacher model's counter-intuitive perceptions.
Our method outperforms state-of-the-art KD baselines on one-stage, two-stage and anchor-free object detectors.
arXiv Detail & Related papers (2022-09-20T16:36:28Z) - Analysis of Drifting Features [11.305591390070123]
Concept drift refers to the phenomenon that the distribution underlying the observed data changes over time.
We distinguish between drift inducing features, for which the observed feature drift cannot be explained by any other feature, and faithfully drifting features, which correlate with the present drift of other features.
arXiv Detail & Related papers (2020-12-01T14:09:19Z) - Counterfactual Explanations of Concept Drift [11.53362411363005]
Concept drift refers to the phenomenon that the distribution underlying the observed data changes over time.
We present a novel technology, which characterizes concept drift in terms of the characteristic change of spatial features represented by typical examples.
arXiv Detail & Related papers (2020-06-23T08:27:57Z) - Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
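The DBShap entry above attributes drift in model performance to individual features via Shapley values. DBShap's actual formulation is in that paper; purely as an illustration of the underlying idea, exact Shapley values over a small feature set can be computed by enumerating coalitions. The value function and feature names here are hypothetical stand-ins: the function maps a feature subset to the amount of performance drift that subset explains.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by coalition enumeration (feasible only for
    small feature sets). value_fn maps a frozenset of features to the
    amount of performance drift explained by that subset."""
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for subset in combinations(rest, r):
                s = frozenset(subset)
                # Standard Shapley weight for a coalition of size r.
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive drift: each feature contributes a fixed share of the error
# increase, so the Shapley values recover the shares exactly.
contrib = {"accel_x": 0.05, "accel_y": 0.02, "gyro_z": 0.0}
drift_explained = lambda subset: sum(contrib[f] for f in subset)
phi = shapley_values(list(contrib), drift_explained)
```

For a non-additive value function (feature interactions), the Shapley values distribute the interaction terms fairly across the participating features, which is what makes this attribution principled for drift explanation.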
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.