Automatic Learning to Detect Concept Drift
- URL: http://arxiv.org/abs/2105.01419v1
- Date: Tue, 4 May 2021 11:10:39 GMT
- Title: Automatic Learning to Detect Concept Drift
- Authors: Hang Yu, Tianyu Liu, Jie Lu and Guangquan Zhang
- Abstract summary: We propose Meta-ADD, a novel framework that learns to classify concept drift by tracking the changed pattern of error rates.
Specifically, in the training phase, we extract meta-features based on the error rates of various concept drifts, after which a meta-detector is developed via a prototypical neural network.
In the detection phase, the learned meta-detector is fine-tuned to adapt to the corresponding data stream via stream-based active learning.
- Score: 40.69280758487987
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Many methods have been proposed to detect concept drift, i.e., the change in
the distribution of streaming data, because concept drift causes a decrease in
the prediction accuracy of algorithms. However, most current detection methods
assess only the degree of change in the data distribution and cannot identify
the type of concept drift. In this paper, we
propose Active Drift Detection with Meta learning (Meta-ADD), a novel framework
that learns to classify concept drift by tracking the changed pattern of error
rates. Specifically, in the training phase, we extract meta-features based on
the error rates of various concept drifts, after which a meta-detector is
developed via a prototypical neural network by representing various concept
drift classes as corresponding prototypes. In the detection phase, the learned
meta-detector is fine-tuned to adapt to the corresponding data stream via
stream-based active learning. Hence, Meta-ADD uses machine learning to learn to
detect concept drifts and identify their types automatically, which can
directly support drift understanding. The experimental results verify the
effectiveness of Meta-ADD.
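To make the idea in the abstract concrete, the sketch below classifies the type of drift in a window of error rates by comparing its meta-features to per-class prototypes, in the spirit of a prototypical network. It is not the authors' Meta-ADD implementation: the meta-features, the drift classes, and the helper names (extract_meta_features, build_prototypes, classify_drift) are illustrative assumptions, and the paper's stream-based active-learning fine-tuning step is omitted.
```python
# Minimal sketch (assumed details, not the paper's code): nearest-prototype
# classification of drift type from error-rate meta-features.
import numpy as np

def extract_meta_features(error_rates: np.ndarray) -> np.ndarray:
    """Hypothetical meta-features summarising how the error rate changes over a window."""
    diffs = np.diff(error_rates)
    return np.array([
        error_rates.mean(),                # overall error level
        error_rates.std(),                 # volatility
        diffs.mean(),                      # average trend
        np.abs(diffs).max(),               # sharpest single change
        error_rates[-1] - error_rates[0],  # net change over the window
    ])

def build_prototypes(support: dict) -> dict:
    """One prototype per drift class: the mean of that class's support meta-feature vectors."""
    return {label: np.mean(np.stack(vecs), axis=0) for label, vecs in support.items()}

def classify_drift(error_rates: np.ndarray, prototypes: dict) -> str:
    """Assign the drift class whose prototype is nearest in meta-feature space."""
    query = extract_meta_features(error_rates)
    distances = {label: np.linalg.norm(query - proto) for label, proto in prototypes.items()}
    return min(distances, key=distances.get)

# Toy usage: build prototypes from labelled error-rate windows, then classify a new window.
rng = np.random.default_rng(0)
support = {
    "sudden": [extract_meta_features(
        np.r_[np.full(50, 0.10), np.full(50, 0.40)] + rng.normal(0, 0.01, 100))
        for _ in range(5)],
    "no_drift": [extract_meta_features(
        np.full(100, 0.10) + rng.normal(0, 0.01, 100))
        for _ in range(5)],
}
prototypes = build_prototypes(support)
print(classify_drift(np.r_[np.full(50, 0.10), np.full(50, 0.40)], prototypes))  # -> "sudden"
```
In the paper, the learned meta-detector is further fine-tuned to the target stream via stream-based active learning; in a sketch like this, that step could be approximated by updating the prototypes with meta-features from windows whose drift labels are actively queried.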
Related papers
- A Neighbor-Searching Discrepancy-based Drift Detection Scheme for Learning Evolving Data [40.00357483768265]
This work presents a novel real concept drift detection method based on Neighbor-Searching Discrepancy.
The proposed method is able to detect real concept drift with high accuracy while ignoring virtual drift.
It can also indicate the direction of the classification boundary change by identifying the invasion or retreat of a certain class.
arXiv Detail & Related papers (2024-05-23T04:03:36Z)
- Methods for Generating Drift in Text Streams [49.3179290313959]
Concept drift is a frequent phenomenon in real-world datasets and corresponds to changes in data distribution over time.
This paper provides four textual drift generation methods to ease the production of datasets with labeled drifts.
Results show that the performance of all methods degrades right after the drifts, and that the incremental SVM is the fastest to run and to recover its previous performance levels.
arXiv Detail & Related papers (2024-03-18T23:48:33Z)
- What to Remember: Self-Adaptive Continual Learning for Audio Deepfake Detection [53.063161380423715]
Existing detection models have shown remarkable success in discriminating known deepfake audio, but struggle when encountering new attack types.
We propose a continual learning approach called Radian Weight Modification (RWM) for audio deepfake detection.
arXiv Detail & Related papers (2023-12-15T09:52:17Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- CADM: Confusion Model-based Detection Method for Real-drift in Chunk Data Stream [3.0885191226198785]
Concept drift detection has attracted considerable attention due to its importance in many real-world applications such as health monitoring and fault diagnosis.
We propose a new approach to detect real-drift in the chunk data stream with limited annotations based on concept confusion.
arXiv Detail & Related papers (2023-03-25T08:59:27Z)
- Real-time Object Detection for Streaming Perception [84.2559631820007]
Streaming perception is proposed to jointly evaluate latency and accuracy with a single metric for online video perception.
We build a simple and effective framework for streaming perception.
Our method achieves competitive performance on Argoverse-HD dataset and improves the AP by 4.9% compared to the strong baseline.
arXiv Detail & Related papers (2022-03-23T11:33:27Z)
- Autoregressive based Drift Detection Method [0.0]
We propose a new concept drift detection method based on autoregressive models called ADDM.
Our results show that this new concept drift detection method outperforms the state-of-the-art drift detection methods.
arXiv Detail & Related papers (2022-03-09T14:36:16Z)
- Detecting Concept Drift With Neural Network Model Uncertainty [0.0]
Uncertainty Drift Detection (UDD) is able to detect drifts without access to true labels.
In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model.
We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks.
arXiv Detail & Related papers (2021-07-05T08:56:36Z)
- Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining [15.49323098362628]
We propose a framework for robust concept drift detection in the presence of adversarial and poisoning attacks.
We introduce a taxonomy of two types of adversarial concept drift, as well as a robust trainable drift detector.
We also introduce Relative Loss of Robustness - a novel measure for evaluating the performance of concept drift detectors under poisoning attacks.
arXiv Detail & Related papers (2020-09-20T18:46:31Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale the training of such models with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)