STUDD: A Student-Teacher Method for Unsupervised Concept Drift Detection
- URL: http://arxiv.org/abs/2103.00903v1
- Date: Mon, 1 Mar 2021 10:51:09 GMT
- Title: STUDD: A Student-Teacher Method for Unsupervised Concept Drift Detection
- Authors: Vitor Cerqueira, Heitor Murilo Gomes, Albert Bifet, Luis Torgo
- Abstract summary: We propose a novel approach to concept drift detection based on a student-teacher learning paradigm.
At run-time, the teacher predicts new instances and the mimicking loss of the student is monitored for concept drift detection.
In a set of experiments using 19 data streams, we show that the proposed approach can detect concept drift and presents competitive behaviour.
- Score: 10.326887191803275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concept drift detection is a crucial task in data stream evolving
environments. Most state-of-the-art approaches designed to tackle this
problem monitor the loss of predictive models. However, this approach falls
short in many real-world scenarios, where the true labels are not readily
available to compute the loss. In this context, there is increasing attention
to approaches that perform concept drift detection in an unsupervised manner,
i.e., without access to the true labels. We propose a novel approach to
unsupervised concept drift detection based on a student-teacher learning
paradigm. Essentially, we create an auxiliary model (student) to mimic the
behaviour of the primary model (teacher). At run-time, our approach uses the
teacher to predict new instances and monitors the mimicking loss of the
student for concept drift detection. In a set of experiments using 19 data
streams, we show that the proposed approach can detect concept drift and
presents competitive behaviour relative to state-of-the-art approaches.
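To make the idea above concrete, the following is a minimal sketch of the student-teacher monitoring loop: a teacher is trained on labelled data, a student is fit to mimic the teacher's predictions, and the student-teacher disagreement (mimicking loss) is tracked on an unlabelled stream with a simple Page-Hinkley change test. The models, synthetic data, and detector thresholds are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of student-teacher drift detection (illustrative, not the
# paper's exact setup): scikit-learn models stand in for the primary (teacher)
# and mimicking (student) learners, and a hand-rolled Page-Hinkley test is
# used as the change detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Labelled warm-up data: the only point where true labels are required.
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

teacher = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
# The student mimics the teacher's predictions rather than the true labels.
student = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, teacher.predict(X_train))

# Unlabelled stream with a covariate shift after instance 2000.
X_stream = rng.normal(size=(4000, 5))
X_stream[2000:, 0] += 3.0

# Page-Hinkley test over the student's mimicking loss (0/1 disagreement).
mean, cum_sum, min_cum, delta, threshold = 0.0, 0.0, 0.0, 0.005, 5.0
for t, x in enumerate(X_stream, start=1):
    x = x.reshape(1, -1)
    loss = float(teacher.predict(x)[0] != student.predict(x)[0])  # mimicking loss
    mean += (loss - mean) / t          # incremental mean of the loss
    cum_sum += loss - mean - delta     # cumulative deviation
    min_cum = min(min_cum, cum_sum)
    if cum_sum - min_cum > threshold:
        print(f"Drift signalled at instance {t}")
        break
```

In practice the mimicking loss can feed whichever change-detection test a deployment already uses; only the labelled warm-up set requires ground truth.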
Related papers
- DriftGAN: Using historical data for Unsupervised Recurring Drift Detection [0.6358693097475243]
In real-world applications, input data distributions rarely remain static over time; this change is known as concept drift.
Most concept drift detection methods detect drifts and signal the need to retrain the model.
We present an unsupervised method based on Generative Adversarial Networks (GANs) to detect concept drifts and identify whether a specific concept drift occurred in the past.
arXiv Detail & Related papers (2024-07-09T04:38:44Z)
- ACTRESS: Active Retraining for Semi-supervised Visual Grounding [52.08834188447851]
A previous study, RefTeacher, makes the first attempt to tackle semi-supervised visual grounding by adopting the teacher-student framework to provide pseudo confidence supervision and attention-based supervision.
This approach is incompatible with current state-of-the-art visual grounding models, which follow the Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z)
- Decoupling the Class Label and the Target Concept in Machine Unlearning [81.69857244976123]
Machine unlearning aims to adjust a trained model to approximate a retrained one that excludes a portion of training data.
Previous studies showed that class-wise unlearning is successful in forgetting the knowledge of a target class.
We propose a general framework, namely TARget-aware Forgetting (TARF).
arXiv Detail & Related papers (2024-06-12T14:53:30Z)
- MORPH: Towards Automated Concept Drift Adaptation for Malware Detection [0.7499722271664147]
Concept drift is a significant challenge for malware detection.
Self-training has emerged as a promising approach to mitigate concept drift.
We propose MORPH -- an effective pseudo-label-based concept drift adaptation method.
arXiv Detail & Related papers (2024-01-23T14:25:43Z)
- DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving [64.57963116462757]
State-of-the-art methods usually follow the 'Teacher-Student' paradigm.
The student model only has access to raw sensor data and conducts behavior cloning on the data collected by the teacher model.
We propose DriveAdapter, which employs adapters with the feature alignment objective function between the student (perception) and teacher (planning) modules.
arXiv Detail & Related papers (2023-08-01T09:21:53Z)
- When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z)
- Unsupervised Unlearning of Concept Drift with Autoencoders [5.41354952642957]
Concept drift refers to a change in the data distribution that affects future samples in the data stream.
This paper proposes an unsupervised and model-agnostic concept drift adaptation method at the global level.
arXiv Detail & Related papers (2022-11-23T14:52:49Z)
- Autoregressive based Drift Detection Method [0.0]
We propose a new concept drift detection method based on autoregressive models called ADDM.
Our results show that this new concept drift detection method outperforms the state-of-the-art drift detection methods.
arXiv Detail & Related papers (2022-03-09T14:36:16Z)
- A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all the samples belonging to the same instance class lie on the same subspace with small dimension.
Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z)
- Concept drift detection and adaptation for federated and continual learning [55.41644538483948]
Smart devices can collect vast amounts of data from their environment.
This data is suitable for training machine learning models, which can significantly improve their behavior.
In this work, we present a new method, called Concept-Drift-Aware Federated Averaging.
arXiv Detail & Related papers (2021-05-27T17:01:58Z)
- Automatic Learning to Detect Concept Drift [40.69280758487987]
We propose Meta-ADD, a novel framework that learns to classify concept drift by tracking the changed pattern of error rates.
Specifically, in the training phase, we extract meta-features based on the error rates of various concept drifts, after which a meta-detector is developed via a prototypical neural network.
In the detection phase, the learned meta-detector is fine-tuned to adapt to the corresponding data stream via stream-based active learning (a rough illustration follows this list).
arXiv Detail & Related papers (2021-05-04T11:10:39Z)
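As a rough, hedged illustration of the Meta-ADD idea above, the sketch below classifies windows of error rates by their nearest class prototype in a small meta-feature space (mean, standard deviation, and slope of the window). These meta-features, the synthetic labelled windows, and the nearest-prototype rule are assumptions standing in for the paper's prototypical neural network; the active-learning fine-tuning step is omitted.

```python
# Illustrative nearest-prototype meta-detector over error-rate meta-features,
# in the spirit of Meta-ADD. The meta-features and labelled windows below are
# hypothetical; the paper's actual feature extraction, prototypical network,
# and active-learning fine-tuning are not reproduced here.
import numpy as np

def meta_features(error_window: np.ndarray) -> np.ndarray:
    """Summarise a window of per-instance errors as (mean, std, slope)."""
    t = np.arange(len(error_window))
    slope = np.polyfit(t, error_window, 1)[0]
    return np.array([error_window.mean(), error_window.std(), slope])

# Hypothetical labelled windows: 0 = stable, 1 = abrupt drift, 2 = gradual drift.
rng = np.random.default_rng(0)
features, labels = [], []
for _ in range(100):
    stable = rng.binomial(1, 0.1, size=200).astype(float)
    abrupt = np.concatenate([rng.binomial(1, 0.1, 100), rng.binomial(1, 0.5, 100)]).astype(float)
    gradual = rng.binomial(1, np.linspace(0.1, 0.5, 200)).astype(float)
    for label, window in enumerate((stable, abrupt, gradual)):
        features.append(meta_features(window))
        labels.append(label)
X, y = np.stack(features), np.array(labels)

# One prototype per class: the mean meta-feature vector of its examples.
prototypes = np.stack([X[y == c].mean(axis=0) for c in range(3)])

def classify(error_window: np.ndarray) -> int:
    """Assign a window to the class of the nearest prototype."""
    f = meta_features(error_window)
    return int(np.argmin(np.linalg.norm(prototypes - f, axis=1)))

new_window = np.concatenate([rng.binomial(1, 0.1, 100), rng.binomial(1, 0.6, 100)]).astype(float)
print(classify(new_window))  # expected: 1 (abrupt drift) for this synthetic window
```

In a full prototypical-network setup, a learned embedding would replace the raw meta-features; the nearest-prototype decision rule itself stays the same.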
This list is automatically generated from the titles and abstracts of the papers on this site.