Unsupervised Unlearning of Concept Drift with Autoencoders
- URL: http://arxiv.org/abs/2211.12989v2
- Date: Tue, 19 Sep 2023 11:23:32 GMT
- Title: Unsupervised Unlearning of Concept Drift with Autoencoders
- Authors: André Artelt, Kleanthis Malialis, Christos Panayiotou, Marios Polycarpou, Barbara Hammer
- Abstract summary: Concept drift refers to a change in the data distribution affecting the data stream of future samples.
This paper proposes an unsupervised and model-agnostic concept drift adaptation method at the global level.
- Score: 5.41354952642957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concept drift refers to a change in the data distribution affecting the data
stream of future samples. Consequently, learning models operating on the data
stream might become obsolete, and need costly and difficult adjustments such as
retraining or adaptation. Existing methods usually implement a local concept
drift adaptation scheme, where either incremental learning of the models is
used, or the models are completely retrained when a drift detection mechanism
triggers an alarm. This paper proposes an alternative approach in which an
unsupervised and model-agnostic concept drift adaptation method at the global
level is introduced, based on autoencoders. Specifically, the proposed method
aims to "unlearn" the concept drift without having to retrain or adapt any of
the learning models operating on the data. An extensive experimental evaluation
is conducted in two application domains. We consider a realistic water
distribution network with more than 30 models in place, from which we create
200 simulated data sets / scenarios. We further consider an image-related task
to demonstrate the effectiveness of our method.
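The core idea described in the abstract is to train an autoencoder on pre-drift data and use it to map drifted samples back toward the original distribution, so the downstream models keep operating unchanged. A minimal sketch of that idea with a linear autoencoder on synthetic data (the dimensions, the additive drift model, and all names are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 5, 3

# Pre-drift stream: samples lie near a 3-D subspace (the first three
# coordinates) of the 5-D feature space, plus small isotropic noise.
X = np.zeros((n, d))
X[:, :k] = rng.normal(size=(n, k))
X += 0.05 * rng.normal(size=(n, d))

# Tiny linear autoencoder trained on the pre-drift data only.
W_e = rng.normal(0, 0.1, size=(d, k))  # encoder
W_d = rng.normal(0, 0.1, size=(k, d))  # decoder
lr = 0.05
for _ in range(2000):
    Z = X @ W_e
    err = Z @ W_d - X                  # reconstruction residual
    g_d = Z.T @ err / n
    g_e = X.T @ (err @ W_d.T) / n
    W_d -= lr * g_d
    W_e -= lr * g_e

# Simulated abrupt concept drift: an additive shift that moves samples
# off the pre-drift subspace.
drift = np.array([0.0, 0.0, 0.0, 2.0, -2.0])
X_drifted = np.zeros((500, d))
X_drifted[:, :k] = rng.normal(size=(500, k))
X_drifted += 0.05 * rng.normal(size=(500, d)) + drift

# "Unlearning" the drift: pass drifted samples through the autoencoder,
# which projects them back toward the pre-drift distribution. Downstream
# models then consume X_repaired instead of X_drifted, with no retraining.
X_repaired = (X_drifted @ W_e) @ W_d

print(np.linalg.norm(X_drifted.mean(axis=0)))   # large shift present
print(np.linalg.norm(X_repaired.mean(axis=0)))  # shift largely removed
```

The autoencoder here acts as a global, model-agnostic preprocessing step: every model reading the stream benefits at once, which is what distinguishes this scheme from local per-model retraining.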
Related papers
- Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset [98.52916361979503]
We introduce a novel learning approach that automatically models and adapts to non-stationarity.
We show empirically that our approach performs well in non-stationary supervised and off-policy reinforcement learning settings.
arXiv Detail & Related papers (2024-11-06T16:32:40Z)
- Distilled Datamodel with Reverse Gradient Matching [74.75248610868685]
We introduce an efficient framework for assessing data impact, comprising offline training and online evaluation stages.
Our proposed method achieves comparable model behavior evaluation while significantly speeding up the process compared to the direct retraining method.
arXiv Detail & Related papers (2024-04-22T09:16:14Z)
- Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures [37.66676003679306]
Adapting to concept drift is a challenging task in machine learning.
In communication networks, such an issue emerges when performing traffic forecasting following a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
arXiv Detail & Related papers (2024-04-08T08:47:46Z)
- MORPH: Towards Automated Concept Drift Adaptation for Malware Detection [0.7499722271664147]
Concept drift is a significant challenge for malware detection.
Self-training has emerged as a promising approach to mitigate concept drift.
We propose MORPH -- an effective pseudo-label-based concept drift adaptation method.
arXiv Detail & Related papers (2024-01-23T14:25:43Z)
- Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can rather stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Autoregressive based Drift Detection Method [0.0]
We propose a new concept drift detection method based on autoregressive models called ADDM.
Our results show that this new concept drift detection method outperforms the state-of-the-art drift detection methods.
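ADDM's actual procedure is specified in the paper; as a toy illustration of the general autoregressive idea, one can fit an AR(1) model to a reference window and raise an alarm when one-step-ahead residuals exceed a threshold. Everything below (the AR order, window length, 4-sigma rule, and function names) is a hypothetical simplification, not ADDM itself:

```python
import numpy as np

def ar1_fit(x):
    # Least-squares fit of an AR(1) model: x[t] ≈ a * x[t-1] + b.
    A = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    coef, *_ = np.linalg.lstsq(A, x[1:], rcond=None)
    return coef  # (a, b)

def detect_drift(stream, window=100, k=4.0):
    """Flag time steps whose AR(1) one-step residual deviates by more
    than k standard deviations of the reference-window residuals."""
    ref = stream[:window]
    a, b = ar1_fit(ref)
    resid = ref[1:] - (a * ref[:-1] + b)
    mu, sigma = resid.mean(), resid.std()
    alarms = []
    for t in range(window, len(stream)):
        e = stream[t] - (a * stream[t - 1] + b)
        if abs(e - mu) > k * sigma:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(1)
# AR(1) stream with an abrupt mean shift (concept drift) at t = 300.
x = np.empty(600)
x[0] = 0.0
for t in range(1, 600):
    x[t] = 0.8 * x[t - 1] + rng.normal(0, 0.1)
x[300:] += 3.0

alarms = detect_drift(x, window=100, k=4.0)
print(alarms[0])  # an alarm fires at (or right after) the drift point
```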
arXiv Detail & Related papers (2022-03-09T14:36:16Z)
- Employing chunk size adaptation to overcome concept drift [2.277447144331876]
We propose a new Chunk Adaptive Restoration framework that can be adapted to any block-based data stream classification algorithm.
The proposed algorithm adjusts the data chunk size in the case of concept drift detection to minimize the impact of the change on the predictive performance of the used model.
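A hypothetical simplification of such a chunk-size schedule (the shrink-then-regrow policy, bounds, and names are assumptions for illustration, not the paper's algorithm): shrink the chunk right after a detected drift so the model retrains on fresh data quickly, then grow geometrically back toward the stable maximum.

```python
def next_chunk_size(current, drift_detected, min_size=50, max_size=1000, growth=2):
    """Return the size of the next data chunk: collapse to min_size
    when drift is detected, otherwise grow back toward max_size."""
    if drift_detected:
        return min_size
    return min(current * growth, max_size)

# Starting at 400, the schedule grows until a drift alarm, then restores:
sizes, size = [], 400
for drift in [False, True, False, False, False]:
    size = next_chunk_size(size, drift)
    sizes.append(size)
print(sizes)  # [800, 50, 100, 200, 400]
```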
arXiv Detail & Related papers (2021-10-25T12:36:22Z)
- Asynchronous Federated Learning for Sensor Data with Concept Drift [17.390098048134195]
Federated learning (FL) involves multiple distributed devices jointly training a shared model.
Most previous FL approaches assume that data on devices are fixed and stationary during the training process.
Concept drift makes the learning process complicated because of the inconsistency between existing and upcoming data.
We propose a novel approach, FedConD, to detect and deal with the concept drift on local devices.
arXiv Detail & Related papers (2021-09-01T02:06:42Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.