DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift
- URL: http://arxiv.org/abs/2003.06508v2
- Date: Sun, 2 Aug 2020 15:05:33 GMT
- Title: DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift
- Authors: Ashraf Tahmasbi, Ellango Jothimurugesan, Srikanta Tirthapura, Phillip
B. Gibbons
- Abstract summary: When learning from streaming data, a change in the data distribution, also known as concept drift, can render a previously-learned model inaccurate.
We present an adaptive learning algorithm that extends previous drift-detection-based methods by incorporating drift detection into a broader stable-state/reactive-state process.
The algorithm is generic in its base learner and can be applied across a variety of supervised learning problems.
- Score: 12.579800289829963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When learning from streaming data, a change in the data distribution, also
known as concept drift, can render a previously-learned model inaccurate and
require training a new model. We present an adaptive learning algorithm that
extends previous drift-detection-based methods by incorporating drift detection
into a broader stable-state/reactive-state process. The advantage of our
approach is that we can use aggressive drift detection in the stable state to
achieve a high detection rate, but mitigate the false positive rate of
standalone drift detection via a reactive state that reacts quickly to true
drifts while eliminating most false positives. The algorithm is generic in its
base learner and can be applied across a variety of supervised learning
problems. Our theoretical analysis shows that the risk of the algorithm is
competitive with that of an algorithm with oracle knowledge of when (abrupt) drifts
occur. Experiments on synthetic and real datasets with concept drifts confirm
our theoretical analysis.
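To make the stable-state/reactive-state process concrete, below is a minimal Python sketch of the idea, assuming a scikit-learn base learner. The drift test, the slack `delta`, and the window length `reactive_len` are illustrative placeholders, not DriftSurf's exact algorithm or its risk-competitive analysis.

```python
# Toy stable-state/reactive-state learner in the spirit of DriftSurf.
# The detection rule and all thresholds are illustrative stand-ins.
from sklearn.linear_model import SGDClassifier

class StableReactiveLearner:
    def __init__(self, classes, delta=0.05, reactive_len=200):
        self.classes = classes
        self.delta = delta                  # slack before suspecting a drift
        self.reactive_len = reactive_len    # length of the reactive window
        self.model = SGDClassifier(loss="log_loss")
        self.best_err = None                # lowest error seen in stable state
        self.candidate = None               # fresh model raced in reactive state
        self.seen = 0
        self.err_old = self.err_new = 0.0

    def partial_fit(self, X, y):
        if self.candidate is not None:
            # Reactive state: race the old model against a fresh candidate.
            if self.seen > 0:   # compare on batches both models have predicted
                self.err_old += 1.0 - self.model.score(X, y)
                self.err_new += 1.0 - self.candidate.score(X, y)
            self.candidate.partial_fit(X, y, classes=self.classes)
            self.seen += len(X)
        elif self.best_err is not None:
            # Stable state: aggressive drift check on the incoming batch.
            err = 1.0 - self.model.score(X, y)
            if err < self.best_err:
                self.best_err = err
            elif err > self.best_err + self.delta:
                self.candidate = SGDClassifier(loss="log_loss")
                self.seen = 0
                self.err_old = self.err_new = 0.0
        self.model.partial_fit(X, y, classes=self.classes)
        if self.best_err is None:
            self.best_err = 1.0             # arm detection after (re)start
        if self.candidate is not None and self.seen >= self.reactive_len:
            if self.err_new < self.err_old:  # true drift: adopt the new model
                self.model, self.best_err = self.candidate, 1.0
            self.candidate = None            # false alarm: keep the old model
```

On a true drift the fresh candidate quickly outperforms the stale model and is adopted; on a false alarm the old model wins the reactive window and nothing is lost, which is how the reactive state suppresses false positives from the aggressive detector.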
Related papers
- Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset [98.52916361979503]
We introduce a novel learning approach that automatically models and adapts to non-stationarity.
We show empirically that our approach performs well in non-stationary supervised and off-policy reinforcement learning settings.
arXiv Detail & Related papers (2024-11-06T16:32:40Z)
- DriftGAN: Using historical data for Unsupervised Recurring Drift Detection [0.6358693097475243]
In real-world applications, input data distributions are rarely static over a period of time, a phenomenon known as concept drift.
Most concept drift detection methods focus on detecting a drift and signalling that the model needs to be retrained.
We present an unsupervised method based on Generative Adversarial Networks (GANs) to detect concept drifts and identify whether a specific concept drift occurred in the past.
arXiv Detail & Related papers (2024-07-09T04:38:44Z)
- Unsupervised Concept Drift Detection from Deep Learning Representations in Real-time [5.999777817331315]
Concept Drift is a phenomenon in which the underlying data distribution and statistical properties of a target domain change over time.
We propose DriftLens, an unsupervised real-time concept drift detection framework.
It works on unstructured data by exploiting the distribution distances of deep learning representations.
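A minimal sketch of that core idea, assuming the embeddings are already extracted: fit Gaussians to a reference window and to the current window of representations, and threshold a Fréchet-style distance between them. DriftLens's actual distance estimation and threshold calibration are more involved than this.

```python
# Toy drift check on deep representations: compare a reference window of
# embeddings to the current window with a Frechet-style distance
# (means + covariances). The threshold here is an illustrative assumption.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Frechet distance between Gaussians fit to two embedding batches."""
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    cov_a = np.cov(a, rowvar=False)
    cov_b = np.cov(b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real   # discard numerical imaginary residue
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(500, 32))  # embeddings before drift
current = rng.normal(0.5, 1.0, size=(500, 32))    # shifted embeddings

threshold = 1.0  # would be calibrated on known-clean windows in practice
if frechet_distance(reference, current) > threshold:
    print("concept drift suspected")
```

In a streaming deployment, the reference window would come from the data the current model was trained on, and the threshold from the distance's distribution across known-clean windows.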
arXiv Detail & Related papers (2024-06-24T23:41:46Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs a label distribution at each cascaded block and does not require generating additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
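A minimal PyTorch sketch of the cascaded structure, with placeholder layer sizes: each block carries its own predictor head trained by a local loss, and `detach()` stops gradients from crossing block boundaries, which is what makes independent (and hence parallel) training possible. This illustrates the structure only, not CaFo's exact blocks or objectives.

```python
# Cascade of blocks, each with its own head and local loss; no gradient
# ever flows across blocks, so each could be trained independently.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_in, d_out, n_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
        self.head = nn.Linear(d_out, n_classes)  # per-block label distribution

    def forward(self, x):
        h = self.body(x)
        return h, self.head(h)

blocks = [Block(784, 256, 10), Block(256, 128, 10)]
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784)            # stand-in batch of inputs
y = torch.randint(0, 10, (64,))     # stand-in labels
for block, opt in zip(blocks, opts):
    h, logits = block(x)
    loss = loss_fn(logits, y)       # local objective; no cross-block backprop
    opt.zero_grad()
    loss.backward()
    opt.step()
    x = h.detach()                  # next block sees features, not gradients
```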
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
But deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Autoregressive based Drift Detection Method [0.0]
We propose a new concept drift detection method based on autoregressive models called ADDM.
Our results show that this new concept drift detection method outperforms the state-of-the-art drift detection methods.
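ADDM's exact formulation is not spelled out in this summary, but the general autoregressive recipe can be sketched: fit an AR model to a drift-free prefix of a monitored signal (for example, the model's error rate) and flag drift when one-step-ahead residuals become anomalous. Everything below, including the signal, order, and z-score trigger, is an illustrative assumption rather than ADDM itself.

```python
# Generic autoregressive drift-detection sketch (not ADDM's exact recipe).
import numpy as np

def fit_ar(signal: np.ndarray, order: int) -> np.ndarray:
    """Least-squares AR(order) coefficients for a 1-D signal."""
    X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    y = signal[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def detect_drift(signal, order=5, train_len=200, z=4.0):
    coef = fit_ar(signal[:train_len], order)
    # Residual scale estimated on the same drift-free prefix.
    preds = np.array([signal[i:i + order] @ coef
                      for i in range(train_len - order)])
    sigma = np.std(signal[order:train_len] - preds) + 1e-8
    for t in range(train_len, len(signal)):
        err = signal[t] - signal[t - order:t] @ coef
        if abs(err) > z * sigma:
            return t          # first index departing from the AR fit
    return None

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 100)])
print(detect_drift(stream))   # ~300: where the mean shift begins
```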
arXiv Detail & Related papers (2022-03-09T14:36:16Z)
- Detecting Concept Drift With Neural Network Model Uncertainty [0.0]
Uncertainty Drift Detection (UDD) is able to detect drifts without access to true labels.
In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model.
We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks.
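A toy sketch of the label-free idea, assuming a dropout-equipped PyTorch model: estimate predictive uncertainty with Monte Carlo dropout and flag drift when the stream's mean uncertainty climbs well above a reference level. UDD couples the uncertainty signal with a proper change detector; the fixed ratio test below is only a stand-in.

```python
# Uncertainty-driven drift check via Monte Carlo dropout (illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(64, 3))

def mc_dropout_entropy(x: torch.Tensor, passes: int = 20) -> torch.Tensor:
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(passes)]).mean(0)
    # Predictive entropy of the averaged class probabilities.
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1)

reference_u = mc_dropout_entropy(torch.randn(256, 20)).mean()
batch_u = mc_dropout_entropy(torch.randn(64, 20) + 2.0).mean()
if batch_u > 1.5 * reference_u:   # illustrative trigger; no labels needed
    print("uncertainty drift suspected")
```

Because the signal is the model's own uncertainty rather than raw inputs, the test reacts only to distribution changes that actually affect the predictor.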
arXiv Detail & Related papers (2021-07-05T08:56:36Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining [15.49323098362628]
We propose a framework for robust concept drift detection in the presence of adversarial and poisoning attacks.
We introduce a taxonomy of two types of adversarial concept drift, as well as a robust trainable drift detector.
We also introduce Relative Loss of Robustness - a novel measure for evaluating the performance of concept drift detectors under poisoning attacks.
arXiv Detail & Related papers (2020-09-20T18:46:31Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
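A minimal sketch of the single-forward-pass idea, with placeholder dimensions: embed the input, score it against per-class centroids with an RBF kernel, and treat a low maximum kernel value as "no class is close", i.e., reject as out-of-distribution. The paper's training loss and centroid update scheme are omitted here.

```python
# Distance-based uncertainty in one forward pass (illustrative sketch).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))
centroids = torch.randn(10, 16)    # one learned centroid per class
sigma = 1.0                        # kernel length scale (hyperparameter)

def certainty(x: torch.Tensor) -> torch.Tensor:
    z = encoder(x)                                    # (batch, 16) embeddings
    d2 = ((z[:, None, :] - centroids[None]) ** 2).sum(-1)
    k = torch.exp(-d2 / (2 * sigma ** 2))             # RBF kernel per class
    return k.max(dim=-1).values                       # closeness to best class

x = torch.randn(4, 20)
scores = certainty(x)
print(scores < 0.5)   # True marks points far from every centroid (OOD-like)
```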