Detecting Concept Drift With Neural Network Model Uncertainty
- URL: http://arxiv.org/abs/2107.01873v1
- Date: Mon, 5 Jul 2021 08:56:36 GMT
- Title: Detecting Concept Drift With Neural Network Model Uncertainty
- Authors: Lucas Baier, Tim Schlör, Jakob Schöffer, Niklas Kühl
- Abstract summary: Uncertainty Drift Detection (UDD) is able to detect drifts without access to true labels.
In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model.
We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deployed machine learning models are confronted with the problem
of changing data over time, a phenomenon also called concept drift. While
existing approaches to concept drift detection already show convincing
results, they require true labels as a prerequisite for successful drift
detection. Especially in many real-world application scenarios, like the ones
covered in this work, true labels are scarce, and their acquisition is
expensive.
Therefore, we introduce a new algorithm for drift detection, Uncertainty Drift
Detection (UDD), which is able to detect drifts without access to true labels.
Our approach is based on the uncertainty estimates provided by a deep neural
network in combination with Monte Carlo Dropout. Structural changes over time
are detected by applying the ADWIN technique on the uncertainty estimates, and
detected drifts trigger a retraining of the prediction model. In contrast to
input data-based drift detection, our approach considers the effects of the
current input data on the properties of the prediction model rather than
detecting changes in the input data alone (which can lead to unnecessary
retraining). We show that UDD outperforms other state-of-the-art strategies on
two synthetic as well as ten real-world data sets for both regression and
classification tasks.
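The following minimal sketch illustrates the detection loop described above, assuming PyTorch for the model and the `river` library's ADWIN implementation (recent versions expose a `drift_detected` flag); the `retrain` routine and the input `stream` are hypothetical placeholders, not code from the paper.

```python
# Minimal UDD-style loop: Monte Carlo Dropout uncertainty fed into ADWIN.
import torch
import torch.nn as nn
from river import drift


def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor,
                           n_samples: int = 30) -> float:
    """Estimate predictive uncertainty by keeping dropout active at
    inference time and measuring the spread of repeated stochastic passes."""
    model.train()  # keeps dropout layers stochastic (Monte Carlo Dropout)
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.std(dim=0).mean().item()  # scalar uncertainty estimate


def udd_loop(model, stream, retrain):
    adwin = drift.ADWIN()  # adaptive windowing over the uncertainty signal
    for x in stream:
        adwin.update(mc_dropout_uncertainty(model, x))
        if adwin.drift_detected:    # structural change in model uncertainty
            model = retrain(model)  # a detected drift triggers retraining
    return model
```

Feeding ADWIN the model's uncertainty rather than the raw inputs is the key design choice: the detector only fires when incoming data actually affects the prediction model, which is what avoids the unnecessary retraining mentioned above.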
Related papers
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- A Neighbor-Searching Discrepancy-based Drift Detection Scheme for Learning Evolving Data [40.00357483768265]
This work presents a novel real concept drift detection method based on Neighbor-Searching Discrepancy.
The proposed method is able to detect real concept drift with high accuracy while ignoring virtual drift.
It can also indicate the direction of the classification boundary change by identifying the invasion or retreat of a certain class.
arXiv Detail & Related papers (2024-05-23T04:03:36Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- CADM: Confusion Model-based Detection Method for Real-drift in Chunk Data Stream [3.0885191226198785]
Concept drift detection has attracted considerable attention due to its importance in many real-world applications such as health monitoring and fault diagnosis.
Based on concept confusion, we propose a new approach to detect real-drift in chunk data streams with limited annotations.
arXiv Detail & Related papers (2023-03-25T08:59:27Z)
- Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto the features of the original data; the model's detection capability is largely boosted after watermarking.
arXiv Detail & Related papers (2022-10-27T06:12:32Z)
- Autoregressive based Drift Detection Method [0.0]
We propose ADDM, a new concept drift detection method based on autoregressive models.
Our results show that this new concept drift detection method outperforms the state-of-the-art drift detection methods.
arXiv Detail & Related papers (2022-03-09T14:36:16Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Bayesian Autoencoders for Drift Detection in Industrial Environments [69.93875748095574]
Autoencoders are unsupervised models that have been used for detecting anomalies in multi-sensor environments.
Anomalies can come either from real changes in the environment (real drift) or from faulty sensory devices (virtual drift); a generic reconstruction-error sketch follows this entry.
arXiv Detail & Related papers (2021-07-28T10:19:58Z)
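Not the paper's Bayesian formulation, but a generic sketch of the underlying idea, assuming PyTorch: an autoencoder trained on normal operating data flags potential drift when its reconstruction error rises. The architecture and error measure are illustrative assumptions.

```python
# Illustrative reconstruction-error drift signal (generic, not the paper's method).
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, n_features: int, n_hidden: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def reconstruction_error(model: Autoencoder, x: torch.Tensor) -> float:
    """Rising error on live data suggests the input distribution has
    changed (real drift) or a sensor has become faulty (virtual drift)."""
    with torch.no_grad():
        return torch.mean((model(x) - x) ** 2).item()
```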
- Automatic Learning to Detect Concept Drift [40.69280758487987]
We propose Meta-ADD, a novel framework that learns to classify concept drift by tracking the changing pattern of error rates.
Specifically, in the training phase, we extract meta-features based on the error rates of various concept drifts, after which a meta-detector is developed via a prototypical neural network.
In the detection phase, the learned meta-detector is fine-tuned to adapt to the corresponding data stream via stream-based active learning.
arXiv Detail & Related papers (2021-05-04T11:10:39Z)
- Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining [15.49323098362628]
We propose a framework for robust concept drift detection in the presence of adversarial and poisoning attacks.
We introduce a taxonomy of two types of adversarial concept drift, as well as a robust trainable drift detector.
We also introduce Relative Loss of Robustness, a novel measure for evaluating the performance of concept drift detectors under poisoning attacks.
arXiv Detail & Related papers (2020-09-20T18:46:31Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme and match the accuracy of softmax models (a hedged sketch of the distance-to-centroid idea follows this entry).
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
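A hedged sketch of the single-forward-pass idea, assuming PyTorch: inputs are scored by RBF-kernel similarity to learned class centroids, and points far from every centroid are rejected as out of distribution. The kernel width and threshold are illustrative assumptions; the paper's loss function and centroid-updating scheme are omitted.

```python
# Distance-to-centroid scoring with rejection (illustrative sketch).
import torch


def rbf_scores(features: torch.Tensor, centroids: torch.Tensor,
               sigma: float = 1.0) -> torch.Tensor:
    """Kernel similarity of each feature vector to each class centroid."""
    d2 = torch.cdist(features, centroids) ** 2  # squared Euclidean distances
    return torch.exp(-d2 / (2.0 * sigma ** 2))  # shape: (batch, n_classes)


def predict_or_reject(features: torch.Tensor, centroids: torch.Tensor,
                      threshold: float = 0.5) -> torch.Tensor:
    scores = rbf_scores(features, centroids)
    conf, labels = scores.max(dim=1)   # closest centroid wins the prediction
    labels[conf < threshold] = -1      # -1 marks rejected / OOD inputs
    return labels
```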