Adversarial Attacks for Drift Detection
- URL: http://arxiv.org/abs/2411.16591v1
- Date: Mon, 25 Nov 2024 17:25:00 GMT
- Title: Adversarial Attacks for Drift Detection
- Authors: Fabian Hinder, Valerie Vaquet, Barbara Hammer
- Abstract summary: This work studies the shortcomings of commonly used drift detection schemes.
We show how to construct data streams that are drifting without being detected.
In particular, we compute all possible adversarials for common detection schemes.
- Score: 6.234802839923542
- Abstract: Concept drift refers to the change of data distributions over time. While drift poses a challenge for learning models, requiring their continual adaptation, it is also relevant in system monitoring to detect malfunctions, system failures, and unexpected behavior. In the latter case, the robust and reliable detection of drifts is imperative. This work studies the shortcomings of commonly used drift detection schemes. We show how to construct data streams that are drifting without being detected. We refer to those as drift adversarials. In particular, we compute all possible adversarials for common detection schemes and underpin our theoretical findings with empirical evaluations.
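To make the notion of a drift adversarial concrete, consider a toy example (illustrative only; the paper characterizes all possible adversarials for common schemes rather than giving a single construction): a detector that compares the means of two adjacent windows is structurally blind to any distribution change that preserves the mean, such as a pure variance shift. A minimal sketch, with all parameter values chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_shift_detector(stream, window=250, threshold=0.3):
    """Flag drift when the means of two adjacent windows differ by more
    than `threshold` (a deliberately simple two-window test)."""
    alarms = []
    for t in range(window, len(stream) - window, window):
        ref = stream[t - window:t]
        cur = stream[t:t + window]
        if abs(ref.mean() - cur.mean()) > threshold:
            alarms.append(t)
    return alarms

# The stream below drifts: its variance quadruples halfway through. Because
# the mean is unchanged, the detector above cannot see the change.
n = 2000
stream = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=n // 2),
    rng.normal(loc=0.0, scale=2.0, size=n // 2),
])
print(mean_shift_detector(stream))  # typically []: the drift goes undetected
```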
Related papers
- Unsupervised Concept Drift Detection from Deep Learning Representations in Real-time [5.999777817331315]
Concept Drift is a phenomenon in which the underlying data distribution and statistical properties of a target domain change over time.
We propose DriftLens, an unsupervised real-time concept drift detection framework.
It works on unstructured data by exploiting the distribution distances of deep learning representations.
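As a rough sketch of this recipe (not DriftLens itself, whose distance estimators and thresholding differ): embed a trusted reference window and the current window with a frozen encoder, fit a Gaussian to each set of representations, and alarm on a large Frechet distance. The encoder outputs below are simulated stand-ins:

```python
import numpy as np
from scipy import linalg

def frechet_distance(emb_ref, emb_cur):
    """Frechet distance between Gaussians fitted to two embedding sets,
    one common choice of distribution distance on representations."""
    mu1, mu2 = emb_ref.mean(axis=0), emb_cur.mean(axis=0)
    cov1 = np.cov(emb_ref, rowvar=False)
    cov2 = np.cov(emb_cur, rowvar=False)
    covmean = linalg.sqrtm(cov1 @ cov2).real  # matrix square root
    return float(((mu1 - mu2) ** 2).sum() + np.trace(cov1 + cov2 - 2 * covmean))

# Stand-ins for the outputs of a frozen encoder on two data windows.
rng = np.random.default_rng(1)
reference = rng.normal(size=(500, 32))           # embeddings of trusted data
current = rng.normal(loc=0.5, size=(500, 32))    # embeddings after drift
print(frechet_distance(reference, current))      # large value -> alarm
```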
arXiv Detail & Related papers (2024-06-24T23:41:46Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety-critical nature of driving systems, no solution has been proposed for adapting MOT to domain shift in test-time conditions.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Are Concept Drift Detectors Reliable Alarming Systems? -- A Comparative Study [6.7961908135481615]
Concept drift impacts the performance of machine learning models.
In this study, we assess the reliability of concept drift detectors to identify drift in time.
Our findings aim to help practitioners understand which drift detector should be employed in different situations.
arXiv Detail & Related papers (2022-11-23T16:31:15Z)
- Ranking-Based Physics-Informed Line Failure Detection in Power Grids [66.0797334582536]
Real-time, accurate detection of potential line failures is the first step toward mitigating the impact of extreme weather and activating emergency controls.
The nonlinearity of power balance equations, increased generation uncertainty during extreme events, and limited grid observability compromise the efficiency of traditional data-driven failure detection methods.
This paper proposes a Physics-InformEd Line failure Detector (FIELD) that leverages grid topology information to reduce sample and time complexities and improve localization accuracy.
arXiv Detail & Related papers (2022-08-31T18:19:25Z)
- Bayesian Autoencoders for Drift Detection in Industrial Environments [69.93875748095574]
Autoencoders are unsupervised models which have been used for detecting anomalies in multi-sensor environments.
Anomalies can come either from real changes in the environment (real drift) or from faulty sensory devices (virtual drift).
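A minimal stand-in for this idea is sketched below: a linear autoencoder (i.e., PCA), rather than the paper's Bayesian autoencoders, which additionally quantify uncertainty to help separate real from virtual drift. The point it illustrates is simply that reconstruction error rises when the data distribution moves away from the training distribution:

```python
import numpy as np

class LinearAutoencoder:
    """PCA viewed as a linear autoencoder; hypothetical stand-in for the
    paper's Bayesian autoencoders (which also model uncertainty)."""
    def __init__(self, n_components):
        self.k = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[:self.k]
        return self

    def reconstruction_error(self, X):
        z = (X - self.mean_) @ self.components_.T        # encode
        x_hat = z @ self.components_ + self.mean_        # decode
        return ((X - x_hat) ** 2).mean(axis=1)

rng = np.random.default_rng(2)
normal = rng.normal(size=(1000, 8))                      # in-distribution data
ae = LinearAutoencoder(n_components=3).fit(normal)
drifted = rng.normal(loc=1.5, size=(200, 8))             # shifted distribution
print(ae.reconstruction_error(normal).mean())            # small
print(ae.reconstruction_error(drifted).mean())           # noticeably larger
```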
arXiv Detail & Related papers (2021-07-28T10:19:58Z)
- Detecting Concept Drift With Neural Network Model Uncertainty [0.0]
Uncertainty Drift Detection (UDD) is able to detect drifts without access to true labels.
In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model.
We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks.
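A hedged sketch of the uncertainty-monitoring idea follows. UDD itself obtains uncertainty estimates via Monte Carlo dropout and feeds them to an established drift detector; the simulated predictions and the simple window test below are illustrative stand-ins only:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution, averaged over Monte
    Carlo samples of shape (mc_samples, n_classes)."""
    p = probs.mean(axis=0)
    return float(-(p * np.log(p + 1e-12)).sum())

def uncertainty_window_test(scores, window=100, threshold=0.15):
    """Alarm when mean uncertainty in the newest window rises well above the
    reference window (a simplified stand-in for a full drift detector)."""
    return scores[-window:].mean() - scores[:window].mean() > threshold

# Simulated predictive distributions: confident before drift, diffuse after.
rng = np.random.default_rng(3)
confident = rng.dirichlet([20.0, 1.0, 1.0], size=300)
diffuse = rng.dirichlet([2.0, 2.0, 2.0], size=300)
scores = np.array([predictive_entropy(p[None]) for p in np.vstack([confident, diffuse])])
print(uncertainty_window_test(scores))  # True: drift flagged without labels
```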
arXiv Detail & Related papers (2021-07-05T08:56:36Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining [15.49323098362628]
We propose a framework for robust concept drift detection in the presence of adversarial and poisoning attacks.
We introduce a taxonomy of two types of adversarial concept drift, as well as a robust trainable drift detector.
We also introduce Relative Loss of Robustness - a novel measure for evaluating the performance of concept drift detectors under poisoning attacks.
arXiv Detail & Related papers (2020-09-20T18:46:31Z)
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
arXiv Detail & Related papers (2020-06-15T17:00:01Z)
- DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift [12.579800289829963]
When learning from streaming data, a change in the data distribution, also known as concept drift, can render a previously-learned model inaccurate.
We present an adaptive learning algorithm that extends previous drift-detection-based methods by incorporating drift detection into a broader stable-state/reactive-state process.
The algorithm is generic in its base learner and can be applied across a variety of supervised learning problems.
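The stable-state/reactive-state process can be sketched as follows: an illustrative skeleton with a toy base learner and arbitrary thresholds, not the paper's exact algorithm or its risk-competitive guarantees. In the stable state the incumbent model keeps learning; when its risk jumps, a fresh model is trained in parallel for a few batches, and the lower-risk model wins:

```python
import numpy as np

class MeanLearner:
    """Toy base learner: predicts the running mean; risk = mean squared error."""
    def __init__(self):
        self.n, self.mu = 0, 0.0
    def fit(self, batch):
        for x in batch:
            self.n += 1
            self.mu += (x - self.mu) / self.n
    def risk(self, batch):
        return float(np.mean((np.asarray(batch) - self.mu) ** 2))

def stable_reactive(batches, risk_threshold=3.0, reactive_len=3):
    model, challenger, countdown = MeanLearner(), None, 0
    for batch in batches:
        swapped = False
        if challenger is None:
            # Stable state: suspect drift when the incumbent's risk jumps.
            if model.n and model.risk(batch) > risk_threshold:
                challenger, countdown = MeanLearner(), reactive_len
        else:
            # Reactive state: train a fresh model alongside the incumbent.
            challenger.fit(batch)
            countdown -= 1
            if countdown == 0:
                # Risk-competitive choice: keep whichever model does better.
                if challenger.risk(batch) < model.risk(batch):
                    model, swapped = challenger, True
                challenger = None
        if not swapped:
            model.fit(batch)  # the incumbent keeps learning in both states
    return model

rng = np.random.default_rng(4)
stream = [rng.normal(0, 1, 50) for _ in range(10)] + [rng.normal(5, 1, 50) for _ in range(10)]
print(round(stable_reactive(stream).mu, 2))  # near 5: recovered after drift
```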
arXiv Detail & Related papers (2020-03-13T23:25:25Z)