Adversarial Concept Drift Detection under Poisoning Attacks for Robust
Data Stream Mining
- URL: http://arxiv.org/abs/2009.09497v1
- Date: Sun, 20 Sep 2020 18:46:31 GMT
- Title: Adversarial Concept Drift Detection under Poisoning Attacks for Robust
Data Stream Mining
- Authors: Łukasz Korycki and Bartosz Krawczyk
- Abstract summary: We propose a framework for robust concept drift detection in the presence of adversarial and poisoning attacks.
We introduce a taxonomy of two types of adversarial concept drift, as well as a robust trainable drift detector.
We also introduce Relative Loss of Robustness - a novel measure for evaluating the performance of concept drift detectors under poisoning attacks.
- Score: 15.49323098362628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous learning from streaming data is among the most challenging topics
in contemporary machine learning. In this domain, learning algorithms must
not only be able to handle massive volumes of rapidly arriving data, but also
adapt themselves to potential emerging changes. The phenomenon of the evolving
nature of data streams is known as concept drift. While there is a plethora of
methods designed for detecting its occurrence, all of them assume that the
drift is connected with underlying changes in the source of data. However, one
must consider the possibility of a malicious injection of false data that
simulates a concept drift. This adversarial setting assumes a poisoning attack
that may be conducted in order to damage the underlying classification system
by forcing adaptation to false data. Existing drift detectors are not capable
of differentiating between real and adversarial concept drift. In this paper,
we propose a framework for robust concept drift detection in the presence of
adversarial and poisoning attacks. We introduce a taxonomy of two types of
adversarial concept drift, as well as a robust trainable drift detector. It is
based on an augmented Restricted Boltzmann Machine with improved gradient
computation and energy function. We also introduce Relative Loss of Robustness
- a novel measure for evaluating the performance of concept drift detectors
under poisoning attacks. Extensive computational experiments, conducted on both
fully and sparsely labeled data streams, prove the high robustness and efficacy
of the proposed drift detection framework in adversarial scenarios.
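The abstract names two concrete components: a trainable drift detector built on an augmented Restricted Boltzmann Machine, and the Relative Loss of Robustness measure. The sketch below is a rough illustration only, not the paper's augmented gradient or energy formulation: it monitors the free energy of a plain CD-1 Bernoulli RBM on incoming instances and flags drift when it deviates from a sliding baseline. The class name, the z-score rule, and the relative_loss_of_robustness formula are hypothetical choices made for this sketch.

```python
# Illustrative sketch only: a plain Bernoulli RBM trained with standard CD-1,
# whose free energy on incoming instances is monitored for drift. The paper's
# augmented gradient computation and energy function are NOT reproduced here;
# class/function names and the z-score drift rule are assumptions.
import numpy as np


class RBMDriftDetector:
    def __init__(self, n_visible, n_hidden=16, lr=0.05, window=200, z_thresh=3.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr
        self.window = window
        self.z_thresh = z_thresh
        self.energies = []             # free energies of recent (trusted) instances

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def free_energy(self, v):
        # Standard RBM free energy: F(v) = -v.b - sum_j log(1 + exp(c_j + v.W[:, j]))
        return float(-v @ self.b - np.sum(np.log1p(np.exp(self.c + v @ self.W))))

    def partial_fit(self, v):
        # One CD-1 update on a single binary instance v (shape: [n_visible]).
        h_prob = self._sigmoid(self.c + v @ self.W)
        h_sample = (self.rng.random(h_prob.shape) < h_prob).astype(float)
        v_recon = self._sigmoid(self.b + h_sample @ self.W.T)
        h_recon = self._sigmoid(self.c + v_recon @ self.W)
        self.W += self.lr * (np.outer(v, h_prob) - np.outer(v_recon, h_recon))
        self.b += self.lr * (v - v_recon)
        self.c += self.lr * (h_prob - h_recon)
        self.energies.append(self.free_energy(v))
        if len(self.energies) > self.window:
            self.energies.pop(0)

    def drift_suspected(self, v):
        # Flag drift when the free energy of v is a z-score outlier w.r.t. the window.
        if len(self.energies) < self.window:
            return False
        mu = np.mean(self.energies)
        sigma = np.std(self.energies) + 1e-8
        return abs(self.free_energy(v) - mu) / sigma > self.z_thresh


def relative_loss_of_robustness(perf_clean, perf_poisoned):
    # One plausible reading of a relative-loss measure: the fraction of
    # clean-stream performance lost under poisoning (0 = fully robust).
    return (perf_clean - perf_poisoned) / perf_clean
```

In such a scheme, an instance flagged as an energy outlier would be withheld from adaptation rather than trusted, which is the general intuition behind filtering adversarial drift before it can poison the classifier.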
Related papers
- Adversarial Attacks for Drift Detection [6.234802839923542]
This work studies the shortcomings of commonly used drift detection schemes.
We show how to construct data streams that are drifting without being detected.
In particular, we compute all possible adversarials for common detection schemes.
arXiv Detail & Related papers (2024-11-25T17:25:00Z)
- Online Drift Detection with Maximum Concept Discrepancy [13.48123472458282]
We propose MCD-DD, a novel concept drift detection method based on maximum concept discrepancy.
Our method can adaptively identify varying forms of concept drift by contrastive learning of concept embeddings.
arXiv Detail & Related papers (2024-07-07T13:57:50Z)
- Methods for Generating Drift in Text Streams [49.3179290313959]
Concept drift is a frequent phenomenon in real-world datasets and corresponds to changes in data distribution over time.
This paper provides four textual drift generation methods to ease the production of datasets with labeled drifts.
Results show that all methods have their performance degraded right after the drifts, and the incremental SVM is the fastest to run and recover the previous performance levels.
arXiv Detail & Related papers (2024-03-18T23:48:33Z)
- How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple generic and generalisable framework for which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z)
- Are Concept Drift Detectors Reliable Alarming Systems? -- A Comparative Study [6.7961908135481615]
Concept drift impacts the performance of machine learning models.
In this study, we assess the reliability of concept drift detectors to identify drift in time.
Our findings aim to help practitioners understand which drift detector should be employed in different situations.
arXiv Detail & Related papers (2022-11-23T16:31:15Z)
- Real-time Object Detection for Streaming Perception [84.2559631820007]
Streaming perception is proposed to jointly evaluate latency and accuracy with a single metric for online video perception.
We build a simple and effective framework for streaming perception.
Our method achieves competitive performance on Argoverse-HD dataset and improves the AP by 4.9% compared to the strong baseline.
arXiv Detail & Related papers (2022-03-23T11:33:27Z)
- Detecting Concept Drift With Neural Network Model Uncertainty [0.0]
Uncertainty Drift Detection (UDD) is able to detect drifts without access to true labels.
In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model.
We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks.
arXiv Detail & Related papers (2021-07-05T08:56:36Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
- DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift [12.579800289829963]
When learning from streaming data, a change in the data distribution, also known as concept drift, can render a previously-learned model inaccurate.
We present an adaptive learning algorithm that extends previous drift-detection-based methods by incorporating drift detection into a broader stable-state/reactive-state process.
The algorithm is generic in its base learner and can be applied across a variety of supervised learning problems.
arXiv Detail & Related papers (2020-03-13T23:25:25Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)