Label Flipping Data Poisoning Attack Against Wearable Human Activity
Recognition System
- URL: http://arxiv.org/abs/2208.08433v1
- Date: Wed, 17 Aug 2022 17:52:13 GMT
- Title: Label Flipping Data Poisoning Attack Against Wearable Human Activity
Recognition System
- Authors: Abdur R. Shahid, Ahmed Imteaj, Peter Y. Wu, Diane A. Igoche, and
Tauhidul Alam
- Abstract summary: This paper presents the design of a label flipping data poisoning attack for a Human Activity Recognition (HAR) system.
Due to high noise and uncertainty in the sensing environment, such an attack poses a severe threat to the recognition system.
This paper sheds light on how to carry out the attack in practice through smartphone-based sensor data collection applications.
- Score: 0.5284812806199193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human Activity Recognition (HAR) is the problem of mapping sensor data to
human movement using an efficient machine learning (ML) approach. The HAR
systems rely on data from untrusted users, making them susceptible to data
poisoning attacks. In a poisoning attack, attackers manipulate the sensor
readings to contaminate the training set, misleading the HAR to produce
erroneous outcomes. This paper presents the design of a label flipping data
poisoning attack for a HAR system, where the label of a sensor reading is
maliciously changed in the data collection phase. Due to high noise and
uncertainty in the sensing environment, such an attack poses a severe threat to
the recognition system. Besides, vulnerability to label flipping attacks is
dangerous when activity recognition models are deployed in safety-critical
applications. This paper sheds light on how to carry out the attack in
practice through smartphone-based sensor data collection applications. To our
knowledge, this is an early research work that explores attacking HAR models
via label flipping poisoning. We implement the proposed attack and test
it on activity recognition models based on the following machine learning
algorithms: multi-layer perceptron, decision tree, random forest, and XGBoost.
Finally, we evaluate the effectiveness of a K-nearest neighbors (KNN)-based
defense mechanism against the proposed attack.
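As an illustration of the attack surface described above, the following is a minimal sketch in Python, assuming synthetic accelerometer-style features: a fraction of training labels is flipped to a wrong class, a classifier is trained on clean versus poisoned labels, and a KNN-based sanitization step (one common instantiation of a KNN defense, not necessarily the paper's exact mechanism) filters samples whose labels disagree with most of their neighbors. The feature generator, flip rate, neighborhood size, and the random forest model are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the authors' implementation) of a label flipping
# poisoning attack on HAR-style training data and a KNN-based sanitization
# defense. Feature generator, flip rate, k, and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_synthetic_har(n_per_class=300, n_features=12, n_classes=4):
    """Toy stand-in for wearable sensor features (e.g., accelerometer statistics)."""
    X, y = [], []
    for c in range(n_classes):
        center = rng.normal(loc=2.0 * c, scale=0.5, size=n_features)
        X.append(center + rng.normal(scale=1.0, size=(n_per_class, n_features)))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

def flip_labels(y, flip_rate=0.2, n_classes=4):
    """Label flipping attack: reassign a fraction of labels to a wrong class."""
    y_poisoned = y.copy()
    flipped = rng.choice(len(y), size=int(flip_rate * len(y)), replace=False)
    for i in flipped:
        y_poisoned[i] = rng.choice([c for c in range(n_classes) if c != y[i]])
    return y_poisoned, flipped

def knn_sanitize(X, y, k=10, agreement=0.5):
    """KNN-based defense: drop samples whose label disagrees with the
    majority of their k nearest neighbors (a common sanitization heuristic)."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nbrs.kneighbors(X, return_distance=False)[:, 1:]  # drop self-match
    keep = np.array([(y[idx[i]] == y[i]).mean() >= agreement for i in range(len(y))])
    return X[keep], y[keep]

X, y = make_synthetic_har()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
y_poi, _ = flip_labels(y_tr, flip_rate=0.2)

# Compare a model trained on clean labels against one trained on flipped labels.
for name, labels in [("clean", y_tr), ("poisoned", y_poi)]:
    model = RandomForestClassifier(random_state=0).fit(X_tr, labels)
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))

# Apply the KNN-based sanitization before retraining.
X_san, y_san = knn_sanitize(X_tr, y_poi)
model = RandomForestClassifier(random_state=0).fit(X_san, y_san)
print("sanitized", round(accuracy_score(y_te, model.predict(X_te)), 3))
```

In the paper's evaluation the same comparison is run across multi-layer perceptron, decision tree, random forest, and XGBoost models; the single random forest above is only a stand-in.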
Related papers
- Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning [4.475514208635884]
This study systematically profiles the (in)feasibility of federated learning (FL) for privacy-preserving cyber threat detection in terms of effectiveness, byzantine resilience, and efficiency.
It shows that FL-trained detection models can achieve a performance that is comparable to centrally trained counterparts.
Under a realistic threat model, FL turns out to be resistant to both data poisoning and model poisoning attacks.
arXiv Detail & Related papers (2024-04-08T01:16:56Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring changes in the data manipulation to changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications [2.978389704820221]
Adversarial attacks such as poisoning attacks have attracted the attention of many machine learning researchers.
Traditionally, poisoning attacks attempt to inject adversarial training data in order to manipulate the trained model.
In federated learning (FL), data poisoning attacks can be generalized to model poisoning attacks, which cannot be detected by simpler methods due to the lack of access to local training data by the detector.
We propose a novel framework for detecting poisoning attacks in FL, which employs a reference model based on a public dataset and an auditor model to detect malicious updates.
arXiv Detail & Related papers (2022-07-18T10:10:45Z)
- PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning [69.70602220716718]
We propose PoisonedEncoder, a data poisoning attack to contrastive learning.
In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data.
We evaluate five defenses against PoisonedEncoder, including one pre-processing, three in-processing, and one post-processing defenses.
arXiv Detail & Related papers (2022-05-13T00:15:44Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend against common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks [22.900501880865658]
Backdoor attacks pose a new threat to Deep Neural Networks (DNNs).
We propose PiDAn, an algorithm based on coherence optimization that purifies the poisoned data.
Our PiDAn algorithm can detect more than 90% of infected classes and identify 95% of poisoned samples.
arXiv Detail & Related papers (2022-03-17T12:37:21Z)
- Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks [7.150136251781658]
Poisoning attacks are a category of adversarial machine learning threats.
In this paper, we propose CAE, a Classification Auto-Encoder based detector against poisoned data.
We show that an enhanced version of CAE (called CAE+) does not have to employ a clean data set to train the defense model.
arXiv Detail & Related papers (2021-08-09T17:46:52Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
- Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks [74.88735178536159]
Data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks.
We observe that data poisoning and backdoor attacks are highly sensitive to variations in the testing setup.
We apply rigorous tests to determine the extent to which we should fear them.
arXiv Detail & Related papers (2020-06-22T18:34:08Z)