Explainable Label-flipping Attacks on Human Emotion Assessment System
- URL: http://arxiv.org/abs/2302.04109v1
- Date: Wed, 8 Feb 2023 15:04:27 GMT
- Title: Explainable Label-flipping Attacks on Human Emotion Assessment System
- Authors: Zhibo Zhang, Ahmed Y. Al Hammadi, Ernesto Damiani, and Chan Yeob Yeun
- Abstract summary: This paper provides an attacker's point of view on data poisoning assaults that use label-flipping.
The proposed data poisoning attacks based on label-flipping are successful regardless of the model.
XAI techniques are used to explain the data poison attacks on EEG signal-based human emotion evaluation systems.
- Score: 4.657100266392171
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper's main goal is to provide an attacker's point of view on data
poisoning assaults that use label-flipping during the training phase of systems
that use electroencephalogram (EEG) signals to evaluate human emotion. To
attack different machine learning classifiers such as Adaptive Boosting
(AdaBoost) and Random Forest dedicated to the classification of 4 different
human emotions using EEG signals, this paper proposes two scenarios of
label-flipping methods. The results of the studies show that the proposed data
poisoning attacks based on label-flipping are successful regardless of the model,
but different models show different degrees of resistance to the assaults. In
addition, numerous Explainable Artificial Intelligence (XAI) techniques are
used to explain the data poison attacks on EEG signal-based human emotion
evaluation systems.
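The abstract does not spell out the two flipping scenarios or the specific XAI techniques, so the following is a minimal Python sketch of the general idea only, assuming random label flipping on synthetic stand-in features and permutation importance as the explanation probe (scikit-learn; all names, rates, and parameters here are illustrative assumptions, not the authors' implementation).

import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted EEG features; four emotion classes as in the paper.
X, y = make_classification(n_samples=2000, n_features=32, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(labels, rate, n_classes=4):
    """Reassign a fraction `rate` of the training labels to a different class."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    for i in idx:
        poisoned[i] = rng.choice([c for c in range(n_classes) if c != poisoned[i]])
    return poisoned

for name, model in [("RandomForest", RandomForestClassifier(random_state=0)),
                    ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    for rate in (0.0, 0.1, 0.3):
        # Train on (possibly) poisoned labels, evaluate on clean test labels.
        clf = clone(model).fit(X_tr, flip_labels(y_tr, rate))
        acc = clf.score(X_te, y_te)
        # Permutation importance as a simple model-agnostic explanation probe:
        # shifts in the top-ranked features indicate how poisoning distorts
        # what the classifier relies on.
        imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
        top = np.argsort(imp.importances_mean)[::-1][:3]
        print(f"{name} flip={rate:.0%}: test accuracy={acc:.3f}, top features={top.tolist()}")

Comparing accuracy and top-ranked features across flipping rates mirrors the abstract's observation that both classifiers degrade under poisoning, though with different degrees of resistance.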
Related papers
- Poisoning Attacks and Defenses in Recommender Systems: A Survey [39.25402612579371]
Modern recommender systems (RS) have profoundly enhanced user experience across digital platforms, yet they face significant threats from poisoning attacks.
This survey presents a unique perspective by examining these threats through the lens of an attacker.
We detail a systematic pipeline that encompasses four stages of a poisoning attack: setting attack goals, assessing attacker capabilities, analyzing victim architecture, and implementing poisoning strategies.
arXiv Detail & Related papers (2024-06-03T06:08:02Z) - SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Explainable Data Poison Attacks on Human Emotion Evaluation Systems
based on EEG Signals [3.8523826400372783]
This paper explains data poisoning attacks that use label-flipping during the training stage of electroencephalogram (EEG) signal-based human emotion evaluation systems.
EEG signal-based human emotion evaluation systems have shown several vulnerabilities to data poison attacks.
arXiv Detail & Related papers (2023-01-17T14:44:46Z) - Label Flipping Data Poisoning Attack Against Wearable Human Activity
Recognition System [0.5284812806199193]
This paper presents the design of a label flipping data poisoning attack for a Human Activity Recognition (HAR) system.
Due to high noise and uncertainty in the sensing environment, such an attack poses a severe threat to the recognition system.
This paper sheds light on how to carry out the attack in practice through smartphone-based sensor data collection applications.
arXiv Detail & Related papers (2022-08-17T17:52:13Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks [17.646155241759743]
De-Pois is an attack-agnostic defense against poisoning attacks.
We implement four types of poisoning attacks and evaluate De-Pois with five typical defense methods.
arXiv Detail & Related papers (2021-05-08T04:47:37Z) - Adversarial Attack Attribution: Discovering Attributable Signals in
Adversarial ML Attacks [0.7883722807601676]
Even production systems, such as self-driving cars and ML-as-a-service offerings, are susceptible to adversarial inputs.
Can perturbed inputs be attributed to the methods used to generate the attack?
We introduce the concept of adversarial attack attribution and create a simple supervised learning experimental framework to examine the feasibility of discovering attributable signals in adversarial attacks.
arXiv Detail & Related papers (2021-01-08T08:16:41Z) - Poison Attacks against Text Datasets with Conditional Adversarially
Regularized Autoencoder [78.01180944665089]
This paper demonstrates a fatal vulnerability in natural language inference (NLI) and text classification systems.
We present a 'backdoor poisoning' attack on NLP models.
arXiv Detail & Related papers (2020-10-06T13:03:49Z) - A Novel Transferability Attention Neural Network Model for EEG Emotion
Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns the emotional discriminative information by highlighting the transferable EEG brain regions data and samples adaptively.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.