Defending Against Adversarial Attack in ECG Classification with
Adversarial Distillation Training
- URL: http://arxiv.org/abs/2203.09487v1
- Date: Mon, 14 Mar 2022 06:57:46 GMT
- Title: Defending Against Adversarial Attack in ECG Classification with
Adversarial Distillation Training
- Authors: Jiahao Shao, Shijia Geng, Zhaoji Fu, Weilun Xu, Tong Liu, Shenda Hong
- Abstract summary: In clinics, doctors rely on electrocardiograms (ECGs) to assess severe cardiac disorders.
Deep neural networks (DNNs) can be used to analyze these signals because of their high accuracy.
- Score: 6.991425195643765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In clinics, doctors rely on electrocardiograms (ECGs) to assess severe
cardiac disorders. Owing to the development of technology and the increase in
health awareness, ECG signals are currently obtained by using medical and
commercial devices. Deep neural networks (DNNs) can be used to analyze these
signals because of their high accuracy. However, researchers have found that
adversarial attacks can significantly reduce the accuracy of DNNs. Studies have
been conducted to defend ECG-based DNNs against traditional adversarial
attacks, such as projected gradient descent (PGD), and against smooth
adversarial perturbation (SAP), which targets ECG classification; however, to
the best of our knowledge, no study has comprehensively explored defenses
against adversarial attacks targeting ECG classification. Thus, we conducted a
series of experiments to explore the effects of defense methods against
white-box and black-box adversarial attacks targeting ECG classification, and
we found that some common defense methods performed well against these attacks.
In addition, we propose a new defense method called Adversarial Distillation
Training (ADT), which is derived from defensive distillation and can
effectively improve the generalization performance of DNNs. The results show
that our method defended against adversarial attacks targeting ECG
classification more effectively than the baseline methods, namely adversarial
training, defensive distillation, Jacobian regularization, and noise-to-signal
ratio regularization. Furthermore, we found that our method performed better
against PGD attacks with low noise levels, which indicates stronger
robustness.
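To make the threat model and the distillation idea concrete, here is a minimal PyTorch sketch of (a) an L-infinity PGD attack on a toy 1-D ECG classifier and (b) a defensive-distillation-style training step of the kind ADT builds on. The architecture (ECGNet), loss weighting, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECGNet(nn.Module):
    """Toy 1-D CNN standing in for an ECG classifier (illustrative only)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, signal_length)
        return self.fc(self.features(x).squeeze(-1))

def pgd_attack(model, x, y, eps=0.01, alpha=0.002, steps=20):
    """Standard L-infinity PGD: step along the sign of the loss gradient
    and project back into the eps-ball around the original signal."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)
    return x_adv.detach()

def distillation_step(teacher, student, x, y, opt, T=10.0, w=0.9):
    """One defensive-distillation-style step: the student matches the
    teacher's temperature-softened outputs. ADT builds on this idea; the
    exact ADT loss is not given in the abstract, so this is an assumption."""
    with torch.no_grad():
        soft = F.softmax(teacher(x) / T, dim=1)
    logits = student(x)
    kd = F.kl_div(F.log_softmax(logits / T, dim=1), soft,
                  reduction="batchmean") * T * T
    loss = w * kd + (1 - w) * F.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Sweeping eps in pgd_attack across a few values reproduces the kind of low-noise-level comparison the abstract reports.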
Related papers
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors [0.0]
- Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors [0.0]
An adaptive attack is one where the attacker is aware of the defenses and adapts their strategy accordingly.
Our proposed method leverages adversarial training to reinforce the ability to detect attacks, without compromising clean accuracy.
Experimental evaluations on the CIFAR-10 and SVHN datasets demonstrate that our proposed algorithm significantly improves a detector's ability to accurately identify adaptive adversarial attacks.
arXiv Detail & Related papers (2024-04-18T12:13:09Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - A Regularization Method to Improve Adversarial Robustness of Neural
Networks for ECG Signal Classification [1.8579693774597703]
Electrocardiogram (ECG) is the most widely used diagnostic tool to monitor the condition of the human heart.
Deep neural network (DNN) interpretation of ECG signals can be fully automated to identify potential abnormalities in a patient's heart in a fraction of a second.
However, DNNs are highly vulnerable to adversarial noise: subtle changes to the input that can lead to a wrong class-label prediction.
We propose a regularization method that improves robustness from the perspective of the noise-to-signal ratio (NSR) for ECG signal classification.
arXiv Detail & Related papers (2021-10-19T06:22:02Z) - ECG-ATK-GAN: Robustness against Adversarial Attacks on ECG using
- ECG-ATK-GAN: Robustness against Adversarial Attacks on ECG using Conditional Generative Adversarial Networks [12.833916980261368]
Deep neural networks (DNNs) are vulnerable to adversarial attacks, which can cause ECG signals to be misclassified.
We introduce a novel Conditional Generative Adversarial Network (GAN) that is robust against adversarially attacked ECG signals.
arXiv Detail & Related papers (2021-10-17T08:44:17Z) - Application of Adversarial Examples to Physical ECG Signals [0.0]
- Application of Adversarial Examples to Physical ECG Signals [0.0]
We introduce adversarial beats: perturbations tailored specifically against an electrocardiogram (ECG) beat-by-beat classification system.
We first formulate an algorithm to generate adversarial examples for the ECG classification neural network model, and study its attack success rate.
We then mount a hardware attack by designing a malicious signal generator which injects adversarial beats into ECG sensor readings.
arXiv Detail & Related papers (2021-08-20T02:30:17Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - Guided Adversarial Attack for Evaluating and Enhancing Adversarial
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z) - Stochastic Security: Adversarial Defense Using Long-Run Dynamics of
- Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models [82.03536496686763]
The vulnerability of deep networks to adversarial attacks is a central problem for deep learning from the perspective of both cognition and security.
We focus on defending naturally-trained classifiers using Markov Chain Monte Carlo (MCMC) sampling with an Energy-Based Model (EBM) for adversarial purification.
Our contributions are: 1) an improved method for training EBMs with realistic long-run MCMC samples; 2) an Expectation-Over-Transformation (EOT) defense that resolves theoretical ambiguities for stochastic defenses; and 3) state-of-the-art adversarial defense for naturally trained classifiers and a competitive defense overall.
arXiv Detail & Related papers (2020-05-27T17:53:36Z)
This list was automatically generated from the titles and abstracts of the papers on this site.