Application of Adversarial Examples to Physical ECG Signals
- URL: http://arxiv.org/abs/2108.08972v1
- Date: Fri, 20 Aug 2021 02:30:17 GMT
- Title: Application of Adversarial Examples to Physical ECG Signals
- Authors: Taiga Ono (1), Takeshi Sugawara (2), Jun Sakuma (3), Tatsuya Mori (1 and 4) ((1) Waseda University, (2) The University of Electro-Communications, (3) University of Tsukuba, (4) RIKEN AIP)
- Abstract summary: We introduce adversarial beats, which are perturbations tailored specifically against an electrocardiogram (ECG) beat-by-beat classification system.
We first formulate an algorithm to generate adversarial examples for the ECG classification neural network model, and study its attack success rate.
We then mount a hardware attack by designing a malicious signal generator which injects adversarial beats into ECG sensor readings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work aims to assess the reality and feasibility of adversarial attacks against cardiac diagnosis systems powered by machine learning algorithms. To this end, we introduce adversarial beats, which are adversarial perturbations tailored specifically against an electrocardiogram (ECG) beat-by-beat classification system. We first formulate an algorithm that generates adversarial examples for the ECG classification neural network model and study its attack success rate. Next, to evaluate its feasibility in a physical environment, we mount a hardware attack by designing a malicious signal generator that injects adversarial beats into ECG sensor readings. To the best of our knowledge, our work is the first to evaluate the effectiveness of adversarial examples for ECGs in a physical setup. Our real-world experiments demonstrate that adversarial beats successfully manipulated the diagnosis results 3-5 times out of 40 attempts over the course of 2 minutes. Finally, we discuss the overall feasibility and impact of the attack by clearly defining the motives and constraints of expected attackers alongside our experimental results.
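The abstract describes two technical steps: crafting adversarial beats against a beat-by-beat classifier, and injecting them at the sensor. As a rough illustration of the first step only, below is a minimal PGD-style sketch in PyTorch; the stand-in CNN, beat length, class count, and the eps/alpha/steps budget are all illustrative assumptions, not the paper's actual model or algorithm.

```python
# Minimal sketch: PGD-style adversarial perturbation of a single ECG beat
# against a stand-in beat classifier. All hyperparameters are assumptions.
import torch
import torch.nn as nn

BEAT_LEN = 256      # samples per beat (assumed)
NUM_CLASSES = 5     # e.g. AAMI beat classes (assumed)

class BeatClassifier(nn.Module):
    """Stand-in 1-D CNN; the paper's actual network is not reproduced here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, NUM_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

def craft_adversarial_beat(model, beat, true_label,
                           eps=0.05, alpha=0.01, steps=40):
    """Projected gradient descent within an L-infinity ball of radius eps."""
    model.eval()
    delta = torch.zeros_like(beat, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(beat + delta), true_label)
        loss.backward()                      # gradient w.r.t. the perturbation
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)          # keep the perturbation small
        delta.grad.zero_()
    return (beat + delta).detach()

# Toy usage on one random "beat" with batch and channel dims (1, 1, BEAT_LEN).
model = BeatClassifier()
beat = torch.randn(1, 1, BEAT_LEN)
label = torch.tensor([0])
adv_beat = craft_adversarial_beat(model, beat, label)
print("prediction flipped:", model(adv_beat).argmax(1).item() != label.item())
```

In the physical attack, a perturbation like this would have to survive conversion to an analog waveform and summation with the victim's live signal at the sensor, which is what the paper's malicious signal generator evaluates.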
Related papers
- Rethinking Targeted Adversarial Attacks For Neural Machine Translation [56.10484905098989]
This paper presents a new setting for NMT targeted adversarial attacks that could lead to reliable attacking results.
Under the new setting, it then proposes a Targeted Word Gradient adversarial Attack (TWGA) method to craft adversarial examples.
Experimental results demonstrate that our proposed setting could provide faithful attacking results for targeted adversarial attacks on NMT systems.
arXiv Detail & Related papers (2024-07-07T10:16:06Z) - NERULA: A Dual-Pathway Self-Supervised Learning Framework for Electrocardiogram Signal Analysis [5.8961928852930034]
We present NERULA, a self-supervised framework designed for single-lead ECG signals.
NERULA's dual-pathway architecture combines ECG reconstruction and non-contrastive learning to extract detailed cardiac features.
We show that combining generative and discriminative paths during training leads to better results, outperforming state-of-the-art self-supervised learning benchmarks on various tasks.
arXiv Detail & Related papers (2024-05-21T14:01:57Z) - Hierarchical Deep Learning with Generative Adversarial Network for Automatic Cardiac Diagnosis from ECG Signals [2.5008947886814186]
We propose a two-level hierarchical deep learning framework with Generative Adversarial Network (GAN) for automatic diagnosis of ECG signals.
The first-level model is composed of a Memory-Augmented Deep auto-Encoder with GAN, which aims to differentiate abnormal signals from normal ECGs for anomaly detection.
The second-level learning aims at robust multi-class classification for different arrhythmias identification.
arXiv Detail & Related papers (2022-10-19T12:29:05Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training [6.991425195643765]
In clinics, doctors rely on electrocardiograms (ECGs) to assess severe cardiac disorders.
Deep neural networks (DNNs) can be used to analyze these signals because of their high classification accuracy.
arXiv Detail & Related papers (2022-03-14T06:57:46Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - ECG-ATK-GAN: Robustness against Adversarial Attacks on ECG using Conditional Generative Adversarial Networks [12.833916980261368]
Deep neural networks (DNNs) are vulnerable to adversarial attacks, which can cause ECG signals to be misclassified.
We introduce a novel Conditional Generative Adversarial Network (GAN) that is robust against adversarially attacked ECG signals.
arXiv Detail & Related papers (2021-10-17T08:44:17Z) - ECG-Adv-GAN: Detecting ECG Adversarial Examples with Conditional Generative Adversarial Networks [4.250203361580781]
Deep neural networks have become a popular technique for tracing ECG signals, outperforming human experts.
GAN architectures have been employed in recent works to synthesize adversarial ECG signals to augment existing training data.
We propose a novel Conditional Generative Adversarial Network to simultaneously generate ECG signals for different categories and detect cardiac abnormalities.
arXiv Detail & Related papers (2021-07-16T02:53:14Z) - Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for automatic speaker verification (ASV) without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z) - EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [68.01125081367428]
Recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks.
This article proposes to use narrow period pulses for poisoning attacks on EEG-based BCIs, which is implementable in practice and has never been considered before.
arXiv Detail & Related papers (2020-10-30T20:49:42Z) - Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator against a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)