Adversarial Robustness Study of Convolutional Neural Network for Lumbar
Disk Shape Reconstruction from MR images
- URL: http://arxiv.org/abs/2102.02885v1
- Date: Thu, 4 Feb 2021 20:57:49 GMT
- Title: Adversarial Robustness Study of Convolutional Neural Network for Lumbar
Disk Shape Reconstruction from MR images
- Authors: Jiasong Chen, Linchen Qian, Timur Urakov, Weiyong Gu, Liang Liang
- Abstract summary: In this study, we investigated the in-distribution (IND) and out-of-distribution (OOD) adversarial robustness of a representative CNN for lumbar disk shape reconstruction from spine MR images.
The results show that IND adversarial training can improve the CNN robustness to IND adversarial attacks, and larger training datasets may lead to higher IND robustness.
- Score: 1.2809525640002362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning technologies using deep neural networks (DNNs), especially
convolutional neural networks (CNNs), have made automated, accurate, and fast
medical image analysis a reality for many applications, and some DNN-based
medical image analysis systems have even been FDA-cleared. Despite the
progress, challenges remain to build DNNs as reliable as human expert doctors.
It is known that DNN classifiers may not be robust to noise: by adding a small
amount of noise to an input image, a DNN classifier may make a wrong
classification of the noisy image (i.e., in-distribution adversarial sample),
whereas it makes the right classification of the clean image. Another issue is
caused by out-of-distribution samples that are not similar to any sample in the
training set. Given such a sample as input, the output of a DNN will become
meaningless. In this study, we investigated the in-distribution (IND) and
out-of-distribution (OOD) adversarial robustness of a representative CNN for
lumbar disk shape reconstruction from spine MR images. To study the
relationship between dataset size and robustness to IND adversarial attacks, we
used a data augmentation method to create training sets with different levels
of shape variations. We utilized the PGD-based algorithm for IND adversarial
attacks and extended it for OOD adversarial attacks to generate OOD adversarial
samples for model testing. The results show that IND adversarial training can
improve the CNN robustness to IND adversarial attacks, and larger training
datasets may lead to higher IND robustness. However, it is still a challenge to
defend against OOD adversarial attacks.
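The PGD-based attack described in the abstract can be sketched on a toy differentiable model. The NumPy stand-in below, with its linear "classifier", weights, epsilon, and step size, is purely illustrative (the paper attacks a CNN for shape reconstruction); only the PGD mechanics (gradient-sign ascent plus projection onto an L-infinity ball) carry over.

```python
import numpy as np

# Illustrative PGD sketch: maximize the loss of a toy logistic "classifier"
# while keeping the perturbation inside an L-infinity ball of radius eps.
rng = np.random.default_rng(0)
w = rng.normal(size=8)          # fixed model weights (stand-in for a CNN)
x_clean = rng.normal(size=8)    # "clean image", flattened
y = 1.0                         # true label

def loss_grad(x):
    # gradient of binary cross-entropy w.r.t. the input x
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * w

eps, alpha, steps = 0.1, 0.02, 20
x_adv = x_clean.copy()
for _ in range(steps):
    x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))      # ascend the loss
    x_adv = x_clean + np.clip(x_adv - x_clean, -eps, eps)  # project to the ball
```

The projection step is what makes the sample "in-distribution adversarial": the perturbed input stays within eps of a real sample. The paper's OOD extension instead searches outside the training distribution, which this sketch does not cover.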
Related papers
- Defending Spiking Neural Networks against Adversarial Attacks through Image Purification [20.492531851480784]
Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and machine learning.
SNNs are vulnerable to adversarial attacks like convolutional neural networks.
We propose a biologically inspired methodology to enhance the robustness of SNNs.
arXiv Detail & Related papers (2024-04-26T00:57:06Z)
- Adversarial Training Using Feedback Loops [1.6114012813668932]
Deep neural networks (DNNs) are highly susceptible to adversarial attacks due to limited generalizability.
This paper proposes a new robustification approach based on control theory.
The novel adversarial training approach based on the feedback control architecture is called Feedback Looped Adversarial Training (FLAT)
arXiv Detail & Related papers (2023-08-23T02:58:02Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
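A mini-batch data selection strategy of the kind this summary describes can be sketched as follows; the specific criterion here (perturb only the k highest-loss samples in the batch) is an assumption for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

# Hypothetical loss-based selection inside one mini-batch: only the k
# hardest samples are flagged for adversarial perturbation, trading off
# robustness against standard accuracy.
rng = np.random.default_rng(1)
losses = rng.uniform(size=32)        # per-sample losses for one mini-batch
k = 8                                # adversarial budget per batch

hard_idx = np.argsort(losses)[-k:]   # indices of the k highest-loss samples
adversarial_mask = np.zeros(32, dtype=bool)
adversarial_mask[hard_idx] = True    # perturb only these samples
```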
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
- Detection of out-of-distribution samples using binary neuron activation patterns [0.26249027950824505]
The ability to identify previously unseen inputs as novel is crucial in safety-critical applications such as self-driving cars, unmanned aerial vehicles, and robots.
Existing approaches to detect OOD samples treat a DNN as a black box and evaluate the confidence score of the output predictions.
In this work, we introduce a novel method for OOD detection. Our method is motivated by theoretical analysis of neuron activation patterns (NAP) in ReLU-based architectures.
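The idea of binary neuron activation patterns can be sketched for a one-layer ReLU model; the Hamming-distance decision rule and threshold below are simplified assumptions, not the paper's method.

```python
import numpy as np

# Illustrative NAP-style OOD check: record which ReLU units fire for
# training inputs, then flag a test input as OOD if its binary pattern
# is far (in Hamming distance) from every pattern seen during training.
rng = np.random.default_rng(2)
W = rng.normal(size=(16, 4))  # weights of a toy one-layer ReLU net

def nap(x):
    # binary activation pattern: which of the 16 units fire for input x
    return (W @ x) > 0

train_naps = np.array([nap(rng.normal(size=4)) for _ in range(200)])

def is_ood(x, threshold=3):
    # minimum Hamming distance from x's pattern to any training pattern
    dists = np.count_nonzero(train_naps != nap(x), axis=1)
    return dists.min() > threshold
```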
arXiv Detail & Related papers (2022-12-29T11:42:46Z)
- Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification [6.594522185216161]
We introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of standard natural accuracy and robustness.
Specifically, we define a sensible adversary which is useful for learning a robust model while keeping high natural accuracy.
We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation.
arXiv Detail & Related papers (2022-05-20T22:57:44Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
Under a minimal computational overhead, a dilation architecture is expected to be friendly with the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Topological Measurement of Deep Neural Networks Using Persistent Homology [0.7919213739992464]
The inner representations of deep neural networks (DNNs) are difficult to interpret.
Persistent homology (PH) was employed for investigating the complexities of trained DNNs.
arXiv Detail & Related papers (2021-06-06T03:06:15Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design of recurrent generative feedback on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.