Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs
for Medical Image Segmentation and Detection
- URL: http://arxiv.org/abs/2206.01736v1
- Date: Thu, 2 Jun 2022 20:17:53 GMT
- Title: Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs
for Medical Image Segmentation and Detection
- Authors: Linhai Ma and Liang Liang
- Abstract summary: It is known that Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.
The standard adversarial training (SAT) method has a severe issue that limits its practical use.
We show that our AMAT method outperforms the SAT method in adversarial robustness on noisy data and prediction accuracy on clean data.
- Score: 2.2977141788872366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent methods based on Deep Neural Networks (DNNs) have reached high
accuracy for medical image analysis, including the three basic tasks:
segmentation, landmark detection, and object detection. It is known that DNNs
are vulnerable to adversarial attacks, and the adversarial robustness of DNNs
could be improved by adding adversarial noises to training data (i.e.,
adversarial training). In this study, we show that the standard adversarial
training (SAT) method has a severe issue that limits its practical use: it
generates a fixed level of noise for DNN training, and it is difficult for the
user to choose an appropriate noise level, because a high noise level may lead
to a large reduction in model performance, and a low noise level may have
little effect. To resolve this issue, we have designed a novel adaptive-margin
adversarial training (AMAT) method that generates adaptive adversarial noises
for DNN training, which are dynamically tailored for each individual training
sample. We have applied our AMAT method to state-of-the-art DNNs for the three
basic tasks, using five publicly available datasets. The experimental results
demonstrate that our AMAT method outperforms the SAT method in adversarial
robustness on noisy data and prediction accuracy on clean data. Please contact
the author for the source code.
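Since the code is available only on request, the per-sample adaptive-noise idea can be illustrated with a small sketch. Everything below (the linear toy model, the fixed-step budget search, and names such as `adaptive_noise_level`) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def adaptive_noise_level(w, b, x, y, eps_step=0.01, eps_max=1.0):
    """Grow the perturbation budget for one sample until the toy linear
    classifier's prediction flips, i.e. until the sample's decision margin
    is reached. This stands in for AMAT's per-sample adaptive noise; the
    linear model and the fixed-step search are illustrative assumptions."""
    # FGSM-style direction for a linear score s(x) = w.x + b, label y in {-1, +1}:
    grad_sign = -y * np.sign(w)          # direction that shrinks the margin
    eps = 0.0
    while eps < eps_max:
        eps += eps_step
        x_adv = x + eps * grad_sign
        if y * (w @ x_adv + b) <= 0:     # prediction flipped: margin reached
            return eps, x_adv
    return eps_max, x + eps_max * grad_sign

# Toy usage: each sample gets its own noise level instead of a fixed one.
w, b = np.array([1.0, -2.0]), 0.0
for x, y in [(np.array([0.5, -0.5]), 1), (np.array([3.0, -3.0]), 1)]:
    eps, _ = adaptive_noise_level(w, b, x, y)
    print(f"margin-adapted eps = {eps:.2f}")
```

A sample sitting close to the decision boundary receives a small noise level, while a sample far from it is either perturbed more strongly or capped at `eps_max`, which is the behavior SAT's single fixed noise level cannot provide.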
Related papers
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Unsupervised Noise adaptation using Data Simulation [21.866522173387715]
We propose a generative adversarial network based method to efficiently learn a converse clean-to-noisy transformation.
Experimental results show that our method effectively mitigates the domain mismatch between training and test sets.
arXiv Detail & Related papers (2023-02-23T12:57:20Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution for defending the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
- AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient [12.118084418840152]
Adversarial training is exploited to develop a robust Deep Neural Network (DNN) model against maliciously altered data.
This paper aims at accelerating the adversarial training to enable fast development of robust DNN models against adversarial attacks.
arXiv Detail & Related papers (2022-10-13T10:31:51Z)
- Towards Adversarially Robust Deep Image Denoising [199.2458715635285]
This work systematically investigates the adversarial robustness of deep image denoisers (DIDs)
We propose a novel adversarial attack, namely Observation-based Zero-mean Attack (ObsAtk), to craft adversarial zero-mean perturbations on given noisy images.
To robustify DIDs, we propose hybrid adversarial training (HAT) that jointly trains DIDs with adversarial and non-adversarial noisy data.
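The hybrid objective can be sketched as a weighted sum of a standard denoising loss and an adversarial one; the MSE choice, the weight `lam`, and the function names here are assumptions rather than the paper's exact formulation:

```python
import numpy as np

def hybrid_at_loss(denoise, x_noisy, x_adv, x_clean, lam=0.5):
    """HAT-style objective: train the denoiser both on ordinary noisy inputs
    and on adversarially perturbed ones, reconstructing the same clean target.
    `denoise`, the MSE loss, and `lam` are illustrative assumptions."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    return mse(denoise(x_noisy), x_clean) + lam * mse(denoise(x_adv), x_clean)

# Usage with a trivial identity "denoiser":
x_clean = np.zeros(4)
x_noisy = x_clean + 0.1
x_adv = x_clean + 0.2   # a zero-mean attack would keep mean(x_adv - x_noisy) = 0
loss = hybrid_at_loss(lambda x: x, x_noisy, x_adv, x_clean)
```
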
arXiv Detail & Related papers (2022-01-12T10:23:14Z)
- Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images [1.2809525640002362]
In this study, we investigated the in-distribution (IND) and out-of-distribution (OOD) adversarial robustness of a representative CNN for lumbar disk shape reconstruction from spine MR images.
The results show that IND adversarial training can improve the CNN robustness to IND adversarial attacks, and larger training datasets may lead to higher IND robustness.
arXiv Detail & Related papers (2021-02-04T20:57:49Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- A Unified Plug-and-Play Framework for Effective Data Denoising and Robust Abstention [4.200576272300216]
We propose a unified filtering framework leveraging underlying data density.
Our framework can effectively denoise training data and avoid making predictions on uncertain test data points.
arXiv Detail & Related papers (2020-09-25T04:18:08Z)
- Self-Competitive Neural Networks [0.0]
Deep Neural Networks (DNNs) have improved the accuracy of classification problems in lots of applications.
One of the challenges in training a DNN is its need for an enriched dataset to increase its accuracy and avoid overfitting.
Recently, researchers have worked extensively to propose methods for data augmentation.
In this paper, we generate adversarial samples to refine the Domains of Attraction (DoAs) of each class. In this approach, at each stage, we use the model learned by the primary and generated adversarial data (up to that stage) to manipulate the primary data in a way that look complicated to
arXiv Detail & Related papers (2020-08-22T12:28:35Z)
- Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant Disease Diagnosis [64.82680813427054]
Plant diseases are one of the main threats to food security and crop production.
One popular approach is to cast this problem as a leaf image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates rectified meta-learning module into common CNN paradigm to train a noise-robust deep network without using extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.