Dynamic Perturbation-Adaptive Adversarial Training on Medical Image
Classification
- URL: http://arxiv.org/abs/2403.06798v1
- Date: Mon, 11 Mar 2024 15:16:20 GMT
- Title: Dynamic Perturbation-Adaptive Adversarial Training on Medical Image
Classification
- Authors: Shuai Li, Xiaoguang Ma, Shancheng Jiang, and Lu Meng
- Abstract summary: Adversarial examples (AEs) differ imperceptibly from raw data yet can fool trained networks, raising serious concerns about network robustness.
In this paper, we propose a dynamic perturbation-adaptive adversarial training (DPAAT) method, which places AT in a dynamic learning environment to generate adaptive data-level perturbations.
Comprehensive testing on the dermatology HAM10000 dataset shows that DPAAT not only achieves better robustness improvement and generalization preservation but also significantly enhances mean average precision and interpretability.
- Score: 9.039586043401972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remarkable successes have been achieved in Medical Image
Classification (MIC) recently, mainly due to the wide application of
convolutional neural networks (CNNs). However, adversarial examples (AEs),
which differ imperceptibly from raw data, raise serious concerns about
network robustness. Although adversarial training (AT) is recognized as an
effective defense against malevolent AEs, it is challenging to overcome the
decline in generalization that AT causes. In this paper, in order to preserve
high generalization while improving robustness, we propose a dynamic
perturbation-adaptive adversarial training (DPAAT) method, which places AT in
a dynamic learning environment to generate adaptive data-level perturbations
and provides a dynamically updated criterion, built from collected loss
information, that addresses both the fixed perturbation sizes of conventional
AT methods and their dependence on external transference. Comprehensive
testing on the dermatology HAM10000 dataset shows that DPAAT not only
achieves better robustness improvement and generalization preservation but
also significantly enhances mean average precision and interpretability on
various CNNs, indicating its great potential as a generic adversarial
training method for MIC.
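The abstract describes adaptive data-level perturbations governed by a
dynamically updated loss criterion, but does not give the update rule. The
following is a minimal PyTorch sketch of that general idea, assuming a
per-sample budget that shrinks when a sample's adversarial loss exceeds a
running mean and grows otherwise; the sample-indexed loader, the 0.9/1.1
scaling factors, and the budget bounds are illustrative assumptions, not the
paper's method.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps, alpha, iters=7):
    """PGD with a per-sample L_inf budget eps of shape [B, 1, 1, 1]."""
    delta = (torch.rand_like(x) * 2 - 1) * eps
    for _ in range(iters):
        delta.requires_grad_(True)
        g, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
        delta = delta.detach() + alpha * g.sign()
        delta = torch.max(torch.min(delta, eps), -eps)  # project per sample
        delta = (x + delta).clamp(0, 1) - x             # stay in image range
    return delta.detach()

def dpaat_epoch(model, loader, opt, eps, eps_min=1/255, eps_max=16/255):
    """One epoch; eps is an [N] CPU tensor of per-sample budgets, and the
    loader is assumed to yield (sample_index, image, label) triples."""
    running = None
    for idx, x, y in loader:
        e = eps[idx].view(-1, 1, 1, 1).to(x.device)
        delta = pgd(model, x, y, e, alpha=e / 4)
        adv_loss = F.cross_entropy(model(x + delta), y, reduction="none")
        loss = adv_loss.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Dynamically updated criterion: running mean of adversarial loss.
        running = loss.item() if running is None else 0.9 * running + 0.1 * loss.item()
        # Adapt each sample's budget relative to the criterion (assumed rule).
        scale = torch.full_like(adv_loss, 1.1)
        scale[adv_loss.detach() > running] = 0.9
        eps[idx] = (eps[idx] * scale.cpu()).clamp(eps_min, eps_max)
```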
Related papers
- Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking [6.189613073024831]
This study introduces an innovative Local Feature Masking (LFM) strategy aimed at fortifying the performance of Convolutional Neural Networks (CNNs).
During the training phase, we strategically incorporate random feature masking in the shallow layers of CNNs.
LFM compels the network to adapt by leveraging remaining features to compensate for the absence of certain semantic features.
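A hedged PyTorch sketch of the masking idea: during training, a random
spatial patch of a shallow feature map is zeroed so the network must
compensate with the remaining features. The patch size, drop probability, and
placement after an early conv block are assumptions, not the paper's exact
design.

```python
import torch
import torch.nn as nn

class LocalFeatureMask(nn.Module):
    """Zero a random spatial patch of the feature map during training."""
    def __init__(self, patch_frac=0.3, p=0.5):
        super().__init__()
        self.patch_frac, self.p = patch_frac, p

    def forward(self, feat):
        if not self.training or torch.rand(1).item() > self.p:
            return feat
        b, c, h, w = feat.shape
        ph = max(1, int(h * self.patch_frac))
        pw = max(1, int(w * self.patch_frac))
        top = torch.randint(0, h - ph + 1, (1,)).item()
        left = torch.randint(0, w - pw + 1, (1,)).item()
        mask = torch.ones_like(feat)
        mask[:, :, top:top + ph, left:left + pw] = 0
        return feat * mask

# Inserted after a shallow conv block, e.g.:
# nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), LocalFeatureMask())
```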
arXiv Detail & Related papers (2024-07-18T16:25:16Z)
- AdaAugment: A Tuning-Free and Adaptive Approach to Enhance Data Augmentation [12.697608744311122]
AdaAugment is a tuning-free Adaptive Augmentation method.
It dynamically adjusts augmentation magnitudes for individual training samples based on real-time feedback from the target network.
It consistently outperforms other state-of-the-art DA methods in effectiveness while maintaining remarkable efficiency.
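A toy PyTorch sketch of the feedback loop described above, assuming one
magnitude per training sample that rises for easy samples (below-average
loss) and falls for hard ones; the single rotation augmentation and the
±0.05 update are placeholders for AdaAugment's actual policy.

```python
import torch
import torchvision.transforms.functional as TF

class AdaptiveMagnitude:
    """Per-sample augmentation magnitude driven by real-time loss feedback."""
    def __init__(self, n_samples, lr=0.05):
        self.m = torch.full((n_samples,), 0.5)  # magnitudes in [0, 1]
        self.lr = lr

    def augment(self, img, idx):
        # Map the sample's magnitude to a rotation angle in [-30, 30] degrees.
        angle = (self.m[idx].item() * 2 - 1) * 30.0
        return TF.rotate(img, angle)

    def update(self, idx, sample_loss, mean_loss):
        # Easy sample (low loss) -> stronger augmentation, and vice versa.
        step = self.lr if sample_loss < mean_loss else -self.lr
        self.m[idx] = (self.m[idx] + step).clamp(0.0, 1.0)
```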
arXiv Detail & Related papers (2024-05-19T06:54:03Z)
- The Effectiveness of Random Forgetting for Robust Generalization [21.163070161951868]
We introduce a novel learning paradigm called "Forget to Mitigate Overfitting" (FOMO).
FOMO alternates between the forgetting phase, which randomly forgets a subset of weights, and the relearning phase, which emphasizes learning generalizable features.
Our experiments show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy.
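A minimal sketch of the forgetting phase, under the assumption that
"forgetting" means re-initializing a random fraction of each weight matrix;
the relearning phase is then ordinary (adversarial) training.

```python
import torch

@torch.no_grad()
def forget_phase(model, frac=0.1):
    """Re-initialize a random fraction of each weight matrix (one plausible
    reading of FOMO's forgetting step; frac is an assumption)."""
    for p in model.parameters():
        if p.dim() < 2:            # skip biases and norm parameters
            continue
        mask = torch.rand_like(p) < frac
        fresh = torch.empty_like(p)
        torch.nn.init.kaiming_uniform_(fresh)
        p[mask] = fresh[mask]

# Training alternates: several epochs of (adversarial) relearning, then one
# call to forget_phase(model), and so on until convergence.
```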
arXiv Detail & Related papers (2024-02-18T23:14:40Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
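One natural reading of the KL-regularized weighting has a convenient closed
form: maximizing sum_i w_i * l_i - (1/lam) KL(w || uniform) over the
probability simplex yields w_i proportional to exp(lam * l_i), so more
attackable instances receive larger weights. A sketch with an assumed
temperature lam; whether the paper uses exactly this objective is not stated
in the summary.

```python
import torch

def kl_regularized_weights(losses, lam=1.0):
    """Closed form of max_w sum_i w_i*l_i - (1/lam) KL(w || uniform) on the
    simplex: w_i ∝ exp(lam * l_i). lam is an assumed temperature."""
    w = torch.softmax(lam * losses.detach(), dim=0)
    return w * losses.numel()  # rescale so the weights average to 1

# Inside an AT step, after computing per-sample adversarial losses `adv`:
#   w = kl_regularized_weights(adv)
#   loss = (w * adv).mean()
```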
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
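A hedged sketch of a two-branch normalization layer in the spirit of TWINS:
one "wing" keeps the adversarially pre-trained batch-norm statistics frozen
while the other adapts to the fine-tuning data; averaging the two outputs
equally is our assumption about how the wings combine.

```python
import torch.nn as nn

class TwoWingNorm(nn.Module):
    """Fixed wing with frozen pre-trained statistics + adaptive wing."""
    def __init__(self, bn_pretrained: nn.BatchNorm2d):
        super().__init__()
        self.fixed = bn_pretrained
        for p in self.fixed.parameters():
            p.requires_grad_(False)
        self.adaptive = nn.BatchNorm2d(bn_pretrained.num_features)

    def train(self, mode=True):
        super().train(mode)
        self.fixed.eval()  # keep the pre-trained statistics frozen
        return self

    def forward(self, x):
        return 0.5 * (self.fixed(x) + self.adaptive(x))
```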
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Addressing Mistake Severity in Neural Networks with Semantic Knowledge [0.0]
Most robust training techniques aim to improve model accuracy on perturbed inputs.
As an alternate form of robustness, we aim to reduce the severity of mistakes made by neural networks in challenging conditions.
We leverage current adversarial training methods to generate targeted adversarial attacks during the training process.
Results demonstrate that our approach performs better with respect to mistake severity compared to standard and adversarially trained models.
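A sketch of the targeted-attack ingredient: a one-step targeted attack plus a
target-selection rule based on an assumed semantic distance matrix (e.g.,
derived from a WordNet hierarchy). Training on AEs targeted at semantically
distant classes is one plausible way to penalize severe confusions; the
paper's actual selection rule may differ.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, y_target, eps=4/255):
    """One-step targeted attack: descend the target class's cross-entropy."""
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y_target), x)
    return (x - eps * grad.sign()).clamp(0, 1).detach()

# `sem_dist` is an assumed [K, K] matrix of semantic distances between
# classes. Picking the most distant class as the attack target, then training
# on the resulting AEs, is our illustrative stand-in for the paper's rule.
def severe_targets(y, sem_dist):
    return sem_dist[y].argmax(dim=1)
```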
arXiv Detail & Related papers (2022-11-21T22:01:36Z)
- Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization [58.641705224371876]
The adversarial vulnerability of deep neural networks (DNNs) has been actively investigated in the past several years.
This paper investigates the scale-variant property of cross-entropy loss, which is the most commonly used loss function in classification tasks.
We show that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training.
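Because cross-entropy is scale-variant, a raw logit margin can be inflated
simply by scaling the logits. One plausible scale-invariant formulation,
sketched below, divides the margin by the input-gradient norm of the margin;
the paper's exact normalization and the beta weight are assumptions.

```python
import torch
import torch.nn.functional as F

def effective_margin_loss(model, x, y, beta=0.1):
    """Cross-entropy minus a reward for large effective (normalized) margins.
    Backward through this loss requires second-order gradients."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    top2 = logits.topk(2, dim=1)
    correct = logits.gather(1, y[:, None]).squeeze(1)
    # Runner-up logit: the best non-true class.
    runner = torch.where(top2.indices[:, 0] == y,
                         top2.values[:, 1], top2.values[:, 0])
    margin = correct - runner
    grad, = torch.autograd.grad(margin.sum(), x, create_graph=True)
    eff_margin = margin / (grad.flatten(1).norm(dim=1) + 1e-12)
    return F.cross_entropy(logits, y) - beta * eff_margin.mean()
```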
arXiv Detail & Related papers (2022-10-11T03:16:56Z)
- Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization [31.516568778193157]
Adversarial training (AT) is often adopted to improve the robustness of deep neural networks (DNNs).
In this work, we propose an approach based on the Jacobian norm and Selective Input Gradient Regularization (J-SIGR).
Experiments demonstrate that the proposed J-SIGR confers improved robustness against transferred adversarial attacks, and we also show that the predictions from the neural network are easy to interpret.
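A hedged sketch of selective input gradient regularization: penalize only the
largest-magnitude entries of the input gradient, which also keeps the
retained saliency easy to interpret. The top-k selection rule and lambda are
assumptions standing in for the paper's Jacobian-norm formulation.

```python
import torch
import torch.nn.functional as F

def jsigr_loss(model, x, y, lam=0.01, keep=0.1):
    """Cross-entropy plus a penalty on the top-`keep` fraction of
    input-gradient magnitudes (an assumed 'selective' rule)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    g = grad.flatten(1).abs()
    k = max(1, int(keep * g.shape[1]))
    top = g.topk(k, dim=1).values          # regularize only selected entries
    return loss + lam * (top ** 2).sum(dim=1).mean()
```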
arXiv Detail & Related papers (2022-07-09T01:06:41Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to mitigate their impact.
We propose a large-batch adversarial training framework implemented over multiple machines.
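A minimal DDP sketch of the large-batch setup: each rank runs PGD on its
local shard (using the unwrapped replica, so no gradient synchronization
fires during attack crafting), and the single backward pass on the DDP
module all-reduces gradients across machines. Launch with torchrun; the
optimizer settings and PGD budget are assumptions.

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP

def train_ddp(model, loader, eps=8/255, alpha=2/255, iters=7):
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    model = DDP(model.cuda(rank), device_ids=[rank])
    local = model.module                   # replica for attack generation
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for x, y in loader:                    # loader uses a DistributedSampler
        x, y = x.cuda(rank), y.cuda(rank)
        delta = torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(iters):             # local PGD, no communication
            delta.requires_grad_(True)
            g, = torch.autograd.grad(
                F.cross_entropy(local(x + delta), y), delta)
            delta = (delta.detach() + alpha * g.sign()).clamp(-eps, eps)
        opt.zero_grad()
        adv = (x + delta).clamp(0, 1)
        F.cross_entropy(model(adv), y).backward()  # gradients all-reduced
        opt.step()
```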
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via the application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
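Jet tagging operates on tabular kinematic features rather than images, so the
same recipe reduces to FGSM on feature vectors. A self-contained sketch with
a dummy MLP tagger and random jets; the architecture, feature count, and
epsilon are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_step(model, x, y, eps=0.01):
    """One-step L_inf attack on the input features."""
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * grad.sign()).detach()

# Dummy binary tagger over 16 jet features (architecture is an assumption).
tagger = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(tagger.parameters(), lr=1e-3)
x, y = torch.randn(128, 16), torch.randint(0, 2, (128,))  # random "jets"
x_adv = fgsm_step(tagger, x, y)           # simulated attack
loss = F.cross_entropy(tagger(x_adv), y)  # train on perturbed inputs
opt.zero_grad(); loss.backward(); opt.step()
```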
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Guided Interpolation for Adversarial Training [73.91493448651306]
As training progresses, the training data becomes less and less attackable, undermining the robustness enhancement.
We propose the guided interpolation framework (GIF), which employs the previous epoch's meta information to guide the data's adversarial variants.
Compared with the vanilla mixup, the GIF can provide a higher ratio of attackable data, which is beneficial to the robustness enhancement.
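A hedged sketch of guidance by the previous epoch's meta information: pair
each sample with a partner chosen from last epoch's loss ranking (the j-th
easiest with the j-th hardest) and mix the pair, keeping more of the data
attackable than uniform random mixup would; the pairing heuristic is our
stand-in for GIF's actual criterion.

```python
import torch

def guided_mixup(x, y_onehot, prev_loss, alpha=1.0):
    """Mixup whose pairing is guided by previous-epoch per-sample losses."""
    order = prev_loss.argsort()           # indices from easy to hard
    partner = torch.empty_like(order)
    partner[order] = order.flip(0)        # j-th easiest <-> j-th hardest
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1 - lam) * x[partner]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[partner]
    return x_mix, y_mix
```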
arXiv Detail & Related papers (2021-02-15T03:55:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.