Frequency Domain Adversarial Training for Robust Volumetric Medical
Segmentation
- URL: http://arxiv.org/abs/2307.07269v2
- Date: Thu, 20 Jul 2023 17:59:25 GMT
- Title: Frequency Domain Adversarial Training for Robust Volumetric Medical
Segmentation
- Authors: Asif Hanif, Muzammal Naseer, Salman Khan, Mubarak Shah, Fahad Shahbaz
Khan
- Abstract summary: It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
- Score: 111.61781272232646
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is imperative to ensure the robustness of deep learning models in critical
applications such as healthcare. While recent advances in deep learning have
improved the performance of volumetric medical image segmentation models, these
models cannot be deployed for real-world applications immediately due to their
vulnerability to adversarial attacks. We present a 3D frequency domain
adversarial attack for volumetric medical image segmentation models and
demonstrate its advantages over conventional input or voxel domain attacks.
Using our proposed attack, we introduce a novel frequency domain adversarial
training approach for optimizing a robust model against voxel and frequency
domain attacks. Moreover, we propose a frequency consistency loss to regulate our
frequency domain adversarial training, which achieves a better tradeoff between the
model's performance on clean and adversarial samples. Code is publicly
available at https://github.com/asif-hanif/vafa.
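Note: the following is a minimal illustrative sketch of the general idea described in the abstract, namely perturbing a volume in the 3D frequency (DCT) domain rather than the voxel domain. The function name, the random-sign perturbation, and the epsilon budget are assumptions made for illustration; they are not the authors' VAFA attack or their frequency consistency loss (see the linked repository for the actual implementation).

```python
import numpy as np
from scipy.fft import dctn, idctn

def frequency_domain_perturbation(volume, epsilon=0.05, seed=0):
    """Shift the 3D DCT coefficients of `volume` by a bounded amount and
    map the result back to the voxel domain (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    coeffs = dctn(volume, norm="ortho")               # voxel -> frequency domain
    # Placeholder perturbation: a real attack would optimize this shift
    # against the segmentation loss instead of sampling random signs.
    delta = epsilon * rng.choice([-1.0, 1.0], size=coeffs.shape)
    adv_volume = idctn(coeffs + delta, norm="ortho")  # frequency -> voxel domain
    return np.clip(adv_volume, volume.min(), volume.max())

# Toy usage on a random 3D patch standing in for a CT/MR volume.
clean = np.random.rand(32, 32, 32).astype(np.float32)
adversarial = frequency_domain_perturbation(clean)
print("max voxel change:", np.abs(adversarial - clean).max())
```

In an actual attack, the coefficient shift would be optimized against the segmentation loss, and the resulting adversarial samples would be replayed during adversarial training.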
Related papers
- FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks [42.18755809782401]
Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples.
We propose a feature contrastive approach in the frequency domain to generate adversarial examples that are robust in both cross-domain and cross-model settings.
We demonstrate strong transferability of our generated adversarial perturbations through extensive cross-domain and cross-model experiments.
arXiv Detail & Related papers (2024-07-30T08:50:06Z)
- On Evaluating Adversarial Robustness of Volumetric Medical Segmentation Models [59.45628259925441]
Volumetric medical segmentation models have achieved significant success on organ and tumor-based segmentation tasks.
However, their vulnerability to adversarial attacks remains largely unexplored.
This underscores the importance of investigating the robustness of existing models.
arXiv Detail & Related papers (2024-06-12T17:59:42Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Phase-shifted Adversarial Training [8.89749787668458]
We analyze the behavior of adversarial training through the lens of response frequency.
PhaseAT significantly improves the convergence for high-frequency information.
This results in improved adversarial robustness by enabling the model to have smoothed predictions near each data point.
arXiv Detail & Related papers (2023-01-12T02:25:22Z)
- Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs [9.372231811393583]
Few-shot learning methods are being adopted in settings where data is not abundantly available.
Deep Neural Networks have been shown to be vulnerable to adversarial attacks.
We provide a framework to make few-shot segmentation models adversarially robust in the medical domain.
arXiv Detail & Related papers (2022-10-07T10:00:45Z)
- Frequency Domain Model Augmentation for Adversarial Attack [91.36850162147678]
For black-box attacks, the gap between the substitute model and the victim model is usually large.
We propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models.
arXiv Detail & Related papers (2022-07-12T08:26:21Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation [123.33816363589506]
We show the existence of a training-free adversarial perturbation under the no-box threat model.
Motivated by our observation that the high-frequency component (HFC) dominates in low-level features, we attack an image mainly by manipulating its frequency components.
Our method is even competitive with mainstream transfer-based black-box attacks.
arXiv Detail & Related papers (2022-03-09T09:51:00Z) - Frequency-based Automated Modulation Classification in the Presence of
Adversaries [17.930854969511046]
We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-11-02T17:12:22Z)