Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs
- URL: http://arxiv.org/abs/2210.03429v1
- Date: Fri, 7 Oct 2022 10:00:45 GMT
- Title: Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs
- Authors: Prashant Pandey, Aleti Vardhan, Mustafa Chasmai, Tanuj Sur, Brejesh
Lall
- Abstract summary: Few-shot Learning methods are being adopted in settings where data is not abundantly available.
Deep Neural Networks have been shown to be vulnerable to adversarial attacks.
We provide a framework to make few-shot segmentation models adversarially robust in the medical domain.
- Score: 9.372231811393583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot Learning (FSL) methods are being adopted in settings where data is
not abundantly available. This is especially seen in medical domains where the
annotations are expensive to obtain. Deep Neural Networks have been shown to be
vulnerable to adversarial attacks. This is even more severe in the case of FSL
due to the lack of a large number of training examples. In this paper, we
provide a framework to make few-shot segmentation models adversarially robust
in the medical domain where such attacks can severely impact the decisions made
by clinicians who use them. We propose a novel robust few-shot segmentation
framework, Prototypical Neural Ordinary Differential Equation (PNODE), that
provides defense against gradient-based adversarial attacks. We show that our
framework is more robust than traditional adversarial defense mechanisms such as
adversarial training, which increases training time and is robust only to the
limited types of attacks represented by the adversarial examples seen during
training. Our proposed framework generalises well to common adversarial attacks
like FGSM, PGD and SMIA while keeping the number of model parameters comparable
to existing few-shot segmentation models. We
show the effectiveness of our proposed approach on three publicly available
multi-organ segmentation datasets in both in-domain and cross-domain settings
by attacking the support and query sets without the need for ad-hoc adversarial
training.
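The abstract names the ingredients of PNODE (a prototypical few-shot segmentation pipeline with a Neural-ODE) but not the architecture itself, so the following is only a minimal, hypothetical sketch of the underlying recipe, not the authors' implementation: a shared encoder extracts support and query features, a Neural-ODE block (here a fixed-step Euler discretisation standing in for whatever solver PNODE uses) refines them, masked average pooling over the support features yields class prototypes, and the query is segmented by cosine similarity to those prototypes. All module names, channel sizes and the similarity temperature are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ODEFunc(nn.Module):
    """dh/dt = f(h): a small conv net defines the feature dynamics."""
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, h):
        return self.net(h)

class ODEBlock(nn.Module):
    """Fixed-step Euler integration of the feature ODE from t=0 to t=1
    (an illustration only; PNODE may use a different solver)."""
    def __init__(self, func, steps=4):
        super().__init__()
        self.func, self.steps = func, steps

    def forward(self, h):
        dt = 1.0 / self.steps
        for _ in range(self.steps):
            h = h + dt * self.func(h)  # one Euler step
        return h

class ProtoSegNet(nn.Module):
    """Hypothetical prototypical segmentation network with an ODE feature block."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.ode = ODEBlock(ODEFunc(feat_ch))

    def features(self, x):
        return self.ode(self.encoder(x))  # (B, C, H, W)

    def forward(self, support_img, support_mask, query_img):
        fs, fq = self.features(support_img), self.features(query_img)
        mask = F.interpolate(support_mask, size=fs.shape[-2:], mode="nearest")
        # Masked average pooling -> background and foreground prototypes.
        proto_fg = (fs * mask).sum(dim=(0, 2, 3)) / (mask.sum() + 1e-6)
        proto_bg = (fs * (1 - mask)).sum(dim=(0, 2, 3)) / ((1 - mask).sum() + 1e-6)
        protos = torch.stack([proto_bg, proto_fg])                  # (2, C)
        # Per-pixel cosine similarity between query features and each prototype.
        sim = torch.einsum("qchw,kc->qkhw",
                           F.normalize(fq, dim=1), F.normalize(protos, dim=1))
        return sim * 20.0                                           # (Q, 2, H, W) logits

# Toy 1-shot episode: one support slice with its mask, one query slice.
net = ProtoSegNet()
support = torch.randn(1, 1, 64, 64)
support_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
query = torch.randn(1, 1, 64, 64)
pred = net(support, support_mask, query).argmax(dim=1)             # (1, 64, 64)
```

A commonly cited intuition for using an ODE block as a defence is that clean and slightly perturbed inputs evolve along nearby trajectories of the same learned vector field, which tends to keep their features, and hence their prototype assignments, close.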
Related papers
- Frequency Domain Adversarial Training for Robust Volumetric Medical
Segmentation [111.61781272232646]
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
arXiv Detail & Related papers (2023-07-14T10:50:43Z)
- Robust Prototypical Few-Shot Organ Segmentation with Regularized Neural-ODEs [10.054960979867584]
We propose Regularized Prototypical Neural Ordinary Differential Equation (R-PNODE).
R-PNODE constrains support and query features from the same classes to lie closer in the representation space.
We show that R-PNODE exhibits increased adversarial robustness for a wide array of these attacks.
arXiv Detail & Related papers (2022-08-26T03:53:04Z)
- SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness [63.726895965125145]
Deep neural network-based image classifiers are vulnerable to adversarial perturbations.
In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD.
Since SegPGD creates more effective adversarial examples, adversarial training with SegPGD can boost the robustness of segmentation models (a plain PGD baseline is sketched after this entry).
arXiv Detail & Related papers (2022-07-25T17:56:54Z)
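For reference, the sketch below shows plain L-infinity PGD against a segmentation model, the kind of gradient-based attack both the main abstract and the SegPGD entry above build on; it is not the SegPGD variant itself (its modified per-pixel loss weighting is not described here), and `model` is assumed to be any network returning per-pixel class logits.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, image, target_mask, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD for segmentation.

    image:       (B, C, H, W) tensor with values in [0, 1]
    target_mask: (B, H, W) tensor of ground-truth class indices
    Returns an adversarial image within an eps-ball of the original.
    """
    adv = (image + torch.empty_like(image).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target_mask)   # mean per-pixel loss
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                # ascend the loss
            adv = image + (adv - image).clamp(-eps, eps)   # project onto eps-ball
            adv = adv.clamp(0, 1)                          # stay in valid image range
    return adv.detach()
```

FGSM, also named in the main abstract, is essentially the single-step special case: one step of size eps from the clean image with no random start.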
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines (a minimal single-device AT step is sketched after this entry).
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
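For contrast with the multi-machine, large-batch scheme this entry proposes (and with the ad-hoc adversarial training that the main abstract argues against), a minimal single-device AT step is sketched below; it uses one-step FGSM for the inner maximisation to keep the example short, and `model`, `loader` and `optimizer` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """One-step FGSM: a cheap stand-in for the inner maximisation of AT."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=8/255):
    """Min-max training: craft adversarial examples, then minimise loss on them."""
    model.train()
    for x, y in loader:                          # y: class labels (or per-pixel masks)
        x_adv = fgsm(model, x, y, eps)           # inner maximisation (attack)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimisation (defence)
        loss.backward()
        optimizer.step()
```

The downsides the main abstract points out are visible here: every batch pays for an extra attack pass, and the model only ever sees the particular perturbations that the chosen inner attack produces.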
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Practical No-box Adversarial Attacks against DNNs [31.808770437120536]
We investigate no-box adversarial examples, where the attacker can access neither the model information nor the training set, and cannot query the model.
We propose three mechanisms for training with a very small dataset and find that prototypical reconstruction is the most effective.
Our approach significantly diminishes the average prediction accuracy of the system to only 15.40%, which is on par with the attack that transfers adversarial examples from a pre-trained Arcface model.
arXiv Detail & Related papers (2020-12-04T11:10:03Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Adversarial Feature Desensitization [12.401175943131268]
We propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field.
Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs.
arXiv Detail & Related papers (2020-06-08T14:20:02Z)
- Class-Aware Domain Adaptation for Improving Adversarial Robustness [27.24720754239852]
Adversarial training has been proposed to train networks by injecting adversarial examples into the training data.
We propose a novel Class-Aware Domain Adaptation (CADA) method for adversarial defense without directly applying adversarial training.
arXiv Detail & Related papers (2020-05-10T03:45:19Z)
- Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation [79.42338812621874]
Adversarial training is promising for improving robustness of deep neural networks towards adversarial perturbations.
We formulate a general adversarial training procedure that can perform decently on both adversarial and clean samples.
We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect.
arXiv Detail & Related papers (2020-03-14T05:06:49Z)