Robust Prototypical Few-Shot Organ Segmentation with Regularized
Neural-ODEs
- URL: http://arxiv.org/abs/2208.12428v1
- Date: Fri, 26 Aug 2022 03:53:04 GMT
- Title: Robust Prototypical Few-Shot Organ Segmentation with Regularized
Neural-ODEs
- Authors: Prashant Pandey, Mustafa Chasmai, Tanuj Sur, Brejesh Lall
- Abstract summary: We propose Regularized Prototypical Neural Ordinary Differential Equation (R-PNODE).
R-PNODE constrains support and query features from the same classes to lie closer in the representation space.
We show that R-PNODE exhibits increased adversarial robustness for a wide array of these attacks.
- Score: 10.054960979867584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the tremendous progress made by deep learning models in image
semantic segmentation, they typically require large numbers of annotated examples,
and increasing attention is being directed to problem settings like Few-Shot
Learning (FSL), where only a small amount of annotation is needed to generalise
to novel classes. This is especially seen in medical domains
where dense pixel-level annotations are expensive to obtain. In this paper, we
propose Regularized Prototypical Neural Ordinary Differential Equation
(R-PNODE), a method that leverages intrinsic properties of Neural-ODEs,
assisted and enhanced by additional cluster and consistency losses to perform
Few-Shot Segmentation (FSS) of organs. R-PNODE constrains support and query
features from the same classes to lie closer in the representation space,
thereby improving performance over existing Convolutional Neural Network
(CNN)-based FSS methods. We further demonstrate that while many existing deep
CNN-based methods tend to be extremely vulnerable to adversarial
attacks, R-PNODE exhibits increased adversarial robustness for a wide array of
these attacks. We experiment with three publicly available multi-organ
segmentation datasets in both in-domain and cross-domain FSS settings to
demonstrate the efficacy of our method. In addition, we perform experiments
with seven commonly used adversarial attacks in various settings to demonstrate
R-PNODE's robustness. R-PNODE outperforms the baselines for FSS by significant
margins and also shows superior performance for a wide array of attacks varying
in intensity and design.
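The abstract describes two mechanisms: a Neural-ODE feature extractor whose intrinsic properties are leveraged for robustness, and auxiliary cluster and consistency losses that pull same-class support and query features toward a common prototype. Below is a minimal, illustrative PyTorch sketch of those two ideas, not the authors' implementation; the fixed-step Euler solver, the layer sizes, and the simple mean-prototype cluster loss (names such as ODEBlock and cluster_loss) are assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ODEFunc(nn.Module):
    """Derivative network f(h) that drives the feature dynamics."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, h):
        return self.net(h)


class ODEBlock(nn.Module):
    """Integrates dh/dt = f(h) from t=0 to t=1 with a fixed-step Euler solver
    (an assumption; any ODE solver could be substituted)."""
    def __init__(self, func: ODEFunc, steps: int = 8):
        super().__init__()
        self.func = func
        self.steps = steps

    def forward(self, h):
        dt = 1.0 / self.steps
        for _ in range(self.steps):
            h = h + dt * self.func(h)  # Euler update approximating the continuous feature flow
        return h


def cluster_loss(support_feats, query_feats):
    """Pulls query features toward the prototype of same-class support features."""
    prototype = support_feats.mean(dim=0, keepdim=True)  # class prototype from the support set
    return F.mse_loss(query_feats, prototype.expand_as(query_feats))


if __name__ == "__main__":
    block = ODEBlock(ODEFunc(channels=64), steps=8)
    support = torch.randn(5, 64, 32, 32)   # hypothetical 5-shot support features
    query = torch.randn(2, 64, 32, 32)     # hypothetical query features
    s, q = block(support), block(query)
    loss = cluster_loss(s.flatten(1), q.flatten(1))
    loss.backward()
    print(f"cluster loss: {loss.item():.4f}")
```

In the full method such a cluster term would be combined with consistency and segmentation losses over episodic support/query batches; the sketch only shows how same-class features can be drawn toward a shared prototype after passing through the ODE block.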
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We demonstrate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
arXiv Detail & Related papers (2022-12-11T01:51:31Z)
- Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs [9.372231811393583]
Few-shot Learning methods are being adopted in settings where data is not abundantly available.
Deep Neural Networks have been shown to be vulnerable to adversarial attacks.
We provide a framework to make few-shot segmentation models adversarially robust in the medical domain.
arXiv Detail & Related papers (2022-10-07T10:00:45Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection [7.324459578044212]
Face presentation attack detection (PAD) is attracting a lot of attention and playing a key role in securing face recognition systems.
We propose a dual-stream convolutional neural network (CNN) framework to deal with unseen scenarios.
We validate the design of our proposed PAD solution through a step-wise ablation study.
arXiv Detail & Related papers (2021-09-16T13:06:43Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Contextual Fusion For Adversarial Robustness [0.0]
Deep neural networks are usually designed to process one particular information stream and are susceptible to various types of adversarial perturbations.
We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN.
For gradient based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data.
arXiv Detail & Related papers (2020-11-18T20:13:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.