Generalization Capabilities of Neural Cellular Automata for Medical Image Segmentation: A Robust and Lightweight Approach
- URL: http://arxiv.org/abs/2408.15557v1
- Date: Wed, 28 Aug 2024 06:18:55 GMT
- Title: Generalization Capabilities of Neural Cellular Automata for Medical Image Segmentation: A Robust and Lightweight Approach
- Authors: Steven Korevaar, Ruwan Tennakoon, Alireza Bab-Hadiashar
- Abstract summary: U-Nets exhibit a significant decline in performance when tested on data that deviates from the training distribution.
This paper investigates the implications of using models that are three orders of magnitude (i.e., roughly 1000x) smaller than a conventional U-Net.
- Score: 6.537479355990391
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In the field of medical imaging, the U-Net architecture, along with its variants, has established itself as a cornerstone for image segmentation tasks, particularly due to its strong performance when trained on limited datasets. Despite its impressive performance on identically distributed (in-domain) data, U-Nets exhibit a significant decline in performance when tested on data that deviates from the training distribution, i.e., out-of-distribution (out-of-domain) data. Current methodologies predominantly address this issue with generalization techniques that hinge on various forms of regularization, which have demonstrated moderate success in specific scenarios. This paper, however, ventures into uncharted territory by investigating the implications of using models that are three orders of magnitude (i.e., roughly 1000x) smaller than a conventional U-Net. A reduction of this magnitude in U-Net parameters typically harms both in-domain and out-of-domain performance, possibly due to a significantly reduced receptive field. To circumvent this issue, we explore the concept of Neural Cellular Automata (NCA), which, despite its simpler model structure, can attain larger receptive fields through recursive processing. Experimental results on two distinct datasets reveal that NCA outperforms traditional methods in terms of generalization, while still maintaining commendable in-domain (IID) performance.
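The key mechanism here is that a very small, shared per-cell update rule, applied recursively, can reach an effective receptive field comparable to that of a much larger feed-forward network. Below is a minimal PyTorch sketch of a segmentation-style NCA illustrating this; the class name, channel counts, and number of update steps are illustrative assumptions rather than the architecture and hyperparameters used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentationNCA(nn.Module):
    """Tiny NCA: one shared local update rule applied recursively over the image grid."""
    def __init__(self, in_channels=1, state_channels=16, hidden_channels=64, num_classes=2):
        super().__init__()
        self.state_channels = state_channels
        # "Perception": each cell sees the image and the states of its 3x3 neighbourhood.
        self.perceive = nn.Conv2d(in_channels + state_channels, hidden_channels,
                                  kernel_size=3, padding=1)
        # Per-cell update rule (a 1x1 conv acts like a small MLP shared by all cells).
        self.update = nn.Conv2d(hidden_channels, state_channels, kernel_size=1)
        # Per-cell read-out of segmentation logits from the final state.
        self.classify = nn.Conv2d(state_channels, num_classes, kernel_size=1)

    def forward(self, image, steps=64):
        b, _, h, w = image.shape
        state = torch.zeros(b, self.state_channels, h, w, device=image.device)
        for _ in range(steps):  # recursion: the effective receptive field grows with each step
            hidden = F.relu(self.perceive(torch.cat([image, state], dim=1)))
            state = state + self.update(hidden)  # residual state update
        return self.classify(state)  # per-pixel class logits

# Example: logits = SegmentationNCA()(torch.randn(1, 1, 128, 128), steps=64)
```

The parameter count of such a model is on the order of tens of thousands, versus tens of millions for a typical U-Net, which is the size gap the abstract refers to.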
Related papers
- Enhancing Fine-Grained Visual Recognition in the Low-Data Regime Through Feature Magnitude Regularization [23.78498670529746]
We introduce a regularization technique to ensure that the magnitudes of the extracted features are evenly distributed.
Despite its apparent simplicity, our approach has demonstrated significant performance improvements across various fine-grained visual recognition datasets.
arXiv Detail & Related papers (2024-09-03T07:32:46Z) - Few-shot Online Anomaly Detection and Segmentation [29.693357653538474]
This paper focuses on addressing the challenging yet practical few-shot online anomaly detection and segmentation (FOADS) task.
Under the FOADS framework, models are trained on a few-shot normal dataset, followed by inspection and improvement of their capabilities by leveraging unlabeled streaming data containing both normal and abnormal samples simultaneously.
In order to achieve improved performance with limited training samples, we employ multi-scale feature embedding extracted from a CNN pre-trained on ImageNet to obtain a robust representation.
arXiv Detail & Related papers (2024-03-27T02:24:00Z) - SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z) - Test-time Training for Data-efficient UCDR [22.400837122986175]
The Universal Cross-Domain Retrieval (UCDR) protocol is a pioneer in this field.
In this work, we explore the generalized retrieval problem in a data-efficient manner.
arXiv Detail & Related papers (2022-08-19T07:50:04Z) - FV-UPatches: Enhancing Universality in Finger Vein Recognition [0.6299766708197883]
We propose a universal learning-based framework, which achieves generalization while training with limited data.
The proposed framework shows application potential in other vein-based biometric recognition as well.
arXiv Detail & Related papers (2022-06-02T14:20:22Z) - CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z) - Generalizable Person Re-Identification via Self-Supervised Batch Norm Test-Time Adaption [63.7424680360004]
Batch Norm Test-time Adaption (BNTA) is a novel re-id framework that applies a self-supervised strategy to update BN parameters adaptively.
BNTA explores the domain-aware information within unlabeled target data before inference, and accordingly modulates the feature distribution normalized by BN to adapt to the target domain.
arXiv Detail & Related papers (2022-03-01T18:46:32Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - SelfReg: Self-supervised Contrastive Regularization for Domain Generalization [7.512471799525974]
We propose a new regularization method for domain generalization based on contrastive learning, self-supervised contrastive regularization (SelfReg).
The proposed approach uses only positive data pairs, thereby resolving various problems caused by negative-pair sampling.
On the recent DomainBed benchmark, the proposed method shows performance comparable to conventional state-of-the-art alternatives.
arXiv Detail & Related papers (2021-04-20T09:08:29Z) - Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to 'unseen' camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z) - Self-Challenging Improves Cross-Domain Generalization [81.99554996975372]
Convolutional Neural Networks (CNNs) perform image classification by activating dominant features that correlate with labels.
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
RSC iteratively challenges the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels; a minimal code sketch of this idea follows after the list.
arXiv Detail & Related papers (2020-07-05T21:42:26Z)
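The self-challenging idea referenced above can be summarized in a few lines. The following is a minimal, illustrative PyTorch sketch under assumed shapes (channel-wise masking of pooled features under a linear classifier head); the function name, drop ratio, and masking granularity are assumptions for illustration, not the exact procedure of the cited RSC paper.

```python
import torch

def self_challenging_logits(features, classifier, labels, drop_ratio=0.33):
    """Mute the feature channels that contribute most to the true-class score, so a loss
    on the returned logits pushes the network to rely on the remaining features.
    features: (B, C) pooled features; classifier: linear head; labels: (B,) class ids."""
    # Estimate channel importance as the gradient of the ground-truth class score
    # with respect to a detached copy of the features.
    probe = features.detach().clone().requires_grad_(True)
    gt_score = classifier(probe).gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(gt_score, probe)
    # Zero out the top `drop_ratio` fraction of channels per sample.
    k = max(1, int(drop_ratio * features.shape[1]))
    kth_largest = grads.topk(k, dim=1).values[:, -1:]   # per-sample importance threshold
    mask = (grads < kth_largest).float()                # 0 for the "dominant" channels
    # Gradients of the resulting loss still flow into the backbone via `features`.
    return classifier(features * mask)

# Assumed usage in a training step:
# loss = torch.nn.functional.cross_entropy(
#     self_challenging_logits(pooled_feats, model.fc, labels), labels)
```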