Test-Time Adaptable Neural Networks for Robust Medical Image
Segmentation
- URL: http://arxiv.org/abs/2004.04668v4
- Date: Sat, 23 Jan 2021 16:14:08 GMT
- Title: Test-Time Adaptable Neural Networks for Robust Medical Image
Segmentation
- Authors: Neerav Karani, Ertunc Erdil, Krishna Chaitanya, and Ender Konukoglu
- Abstract summary: Convolutional Neural Networks (CNNs) work very well for supervised learning problems.
In medical image segmentation, this premise is violated when there is a mismatch between training and test images in terms of their acquisition details.
We design the segmentation CNN as a concatenation of two sub-networks: a relatively shallow image normalization CNN, followed by a deep CNN that segments the normalized image.
- Score: 9.372152932156293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) work very well for supervised learning
problems when the training dataset is representative of the variations expected
to be encountered at test time. In medical image segmentation, this premise is
violated when there is a mismatch between training and test images in terms of
their acquisition details, such as the scanner model or the protocol.
Remarkable performance degradation of CNNs in this scenario is well documented
in the literature. To address this problem, we design the segmentation CNN as a
concatenation of two sub-networks: a relatively shallow image normalization
CNN, followed by a deep CNN that segments the normalized image. We train both
these sub-networks using a training dataset, consisting of annotated images
from a particular scanner and protocol setting. Now, at test time, we adapt the
image normalization sub-network for \emph{each test image}, guided by an
implicit prior on the predicted segmentation labels. We employ an independently
trained denoising autoencoder (DAE) in order to model such an implicit prior on
plausible anatomical segmentation labels. We validate the proposed idea on
multi-center Magnetic Resonance imaging datasets of three anatomies: brain,
heart and prostate. The proposed test-time adaptation consistently provides
performance improvement, demonstrating the promise and generality of the
approach. Being agnostic to the architecture of the deep CNN, the second
sub-network, the proposed design can be utilized with any segmentation network
to increase robustness to variations in imaging scanners and protocols. Our
code is available at:
\url{https://github.com/neerakara/test-time-adaptable-neural-networks-for-domain-generalization}.
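The two-sub-network design and per-image adaptation loop described above can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed toy settings: the layer widths, step count, learning rate, stand-in segmentation network, and untrained DAE are all illustrative, not the authors' actual configuration (see their repository for that).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shallow image normalization sub-network N (adapted per test image).
normalizer = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

# Deep segmentation sub-network S (frozen at test time); a tiny stand-in here.
segmenter = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),  # 2 classes
)
for p in segmenter.parameters():
    p.requires_grad_(False)

# Denoising autoencoder D over label maps, trained independently in the paper;
# it pulls implausible segmentations toward anatomically plausible ones.
dae = nn.Sequential(
    nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 2, 3, padding=1),
)
for p in dae.parameters():
    p.requires_grad_(False)

def adapt(x, steps=5, lr=1e-2):
    """Adapt only the normalizer so the prediction agrees with the DAE prior."""
    opt = torch.optim.Adam(normalizer.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        seg = torch.softmax(segmenter(normalizer(x)), dim=1)
        # Implicit prior: penalize disagreement between seg and DAE(seg);
        # gradients flow only into the normalizer's parameters.
        loss = ((dae(seg) - seg) ** 2).mean()
        loss.backward()
        opt.step()
    return torch.softmax(segmenter(normalizer(x)), dim=1)

x = torch.randn(1, 1, 32, 32)   # one (synthetic) test image
pred = adapt(x)                  # class probabilities, shape (1, 2, 32, 32)
```

Because the segmenter and DAE are frozen, only the shallow normalizer moves at test time, which is what makes the scheme cheap enough to run per image.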
Related papers
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- CapsNet for Medical Image Segmentation [8.612958742534673]
Convolutional Neural Networks (CNNs) have been successful in solving tasks in computer vision.
However, CNNs are sensitive to rotation and affine transformations, and their success relies on large-scale labeled datasets.
CapsNet is a new architecture that has achieved better robustness in representation learning.
arXiv Detail & Related papers (2022-03-16T21:15:07Z)
- SegTransVAE: Hybrid CNN -- Transformer with Regularization for medical image segmentation [0.0]
A novel network named SegTransVAE is proposed in this paper.
SegTransVAE is built on an encoder-decoder architecture, combining a transformer with a variational autoencoder (VAE) branch in the network.
Evaluation on several recently introduced datasets shows that SegTransVAE outperforms previous methods in Dice score and 95% Hausdorff distance.
arXiv Detail & Related papers (2022-01-21T08:02:55Z) - Semi-Supervised Medical Image Segmentation via Cross Teaching between
CNN and Transformer [11.381487613753004]
We present a framework for semi-supervised medical image segmentation by introducing the cross teaching between CNN and Transformer.
Notably, this work may be the first attempt to combine CNN and transformer for semi-supervised medical image segmentation and achieve promising results on a public benchmark.
arXiv Detail & Related papers (2021-12-09T13:22:38Z) - Weakly-supervised fire segmentation by visualizing intermediate CNN
layers [82.75113406937194]
Fire localization in images and videos is an important step for an autonomous system to combat fire incidents.
We consider weakly supervised segmentation of fire in images, in which only image labels are used to train the network.
We show that, for fire segmentation (a binary segmentation problem), the mean value of features in a mid-layer of a classification CNN can outperform the conventional Class Activation Mapping (CAM) method.
arXiv Detail & Related papers (2021-11-16T11:56:28Z)
- Convolution-Free Medical Image Segmentation using Transformers [8.130670465411239]
We show that a different method, based entirely on self-attention between neighboring image patches, can achieve competitive or better results.
We show that the proposed model can achieve segmentation accuracies that are better than the state of the art CNNs on three datasets.
arXiv Detail & Related papers (2021-02-26T18:49:13Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.