Style-invariant Cardiac Image Segmentation with Test-time Augmentation
- URL: http://arxiv.org/abs/2009.12193v1
- Date: Thu, 24 Sep 2020 08:27:40 GMT
- Title: Style-invariant Cardiac Image Segmentation with Test-time Augmentation
- Authors: Xiaoqiong Huang, Zejian Chen, Xin Yang, Zhendong Liu, Yuxin Zou,
Mingyuan Luo, Wufeng Xue, Dong Ni
- Abstract summary: Deep models often suffer from severe performance drop due to the appearance shift in the real clinical setting.
In this paper, we propose a novel style-invariant method for cardiac image segmentation.
- Score: 10.234493507401618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep models often suffer from severe performance drop due to the appearance
shift in the real clinical setting. Most of the existing learning-based methods
rely on images from multiple sites/vendors or even corresponding labels.
However, collecting enough unknown data to robustly model segmentation cannot
always hold since the complex appearance shift caused by imaging factors in
daily application. In this paper, we propose a novel style-invariant method for
cardiac image segmentation. Based on the zero-shot style transfer to remove
appearance shift and test-time augmentation to explore diverse underlying
anatomy, our proposed method is effective in combating the appearance shift.
Our contribution is three-fold. First, inspired by the spirit of universal
style transfer, we develop a zero-shot stylization for content images to
generate stylized images whose appearance matches that of the style images.
Second, we build up a robust cardiac segmentation model based on the U-Net
structure. Our framework mainly consists of two networks during testing: the ST
network for removing appearance shift and the segmentation network. Third, we
investigate test-time augmentation to explore transformed versions of the
stylized image for prediction and the results are merged. Notably, our proposed
framework adapts entirely at test time. Experimental results demonstrate that
our method is promising and generic for improving the generalization of deep
segmentation models.
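The test-time augmentation step described above (predict on transformed versions of the stylized image, then merge the results) can be sketched as follows. This is a minimal illustration using flips and averaging; the paper's actual transforms and merging rule may differ, and `tta_predict` and `toy_model` are hypothetical names.

```python
import numpy as np

def tta_predict(model, image, flips=((), (0,), (1,), (0, 1))):
    """Average predictions over flipped versions of the input.

    Each transform is applied, the model is run, and the inverse
    transform is applied to the prediction before merging.
    """
    merged = np.zeros_like(image, dtype=np.float64)
    for axes in flips:
        transformed = np.flip(image, axis=axes) if axes else image
        pred = model(transformed)
        # Flipping is self-inverse, so undo it with the same flip.
        restored = np.flip(pred, axis=axes) if axes else pred
        merged += restored
    return merged / len(flips)

# Toy "segmentation model": thresholds intensities into a binary mask.
toy_model = lambda img: (img > img.mean()).astype(np.float64)

image = np.random.default_rng(0).random((8, 8))
mask = tta_predict(toy_model, image)
print(mask.shape)  # (8, 8)
```

Because the merged mask averages several binary predictions, its values lie in [0, 1] and can be thresholded to obtain the final segmentation.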
Related papers
- Style-Extracting Diffusion Models for Semi-Supervised Histopathology Segmentation [6.479933058008389]
Style-Extracting Diffusion Models generate images with unseen characteristics beneficial for downstream tasks.
In this work, we show the capability of our method on a natural image dataset as a proof-of-concept.
We verify the added value of the generated images by showing improved segmentation results and lower performance variability between patients.
arXiv Detail & Related papers (2024-03-21T14:36:59Z)
- MoreStyle: Relax Low-frequency Constraint of Fourier-based Image Reconstruction in Generalizable Medical Image Segmentation [53.24011398381715]
We introduce a Plug-and-Play module for data augmentation called MoreStyle.
MoreStyle diversifies image styles by relaxing low-frequency constraints in Fourier space.
With the help of adversarial learning, MoreStyle pinpoints the most intricate style combinations within latent features.
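As a rough illustration of the Fourier-space idea behind MoreStyle (the abstract gives no implementation details), the sketch below jitters only the low-frequency amplitude of an image while leaving the phase, and hence most structural content, untouched. The window size `beta`, jitter bound `scale`, and the function name are assumed for illustration.

```python
import numpy as np

def perturb_low_freq(image, beta=0.1, scale=0.5, rng=None):
    """Randomly rescale the low-frequency amplitude spectrum of an image.

    beta controls the size of the centred low-frequency window; scale
    bounds the random amplitude jitter. Both values are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)

    h, w = image.shape
    ch, cw = h // 2, w // 2
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    # Jitter only the centred low-frequency block of the amplitude.
    jitter = 1.0 + scale * rng.uniform(-1, 1, size=(2 * bh, 2 * bw))
    amplitude[ch - bh:ch + bh, cw - bw:cw + bw] *= jitter

    styled = np.fft.ifft2(np.fft.ifftshift(amplitude * np.exp(1j * phase)))
    return np.real(styled)

image = np.random.default_rng(0).random((32, 32))
augmented = perturb_low_freq(image, rng=np.random.default_rng(1))
print(augmented.shape)  # (32, 32)
```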
arXiv Detail & Related papers (2024-03-18T11:38:47Z)
- Weakly supervised segmentation with point annotations for histopathology images via contrast-based variational model [7.021021047695508]
We propose a contrast-based variational model to generate segmentation results for histopathology images.
The proposed method considers the common characteristics of target regions in histopathology images and can be trained in an end-to-end manner.
It can generate more regionally consistent and smoother boundary segmentation, and is more robust to unlabelled 'novel' regions.
arXiv Detail & Related papers (2023-04-07T10:12:21Z)
- Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency [59.427074701985795]
Single-view reconstruction methods typically rely on viewpoint annotations, silhouettes, the absence of background, multiple views of the same instance, a template shape, or symmetry.
We avoid all of these supervisions and hypotheses by leveraging explicitly the consistency between images of different object instances.
Our main contributions are two approaches to leverage cross-instance consistency: (i) progressive conditioning, a training strategy to gradually specialize the model from category to instances in a curriculum learning fashion; (ii) swap reconstruction, a loss enforcing consistency between instances having similar shape or texture.
arXiv Detail & Related papers (2022-04-21T17:47:35Z)
- One-shot Weakly-Supervised Segmentation in Medical Images [12.184590794655517]
We present an innovative framework for 3D medical image segmentation with one-shot and weakly-supervised settings.
A propagation-reconstruction network is proposed to project scribbles from annotated volume to unlabeled 3D images.
A dual-level feature denoising module is designed to refine the scribbles based on anatomical- and pixel-level features.
arXiv Detail & Related papers (2021-11-21T09:14:13Z)
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method in kidney and renal tumor segmentation on abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Generalize Ultrasound Image Segmentation via Instant and Plug & Play Style Transfer [65.71330448991166]
Deep segmentation models often fail to generalize to images with unknown appearance.
Retraining models leads to high latency and complex pipelines.
We propose a novel method for robust segmentation under unknown appearance shifts.
arXiv Detail & Related papers (2021-01-11T05:45:30Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows to train image segmentation models without the need to acquire expensive annotations.
We test our proposed method on Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Remove Appearance Shift for Ultrasound Image Segmentation via Fast and Universal Style Transfer [13.355791568003559]
We propose a novel and intuitive framework to remove the appearance shift and hence improve the generalization ability of Deep Neural Networks (DNNs).
We follow the spirit of universal style transfer to remove appearance shifts, which had not been explored before for US images.
Our framework achieves the real-time speed required in clinical US scanning.
arXiv Detail & Related papers (2020-02-14T02:00:57Z)
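Universal style transfer of the kind referenced in the entries above is often realized with a whitening-coloring transform (WCT) on encoder features. The sketch below shows that generic transform on toy (C, N) feature matrices; it is an illustration under assumed shapes and names, not the authors' exact pipeline.

```python
import numpy as np

def whiten_color(content_feat, style_feat, eps=1e-5):
    """Whitening-coloring transform on (C, N) feature matrices.

    Whitens the content features, then re-colors them with the style
    feature covariance and mean, so the output matches the style
    statistics while keeping the content structure.
    """
    def center(f):
        mean = f.mean(axis=1, keepdims=True)
        return f - mean, mean

    fc, _ = center(content_feat)
    fs, style_mean = center(style_feat)

    # Whitening: remove the content covariance.
    cov_c = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    ec, vc = np.linalg.eigh(cov_c)
    whitened = vc @ np.diag(ec ** -0.5) @ vc.T @ fc

    # Coloring: impose the style covariance and mean.
    cov_s = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])
    es, vs = np.linalg.eigh(cov_s)
    colored = vs @ np.diag(es ** 0.5) @ vs.T @ whitened
    return colored + style_mean

rng = np.random.default_rng(0)
content = rng.random((4, 100))
style = rng.random((4, 100)) * 2.0 + 1.0
out = whiten_color(content, style)
```

In a full style-transfer pipeline, the transform would be applied to intermediate encoder activations and the result decoded back to an image; here plain random matrices stand in for those features.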
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.