Domain Adaptive Relational Reasoning for 3D Multi-Organ Segmentation
- URL: http://arxiv.org/abs/2005.09120v2
- Date: Sat, 11 Jul 2020 20:23:19 GMT
- Title: Domain Adaptive Relational Reasoning for 3D Multi-Organ Segmentation
- Authors: Shuhao Fu, Yongyi Lu, Yan Wang, Yuyin Zhou, Wei Shen, Elliot Fishman,
Alan Yuille
- Abstract summary: Our method is inspired by the fact that the spatial relationship between internal structures in medical images is relatively fixed.
We formulate the spatial relationship by solving a jigsaw puzzle task, recovering a CT scan from its shuffled patches, and jointly train it with the organ segmentation task.
To guarantee the transferability of the learned spatial relationship to multiple domains, we introduce two schemes: 1) Employing a super-resolution network, also jointly trained with the segmentation model, to standardize medical images from different domains to a certain spatial resolution; 2) Adapting the spatial relationship for a test image by test-time jigsaw puzzle training.
- Score: 17.504340316130023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel unsupervised domain adaptation (UDA)
method, named Domain Adaptive Relational Reasoning (DARR), to generalize 3D
multi-organ segmentation models to medical data collected from different
scanners and/or protocols (domains). Our method is inspired by the fact that
the spatial relationship between internal structures in medical images is
relatively fixed, e.g., a spleen is always located at the tail of a pancreas,
which serves as a latent variable to transfer the knowledge shared across
multiple domains. We formulate the spatial relationship by solving a jigsaw
puzzle task, i.e., recovering a CT scan from its shuffled patches, and jointly
train it with the organ segmentation task. To guarantee the transferability of
the learned spatial relationship to multiple domains, we additionally introduce
two schemes: 1) Employing a super-resolution network also jointly trained with
the segmentation model to standardize medical images from different domains to a
certain spatial resolution; 2) Adapting the spatial relationship for a test
image by test-time jigsaw puzzle training. Experimental results show that our
method improves the performance by 29.60% DSC on target datasets on average
without using any data from the target domain during training.
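To make the two schemes above concrete, below is a minimal PyTorch-style sketch of joint jigsaw/segmentation training and test-time jigsaw adaptation. It is an illustration under assumptions, not the authors' implementation: the jigsaw task is cast as per-patch position classification on 2D slices, the toy network, the 3x3 grid, and the loss weight `lam` are invented for brevity, and the super-resolution standardization network is omitted entirely.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

GRID = 3  # assumed: cut each slice into a 3x3 grid of patches


class SegJigsawNet(nn.Module):
    """Shared encoder with a segmentation head and a jigsaw (patch-position) head."""

    def __init__(self, in_ch=1, n_classes=14, feat=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(feat, n_classes, 1)
        # classify the original grid position of a shuffled patch
        self.jig_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, GRID * GRID)
        )

    def forward_seg(self, x):
        return self.seg_head(self.encoder(x))

    def forward_jigsaw(self, patches):
        return self.jig_head(self.encoder(patches))


def shuffle_patches(x):
    """Cut each image into GRID x GRID patches, shuffle them, and return the
    shuffled patches together with their original position indices (jigsaw targets)."""
    b, c, H, W = x.shape
    h, w = H // GRID, W // GRID
    patches = x.unfold(2, h, h).unfold(3, w, w)               # (b, c, G, G, h, w)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, GRID * GRID, c, h, w)
    perm = torch.randperm(GRID * GRID)
    shuffled = patches[:, perm].reshape(b * GRID * GRID, c, h, w)
    return shuffled, perm.repeat(b).to(x.device)


def joint_training_step(model, x_src, y_src, opt, lam=0.1):
    """One source-domain step: supervised segmentation loss + jigsaw loss."""
    opt.zero_grad()
    seg_loss = F.cross_entropy(model.forward_seg(x_src), y_src)
    patches, pos = shuffle_patches(x_src)
    jig_loss = F.cross_entropy(model.forward_jigsaw(patches), pos)
    (seg_loss + lam * jig_loss).backward()
    opt.step()
    return seg_loss.item(), jig_loss.item()


def test_time_adapt(model, x_tgt, steps=10, lr=1e-4):
    """Adapt to one unlabeled target image with the jigsaw loss only, then segment
    (the 'test-time jigsaw puzzle training' scheme)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        patches, pos = shuffle_patches(x_tgt)
        F.cross_entropy(model.forward_jigsaw(patches), pos).backward()
        opt.step()
    with torch.no_grad():
        return model.forward_seg(x_tgt).argmax(1)


if __name__ == "__main__":
    model = SegJigsawNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(2, 1, 96, 96), torch.randint(0, 14, (2, 96, 96))
    print(joint_training_step(model, x, y, opt))
    print(test_time_adapt(model, torch.randn(1, 1, 96, 96)).shape)
```

For brevity the sketch draws a single random permutation per batch; in practice one would sample permutations per image (or from a fixed permutation set) and work on 3D volumes rather than 2D slices.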
Related papers
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
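The contrastive pairing scheme mentioned in the OCT segmentation entry above, which "leverages similarity between nearby slices in 3D", can be sketched roughly as follows: embeddings of slices from the same volume whose indices are close are treated as positive pairs in an InfoNCE-style loss. The encoder output, gap threshold, and temperature here are assumptions for illustration, not that paper's implementation.

```python
import torch
import torch.nn.functional as F


def nearby_slice_contrastive_loss(emb, slice_idx, vol_idx, max_gap=2, tau=0.1):
    """InfoNCE-style loss: two slices form a positive pair when they come from the
    same volume and their slice indices differ by at most `max_gap`.

    emb:       (N, D) slice embeddings (e.g. pooled encoder features)
    slice_idx: (N,) slice position of each embedding within its volume
    vol_idx:   (N,) id of the volume each slice was drawn from
    """
    z = F.normalize(emb, dim=1)
    sim = z @ z.t() / tau                                   # (N, N) similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    same_vol = vol_idx[:, None] == vol_idx[None, :]
    close = (slice_idx[:, None] - slice_idx[None, :]).abs() <= max_gap
    pos = same_vol & close & ~eye                           # positive-pair mask
    sim = sim.masked_fill(eye, float('-inf'))               # ignore self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    has_pos = pos.any(dim=1)                                # anchors with a positive
    loss = -(log_prob[has_pos] * pos[has_pos]).sum(1) / pos[has_pos].sum(1)
    return loss.mean()


# toy usage: 8 slice embeddings from 2 volumes
emb = torch.randn(8, 32, requires_grad=True)
slice_idx = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])
vol_idx = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(nearby_slice_contrastive_loss(emb, slice_idx, vol_idx))
```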
- Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel Segmentation via Disentangling Representation Style Transfer and Collaborative Consistency Learning [3.9562534927482704]
We propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts.
Our framework achieves Dice scores close to the target-trained oracle both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-01-13T07:03:16Z)
- Causality-inspired Single-source Domain Generalization for Medical Image Segmentation [12.697945585457441]
We propose a simple data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples.
Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks.
2) We remove spurious correlations among objects in an image that might be taken by the network as domain-specific clues for making predictions, since such correlations may break on unseen domains.
arXiv Detail & Related papers (2021-11-24T14:45:17Z)
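The "family of randomly-weighted shallow networks" from the causality-inspired entry above suggests a simple augmentation: pass each training image through a freshly re-initialized shallow convolutional stack so that intensities and textures change while spatial structure is preserved, then train the segmenter on the result. The layer sizes, blending, and normalization below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


def random_shallow_augment(x, alpha=None, hidden=8, layers=2):
    """Map a (B, C, H, W) image batch through a randomly re-initialized shallow
    conv stack to randomize intensities/textures, then blend with the original."""
    c = x.shape[1]
    modules, ch = [], c
    for _ in range(layers):
        modules += [nn.Conv2d(ch, hidden, 3, padding=1), nn.LeakyReLU(0.2)]
        ch = hidden
    modules.append(nn.Conv2d(ch, c, 1))
    net = nn.Sequential(*modules).to(x.device)   # fresh random weights every call
    with torch.no_grad():
        out = net(x)
        # rescale to the input's statistics so the augmented image stays plausible
        out = (out - out.mean()) / (out.std() + 1e-6) * x.std() + x.mean()
        if alpha is None:
            alpha = torch.rand(1, device=x.device)  # random blending factor
        return alpha * out + (1 - alpha) * x


# toy usage: augment a batch of slices before a normal supervised training step
x = torch.randn(2, 1, 96, 96)
print(random_shallow_augment(x).shape)
```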
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between the source and target domains in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z)
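Entropy minimisation, named in the title of the "Adapt Everywhere" entry above, is simple enough to sketch: on unlabeled target images, add a loss that penalizes high-entropy (uncertain) pixel-wise predictions. The loss weight is a placeholder, and the paper's adversarial point-cloud and feature alignment components are not shown.

```python
import torch
import torch.nn.functional as F


def entropy_loss(logits, eps=1e-8):
    """Mean pixel-wise entropy of the softmax predictions, logits: (B, C, H, W)."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + eps)).sum(dim=1).mean()


# toy usage: an unlabeled target batch contributes only the entropy term
target_logits = torch.randn(2, 4, 64, 64, requires_grad=True)
loss = 0.01 * entropy_loss(target_logits)   # small weight, assumed
loss.backward()
print(loss.item())
```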
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
- Domain Adaptive Medical Image Segmentation via Adversarial Learning of Disease-Specific Spatial Patterns [6.298270929323396]
We propose an unsupervised domain adaptation framework for boosting image segmentation performance across multiple domains.
We make the architecture adaptive to new data by rejecting improbable segmentation patterns and implicitly learning through semantic and boundary information.
We demonstrate that recalibrating the deep networks on a few unlabeled images from the target domain improves the segmentation accuracy significantly.
arXiv Detail & Related papers (2020-01-25T13:48:02Z)