Multi-Domain Image Completion for Random Missing Input Data
- URL: http://arxiv.org/abs/2007.05534v1
- Date: Fri, 10 Jul 2020 16:38:48 GMT
- Title: Multi-Domain Image Completion for Random Missing Input Data
- Authors: Liyue Shen, Wentao Zhu, Xiaosong Wang, Lei Xing, John M. Pauly, Baris
Turkbey, Stephanie Anne Harmon, Thomas Hogue Sanford, Sherif Mehralivand,
Peter Choyke, Bradford Wood, Daguang Xu
- Abstract summary: Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities.
Due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources.
We propose a general approach to complete the random missing domain(s) data in real applications.
- Score: 17.53581223279953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-domain data are widely leveraged in vision applications taking
advantage of complementary information from different modalities, e.g., brain
tumor segmentation from multi-parametric magnetic resonance imaging (MRI).
However, due to possible data corruption and different imaging protocols, the
availability of images for each domain could vary amongst multiple data sources
in practice, which makes it challenging to build a universal model with a
varied set of input data. To tackle this problem, we propose a general approach
to complete the random missing domain(s) data in real applications.
Specifically, we develop a novel multi-domain image completion method that
utilizes a generative adversarial network (GAN) with a representational
disentanglement scheme to extract shared skeleton encoding and separate flesh
encoding across multiple domains. We further illustrate that the learned
representation in multi-domain image completion could be leveraged for
high-level tasks, e.g., segmentation, by introducing a unified framework
consisting of image completion and segmentation with a shared content encoder.
The experiments demonstrate consistent performance improvement on three
datasets for brain tumor segmentation, prostate segmentation, and facial
expression image completion respectively.
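The core idea of the abstract — factor each domain's image into a shared "skeleton" (content) code and a domain-specific "flesh" (style) code, then synthesize a missing domain from the shared content plus that domain's style — can be illustrated with a toy sketch. This is a deliberately simplified linear stand-in, not the paper's GAN architecture: the additive image model, the domain names, and all function names below are illustrative assumptions.

```python
import numpy as np

# Toy model (an assumption for illustration): image_d = content + style_d,
# where `content` is shared across domains and `style_d` is a per-domain offset.
STYLES = {"T1": 0.5, "T2": -0.3, "FLAIR": 1.2}  # hypothetical per-domain styles

def encode(image, domain):
    """Split an observed image into (shared content, domain style)."""
    style = STYLES[domain]
    content = image - style  # recover the shared skeleton encoding
    return content, style

def decode(content, domain):
    """Re-synthesize an image for any target domain from shared content."""
    return content + STYLES[domain]

def complete_missing(observed, missing_domains):
    """Impute missing domains by fusing content codes from available ones."""
    contents = [encode(img, d)[0] for d, img in observed.items()]
    shared = np.mean(contents, axis=0)  # all domains share one skeleton
    return {d: decode(shared, d) for d in missing_domains}

rng = np.random.default_rng(0)
content = rng.normal(size=(4, 4))
# Only T1 and T2 are available; FLAIR is the randomly missing domain.
observed = {d: content + s for d, s in STYLES.items() if d != "FLAIR"}
imputed = complete_missing(observed, ["FLAIR"])
print(np.allclose(imputed["FLAIR"], content + STYLES["FLAIR"]))  # True
```

In the paper this factorization is learned adversarially by a GAN with encoder/decoder networks rather than fixed offsets, and the shared content encoder is reused by the downstream segmentation branch.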
Related papers
- IGUANe: a 3D generalizable CycleGAN for multicenter harmonization of
brain MR images [0.0]
Deep learning methods for image translation have emerged as a solution for harmonizing MR images across sites.
In this study, we introduce IGUANe, an original 3D model that leverages the strengths of domain translation.
The model can be applied to any image, even from an unknown acquisition site.
arXiv Detail & Related papers (2024-02-05T17:38:49Z)
- Unsupervised Federated Domain Adaptation for Segmentation of MRI Images [20.206972068340843]
We develop a method for unsupervised federated domain adaptation using multiple annotated source domains.
Our approach enables the transfer of knowledge from several annotated source domains to adapt a model for effective use in an unannotated target domain.
arXiv Detail & Related papers (2024-01-02T00:31:41Z)
- Generalizable Medical Image Segmentation via Random Amplitude Mixup and Domain-Specific Image Restoration [17.507951655445652]
We present a novel generalizable medical image segmentation method.
To be specific, we design our approach as a multi-task paradigm by combining the segmentation model with a self-supervision domain-specific image restoration module.
We demonstrate the performance of our method on two public generalizable segmentation benchmarks in medical images.
arXiv Detail & Related papers (2022-08-08T03:56:20Z)
- High-Quality Pluralistic Image Completion via Code Shared VQGAN [51.7805154545948]
We present a novel framework for pluralistic image completion that can achieve both high quality and diversity at much faster inference speed.
Our framework is able to learn semantically-rich discrete codes efficiently and robustly, resulting in much better image reconstruction quality.
arXiv Detail & Related papers (2022-04-05T01:47:35Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal directly in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- SoloGAN: Multi-domain Multimodal Unpaired Image-to-Image Translation via a Single Generative Adversarial Network [4.7344504314446345]
We present a flexible and general SoloGAN model for efficient multimodal I2I translation among multiple domains with unpaired data.
In contrast to existing methods, the SoloGAN algorithm uses a single projection discriminator with an additional auxiliary classifier and shares the encoder and generator for all domains.
arXiv Detail & Related papers (2020-08-04T16:31:15Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.