Structure-Preserving Synthesis: MaskGAN for Unpaired MR-CT Translation
- URL: http://arxiv.org/abs/2307.16143v2
- Date: Tue, 1 Aug 2023 01:18:10 GMT
- Title: Structure-Preserving Synthesis: MaskGAN for Unpaired MR-CT Translation
- Authors: Minh Hieu Phan, Zhibin Liao, Johan W. Verjans, Minh-Son To
- Abstract summary: MaskGAN is a novel framework that enforces structural consistency by utilizing automatically extracted coarse masks.
Our approach employs a mask generator to outline anatomical structures and a content generator to synthesize CT contents that align with these structures.
- Score: 6.154777164692837
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Medical image synthesis is a challenging task due to the scarcity of paired
data. Several methods have applied CycleGAN to leverage unpaired data, but they
often generate inaccurate mappings that shift the anatomy. This problem is
further exacerbated when the images from the source and target modalities are
heavily misaligned. Recent methods have aimed to address this issue
by incorporating a supplementary segmentation network. Unfortunately, this
strategy requires costly and time-consuming pixel-level annotations. To
overcome this problem, this paper proposes MaskGAN, a novel and cost-effective
framework that enforces structural consistency by utilizing automatically
extracted coarse masks. Our approach employs a mask generator to outline
anatomical structures and a content generator to synthesize CT contents that
align with these structures. Extensive experiments demonstrate that MaskGAN
outperforms state-of-the-art synthesis methods on a challenging pediatric
dataset, where MR and CT scans are heavily misaligned due to rapid growth in
children. Specifically, MaskGAN excels in preserving anatomical structures
without the need for expert annotations. The code for this paper can be found
at https://github.com/HieuPhan33/MaskGAN.
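As a hedged illustration of the two-generator design described in the abstract, the sketch below pairs a mask generator with a content generator in PyTorch. The module names, layer choices, and shapes are assumptions for illustration only; the authors' actual implementation is at the repository above.

```python
# Minimal sketch of the two-generator idea from the abstract, assuming PyTorch.
# MaskGenerator / ContentGenerator are hypothetical stand-ins, not the authors'
# code (see https://github.com/HieuPhan33/MaskGAN for the real implementation).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class MaskGenerator(nn.Module):
    """Outlines coarse anatomical structure from an MR slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 1), nn.Sigmoid())
    def forward(self, mr):
        return self.net(mr)  # soft mask in [0, 1]

class ContentGenerator(nn.Module):
    """Synthesizes CT content conditioned on the MR slice and its mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(2, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 1), nn.Tanh())
    def forward(self, mr, mask):
        return self.net(torch.cat([mr, mask], dim=1))

mr = torch.randn(1, 1, 256, 256)   # one MR slice
mask = MaskGenerator()(mr)         # outline anatomical structure
ct = ContentGenerator()(mr, mask)  # CT content aligned with that structure
```

In the full framework this pair would sit inside a CycleGAN-style unpaired training loop, with the coarse masks enforcing structural consistency between source and synthesized images.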
Related papers
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise.
We demonstrate our strategy's superiority in downstream tasks compared to random masking.
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
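ColorMAE's masks come from filtering random noise; the snippet below is a hedged sketch of that idea using a Gaussian low-pass filter and a top-k threshold to hit a target masking ratio. The filter choice, 14x14 patch grid, and 75% ratio are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of "masks from filtered noise": smooth white noise, then mask
# the lowest-valued patches. Filter and ratio are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_mask(grid=14, mask_ratio=0.75, sigma=1.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.standard_normal((grid, grid))
    smooth = gaussian_filter(noise, sigma=sigma)  # spatially correlated noise
    k = int(grid * grid * mask_ratio)             # number of patches to mask
    thresh = np.sort(smooth.ravel())[k - 1]
    return smooth <= thresh                       # True = masked patch

mask = noise_mask()
print(mask.mean())  # ~0.75 of the patch grid is masked, in correlated blobs
```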
- Comprehensive Generative Replay for Task-Incremental Segmentation with Concurrent Appearance and Semantic Forgetting [49.87694319431288]
Generalist segmentation models are increasingly favored for diverse tasks involving various objects from different image sources.
We propose a Comprehensive Generative Replay (CGR) framework that restores appearance and semantic knowledge by synthesizing image-mask pairs.
Experiments on incremental tasks (cardiac, fundus and prostate segmentation) show its clear advantage for alleviating concurrent appearance and semantic forgetting.
arXiv Detail & Related papers (2024-06-28T10:05:58Z)
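The CGR entry hinges on generative replay; as a rough, assumption-laden illustration (the generator interface here is invented, not the paper's), synthesized image-mask pairs from an earlier-task generator are mixed into the current batch so the segmenter keeps seeing old-task appearance and semantics.

```python
# Hedged sketch of generative replay for incremental segmentation: mix real
# pairs from the current task with synthetic image-mask pairs replayed from a
# generator trained on earlier tasks. All names here are illustrative.
import torch

def replay_batch(current_images, current_masks, old_generator, n_replay):
    with torch.no_grad():
        z = torch.randn(n_replay, old_generator.latent_dim)  # assumed interface
        fake_images, fake_masks = old_generator(z)           # synthesized pairs
    images = torch.cat([current_images, fake_images], dim=0)
    masks = torch.cat([current_masks, fake_masks], dim=0)
    return images, masks  # train the segmenter on the mixed batch
```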
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images [3.5418498524791766]
This research develops a novel counterfactual inpainting approach (COIN).
COIN flips the predicted classification label from abnormal to normal by using a generative model.
The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia.
arXiv Detail & Related papers (2024-04-19T12:09:49Z)
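A hedged sketch of the counterfactual-inpainting idea: inpaint the image toward the "normal" class and take the changed region as a weak segmentation. The inpainting function and the threshold below are placeholders, not COIN's actual components.

```python
# Hedged sketch: derive a weak tumor mask as the region the generative model
# had to change to flip the image from "abnormal" to "normal". The inpainting
# model and the 0.1 threshold are illustrative assumptions.
import numpy as np

def counterfactual_mask(image, inpaint_to_normal, threshold=0.1):
    normal_version = inpaint_to_normal(image)  # counterfactual "healthy" image
    diff = np.abs(image - normal_version)      # what had to change
    return diff > threshold                    # weak segmentation mask
```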
- GuideGen: A Text-guided Framework for Joint CT Volume and Anatomical structure Generation [2.062999694458006]
We present GuideGen: a pipeline that jointly generates CT images and tissue masks for abdominal organs and colorectal cancer conditioned on a text prompt.
Our pipeline guarantees high fidelity and variability as well as exact alignment between generated CT volumes and tissue masks.
arXiv Detail & Related papers (2024-03-12T02:09:39Z)
- UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective, used jointly with feature reconstruction, to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z)
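The adaptive feature mask generator can be caricatured as masking node features with a probability tied to a significance score. The sketch below uses node degree as a stand-in significance measure and keeps significant nodes visible more often; both choices are assumptions, not UGMAE's learned generator.

```python
# Hedged sketch of adaptive node-feature masking. Degree as "significance" and
# the keep-significant-nodes policy are illustrative stand-ins for UGMAE's
# learned, adaptive mask generator.
import numpy as np

def adaptive_node_mask(adjacency, base_rate=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    degree = adjacency.sum(axis=1)
    significance = degree / degree.max()             # in [0, 1]
    p_mask = base_rate * (1.0 - 0.5 * significance)  # significant nodes masked less
    return rng.random(len(degree)) < p_mask          # True = mask node features
```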
- CL-MAE: Curriculum-Learned Masked Autoencoders [49.24994655813455]
We propose a curriculum learning approach that updates the masking strategy to continually increase the complexity of the self-supervised reconstruction task.
We train our Curriculum-Learned Masked Autoencoder (CL-MAE) on ImageNet and show that it exhibits superior representation learning capabilities compared to MAE.
arXiv Detail & Related papers (2023-08-31T09:13:30Z)
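One simple way to read "continually increase the complexity" is a masking-ratio schedule that ramps up over training. CL-MAE actually learns its masking strategy, so the linear ramp below is only an assumed simplification of the curriculum idea.

```python
# Hedged sketch: ramp the masking ratio over training so reconstruction gets
# harder. This fixed linear schedule is a simplified stand-in for CL-MAE's
# learned curriculum, not the paper's method.
def mask_ratio_schedule(epoch, total_epochs, start=0.5, end=0.85):
    t = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + t * (end - start)

for epoch in [0, 50, 99]:
    print(epoch, round(mask_ratio_schedule(epoch, 100), 3))
# 0 -> 0.5, 50 -> ~0.677, 99 -> 0.85
```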
- Focus on Content not Noise: Improving Image Generation for Nuclei Segmentation by Suppressing Steganography in CycleGAN [1.564260789348333]
We propose to remove the hidden shortcut information, called steganography, from generated images by employing low-pass filtering based on the discrete cosine transform (DCT).
We achieve an improvement of 5.4 percentage points in the F1-score compared to a vanilla CycleGAN.
arXiv Detail & Related papers (2023-08-03T13:58:37Z)
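DCT-based low-pass filtering is a standard operation; the sketch below zeroes the high-frequency DCT coefficients of a generated image, which is the general mechanism the paper invokes. The cutoff fraction here is an assumption, not the paper's tuned value.

```python
# Hedged sketch of DCT low-pass filtering to scrub high-frequency "hidden"
# signal from a generated image. The keep=0.5 cutoff is an assumption.
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(image, keep=0.5):
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * keep), : int(w * keep)] = 1.0  # keep low frequencies only
    return idctn(coeffs * mask, norm="ortho")

img = np.random.rand(64, 64)
filtered = dct_lowpass(img)  # high-frequency content suppressed
```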
- Less is More: Unsupervised Mask-guided Annotated CT Image Synthesis with Minimum Manual Segmentations [2.1785903900600316]
We propose a novel strategy for medical image synthesis, namely Unsupervised Mask (UM)-guided synthesis.
UM-guided synthesis provided high-quality synthetic images with significantly higher fidelity, variety, and utility.
arXiv Detail & Related papers (2023-03-19T20:30:35Z)
- GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds [72.60362979456035]
Masked Autoencoders (MAE) are challenging to explore in large-scale 3D point clouds.
We propose a Generative Decoder for MAE (GD-MAE) that automatically merges the surrounding context.
We demonstrate the efficacy of the proposed method on the large-scale KITTI and ONCE benchmarks.
arXiv Detail & Related papers (2022-12-06T14:32:55Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy [4.872960046536882]
We introduce a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours.
We construct a dual-pathway generator for the anatomical image and label, trained in a cycle-consistent setup and constrained by an independent, pretrained segmentor.
The generated images yield significant quantitative improvement compared to existing methods.
arXiv Detail & Related papers (2021-04-22T11:18:17Z)
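The METGAN summary combines cycle consistency with a frozen, pretrained segmentor as a constraint. As a hedged sketch of how such a composite objective is commonly assembled (the weights, names, and interfaces below are assumptions, not the paper's implementation):

```python
# Hedged sketch of a cycle-consistency objective constrained by a frozen,
# pretrained segmentor, as the METGAN summary describes. Weights and module
# interfaces are illustrative assumptions.
import torch
import torch.nn.functional as F

def generator_loss(real_a, g_ab, g_ba, segmentor, lambda_cyc=10.0, lambda_seg=1.0):
    fake_b = g_ab(real_a)               # translate A -> B
    recon_a = g_ba(fake_b)              # cycle back B -> A
    cycle = F.l1_loss(recon_a, real_a)  # cycle-consistency term
    with torch.no_grad():
        target_seg = segmentor(real_a)  # labels from the frozen segmentor
    seg = F.l1_loss(segmentor(fake_b), target_seg)  # anatomy must survive
    return lambda_cyc * cycle + lambda_seg * seg
```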
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.