Structure-Preserving Synthesis: MaskGAN for Unpaired MR-CT Translation
- URL: http://arxiv.org/abs/2307.16143v2
- Date: Tue, 1 Aug 2023 01:18:10 GMT
- Title: Structure-Preserving Synthesis: MaskGAN for Unpaired MR-CT Translation
- Authors: Minh Hieu Phan, Zhibin Liao, Johan W. Verjans, Minh-Son To
- Abstract summary: MaskGAN is a novel framework that enforces structural consistency by utilizing automatically extracted coarse masks.
Our approach employs a mask generator to outline anatomical structures and a content generator to synthesize CT contents that align with these structures.
- Score: 6.154777164692837
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Medical image synthesis is a challenging task due to the scarcity of paired
data. Several methods have applied CycleGAN to leverage unpaired data, but they
often generate inaccurate mappings that shift the anatomy. This problem is
further exacerbated when the images from the source and target modalities are
heavily misaligned. Recent methods have aimed to address this issue
by incorporating a supplementary segmentation network. Unfortunately, this
strategy requires costly and time-consuming pixel-level annotations. To
overcome this problem, this paper proposes MaskGAN, a novel and cost-effective
framework that enforces structural consistency by utilizing automatically
extracted coarse masks. Our approach employs a mask generator to outline
anatomical structures and a content generator to synthesize CT contents that
align with these structures. Extensive experiments demonstrate that MaskGAN
outperforms state-of-the-art synthesis methods on a challenging pediatric
dataset, where MR and CT scans are heavily misaligned due to rapid growth in
children. Specifically, MaskGAN excels in preserving anatomical structures
without the need for expert annotations. The code for this paper can be found
at https://github.com/HieuPhan33/MaskGAN.
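The abstract's central idea — enforcing structural consistency via automatically extracted coarse masks instead of expert annotations — can be illustrated with a minimal sketch. This is our own stand-in, not the authors' pipeline: intensity thresholding plays the role of automatic coarse-mask extraction, and a soft Dice distance plays the role of the structural consistency term (both function names and the `percentile` parameter are illustrative; the actual implementation is in the authors' repository).

```python
import numpy as np

def coarse_mask(image, percentile=60):
    """Threshold an intensity image to obtain a coarse foreground mask.

    A crude stand-in for automatic coarse-mask extraction: no expert
    annotation is needed, only an intensity cutoff.
    """
    thresh = np.percentile(image, percentile)
    return (image >= thresh).astype(np.uint8)

def mask_consistency_loss(mask_src, mask_syn):
    """Soft Dice distance between the source-image mask and the mask of
    the synthesized image — one common way to penalize anatomy shift."""
    inter = np.sum(mask_src * mask_syn)
    denom = np.sum(mask_src) + np.sum(mask_syn)
    return 1.0 - 2.0 * inter / (denom + 1e-8)
```

In a CycleGAN-style setup, a term like `mask_consistency_loss(coarse_mask(mr), coarse_mask(fake_ct))` would be added to the generator objective: identical masks give a loss near 0, disjoint masks a loss near 1.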
Related papers
- SimGen: A Diffusion-Based Framework for Simultaneous Surgical Image and Segmentation Mask Generation [1.9393128408121891]
While generative AI models such as text-to-image generators can alleviate data scarcity, incorporating spatial annotations, such as segmentation masks, is crucial for precision-driven surgical applications, simulation, and education.
This study introduces both a novel task and method, SimGen, for Simultaneous Image and Mask Generation.
SimGen is a diffusion model based on the DDPM framework and Residual U-Net, designed to jointly generate high-fidelity surgical images and their corresponding segmentation masks.
arXiv Detail & Related papers (2025-01-15T18:48:38Z)
- HisynSeg: Weakly-Supervised Histopathological Image Segmentation via Image-Mixing Synthesis and Consistency Regularization [15.13875300007579]
HisynSeg is a weakly-supervised semantic segmentation framework based on image-mixing synthesis and consistency regularization.
HisynSeg achieves a state-of-the-art performance on three datasets.
arXiv Detail & Related papers (2024-12-30T13:10:48Z)
- Mask Factory: Towards High-quality Synthetic Data Generation for Dichotomous Image Segmentation [70.95380821618711]
Dichotomous Image Segmentation (DIS) tasks require highly precise annotations.
Current generative models and techniques struggle with the issues of scene deviations, noise-induced errors, and limited training sample variability.
We introduce a novel approach, which provides a scalable solution for generating diverse and precise datasets.
arXiv Detail & Related papers (2024-12-26T06:37:25Z)
- MRGen: Diffusion-based Controllable Data Engine for MRI Segmentation towards Unannotated Modalities [59.61465292965639]
This paper investigates a new paradigm for leveraging generative models in medical applications.
We propose a diffusion-based data engine, termed MRGen, which enables generation conditioned on text prompts and masks.
arXiv Detail & Related papers (2024-12-04T16:34:22Z)
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise.
We demonstrate our strategy's superiority in downstream tasks compared to random masking.
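A rough illustration of this idea — ours, not the authors' implementation — is to smooth random noise with a box filter and threshold it at the target masking ratio, yielding spatially correlated binary masks rather than i.i.d. random ones (the `ratio` and `kernel` parameters are illustrative):

```python
import numpy as np

def noise_filtered_mask(shape, ratio=0.75, kernel=3, seed=None):
    """Binary mask from low-pass-filtered random noise.

    Smoothing the noise before thresholding produces clustered
    visible/masked regions, in the spirit of ColorMAE's
    data-independent masking; exact filter design differs.
    """
    rng = np.random.default_rng(seed)
    noise = rng.random(shape)
    pad = kernel // 2
    padded = np.pad(noise, pad, mode="edge")
    # simple box filter: average over a kernel x kernel neighborhood
    smooth = sum(
        padded[i:i + shape[0], j:j + shape[1]]
        for i in range(kernel) for j in range(kernel)
    ) / kernel**2
    cutoff = np.quantile(smooth, ratio)
    return (smooth >= cutoff).astype(np.uint8)  # 1 = visible patch
```

With `ratio=0.75`, roughly 25% of patches remain visible, matching the high masking ratios typical of MAE training.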
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
- Comprehensive Generative Replay for Task-Incremental Segmentation with Concurrent Appearance and Semantic Forgetting [49.87694319431288]
Generalist segmentation models are increasingly favored for diverse tasks involving various objects from different image sources.
We propose a Comprehensive Generative Replay (CGR) framework that restores appearance and semantic knowledge by synthesizing image-mask pairs.
Experiments on incremental tasks (cardiac, fundus and prostate segmentation) show its clear advantage for alleviating concurrent appearance and semantic forgetting.
arXiv Detail & Related papers (2024-06-28T10:05:58Z)
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images [3.5418498524791766]
This research develops a novel counterfactual inpainting approach (COIN).
COIN flips the predicted classification label from abnormal to normal by using a generative model.
The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia.
arXiv Detail & Related papers (2024-04-19T12:09:49Z)
- UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z)
- Focus on Content not Noise: Improving Image Generation for Nuclei Segmentation by Suppressing Steganography in CycleGAN [1.564260789348333]
We propose to remove the hidden shortcut information, called steganography, from generated images by employing a low pass filtering based on the DCT.
We achieve an improvement of 5.4 percentage points in the F1-score compared to a vanilla CycleGAN.
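The filtering step can be sketched as follows — our own illustration, not the paper's code: zeroing high-frequency DCT coefficients removes the high-frequency signal in which CycleGAN can hide steganographic shortcuts (the `keep` cutoff is a hypothetical parameter; the image is assumed square):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0, :] = np.sqrt(1 / n)
    return m

def dct_lowpass(img, keep):
    """Low-pass filter a square image in the DCT domain by keeping
    only the first `keep` rows/columns of coefficients."""
    d = dct_matrix(img.shape[0])
    coeffs = d @ img @ d.T   # 2D DCT-II
    coeffs[keep:, :] = 0.0   # drop high vertical frequencies
    coeffs[:, keep:] = 0.0   # drop high horizontal frequencies
    return d.T @ coeffs @ d  # inverse transform (orthonormal)
```

Because the transform matrix is orthonormal, `keep = n` reconstructs the image exactly; smaller `keep` discards progressively more high-frequency detail.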
arXiv Detail & Related papers (2023-08-03T13:58:37Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy [4.872960046536882]
We introduce a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours.
We construct a dual-pathway generator for the anatomical image and label, trained in a cycle-consistent setup and constrained by an independent, pretrained segmentor.
The generated images yield significant quantitative improvement compared to existing methods.
arXiv Detail & Related papers (2021-04-22T11:18:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.