Generator Versus Segmentor: Pseudo-healthy Synthesis
- URL: http://arxiv.org/abs/2009.05722v3
- Date: Thu, 15 Jul 2021 13:59:39 GMT
- Title: Generator Versus Segmentor: Pseudo-healthy Synthesis
- Authors: Zhang Yunlong, Li Chenxin, Lin Xin, Sun Liyan, Zhuang Yihong, Huang
Yue, Ding Xinghao, Liu Xiaoqing, Yu Yizhou
- Abstract summary: We propose a novel adversarial training regime, Generator versus Segmentor (GVS), to alleviate this trade-off.
We also propose a new metric to measure how healthy the synthetic images look.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the problem of pseudo-healthy synthesis that is
defined as synthesizing a subject-specific pathology-free image from a
pathological one. Recent approaches based on Generative Adversarial Networks
(GANs) have been developed for this task. However, these methods inevitably
face a trade-off between preserving subject-specific identity and generating
healthy-looking appearances. To overcome this challenge, we propose a
novel adversarial training regime, Generator versus Segmentor (GVS), to
alleviate this trade-off through a divide-and-conquer strategy. We further
address the deteriorating generalization of the segmentor during training by
developing a pixel-wise weighted loss that mutes well-transformed pixels.
Moreover, we propose a new metric to measure how healthy
the synthetic images look. The qualitative and quantitative experiments on the
public dataset BraTS demonstrate that the proposed method outperforms the
existing methods. We also verify the effectiveness of our method on the LiTS
dataset. Our implementation and pre-trained networks are publicly
available at https://github.com/Au3C2/Generator-Versus-Segmentor.
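The pixel-wise weighted loss described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the thresholding rule for deciding which pixels count as "well-transformed", and the use of a squared residual are all assumptions made for the sketch.

```python
import numpy as np

def pixelwise_weighted_loss(seg_pred, healthy_target, threshold=0.1):
    """Hypothetical sketch of a pixel-wise weighted loss.

    Pixels whose prediction is already close to the healthy target
    ("well-transformed") receive weight 0 and are muted, so the loss
    concentrates on the remaining pathological pixels.
    """
    residual = np.abs(seg_pred - healthy_target)
    # Mute well-transformed pixels (assumed rule: residual below a threshold).
    weights = (residual > threshold).astype(float)
    per_pixel = weights * residual ** 2
    # Average only over the pixels that still contribute.
    return per_pixel.sum() / max(weights.sum(), 1.0)
```

In an alternating GVS-style training loop, the segmentor and generator would each minimize such a loss in turn; this sketch only shows the muting mechanism itself.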
Related papers
- Generator Born from Classifier [66.56001246096002]
We aim to reconstruct an image generator, without relying on any data samples.
We propose a novel learning paradigm, in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied.
arXiv Detail & Related papers (2023-12-05T03:41:17Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
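The APA idea summarized above can be sketched roughly as follows: generated samples are occasionally presented to the discriminator as if they were real, with a deception probability that adapts to an overfitting signal. This is a hedged sketch only; the function names, the step size, the target value, and the exact overfitting heuristic are assumptions, not the paper's implementation.

```python
import numpy as np

def update_p(p, lambda_r, target=0.6, step=0.01):
    """Adapt the deception probability p (assumed rule for this sketch).

    lambda_r is an overfitting indicator for the discriminator: when it
    exceeds the target, p is raised so more fakes are passed off as real;
    otherwise p is lowered. p stays clamped to [0, 1].
    """
    p += step if lambda_r > target else -step
    return float(np.clip(p, 0.0, 1.0))

def deceive_batch(real, fake, p, rng):
    """With probability p per sample, swap a real sample for a generated one
    before the batch is shown to the discriminator as 'real'."""
    mask = rng.random(len(real)) < p
    out = real.copy()
    out[mask] = fake[mask]
    return out, mask
```

The adaptive probability is what distinguishes this from naive label flipping: deception is applied only as strongly as the overfitting signal warrants.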
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Pattern Detection in the Activation Space for Identifying Synthesized Content [8.365235325634876]
Generative Adversarial Networks (GANs) have recently achieved unprecedented success in photo-realistic image synthesis from low-dimensional random noise.
The ability to synthesize high-quality content at a large scale brings potential risks as the generated samples may lead to misinformation that can create severe social, political, health, and business hazards.
We propose SubsetGAN to identify generated content by detecting a subset of anomalous node-activations in the inner layers of pre-trained neural networks.
arXiv Detail & Related papers (2021-05-26T11:28:36Z)
- METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy [4.872960046536882]
We introduce a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours.
We construct a dual-pathway generator, for the anatomical image and label, trained in a cycle-consistent setup, constrained by an independent, pretrained segmentor.
The generated images yield significant quantitative improvement compared to existing methods.
arXiv Detail & Related papers (2021-04-22T11:18:17Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the dice score by 1% for the pancreas and 2% for spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.