3D Segmentation Guided Style-based Generative Adversarial Networks for
PET Synthesis
- URL: http://arxiv.org/abs/2205.08887v1
- Date: Wed, 18 May 2022 12:19:17 GMT
- Title: 3D Segmentation Guided Style-based Generative Adversarial Networks for
PET Synthesis
- Authors: Yang Zhou, Zhiwen Yang, Hui Zhang, Eric I-Chao Chang, Yubo Fan, Yan Xu
- Abstract summary: Potential radioactive hazards in full-dose positron emission tomography (PET) imaging remain a concern.
It is of great interest to translate low-dose PET images into full-dose.
We propose a novel segmentation guided style-based generative adversarial network (SGSGAN) for PET synthesis.
- Score: 11.615097017030843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Potential radioactive hazards in full-dose positron emission tomography (PET)
imaging remain a concern, whereas the quality of low-dose images is
undesirable for clinical use. It is therefore of great interest to translate low-dose
PET images into full-dose ones. Previous studies based on deep learning methods
usually extract hierarchical features directly for reconstruction. We observe
that each feature carries a different importance and should be weighted
accordingly, so that subtle information can be captured by the neural network.
Furthermore, the synthesis on some regions of interest is important in some
applications. Here we propose a novel segmentation guided style-based
generative adversarial network (SGSGAN) for PET synthesis. (1) We put forward a
style-based generator employing style modulation, which specifically controls
the hierarchical features in the translation process, to generate images with
more realistic textures. (2) We adopt a task-driven strategy that couples a
segmentation task with a generative adversarial network (GAN) framework to
improve the translation performance. Extensive experiments show the superiority
of our overall framework in PET synthesis, especially on those regions of
interest.
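The abstract does not give implementation details, but "style modulation" in the generator most likely refers to the weight-modulation technique popularized by StyleGAN2, in which a learned style vector rescales the convolution kernel per input channel (optionally followed by demodulation to keep activation statistics stable). A minimal NumPy sketch, purely illustrative and not the authors' code:

```python
import numpy as np

def style_modulated_conv(x, weight, style, demodulate=True, eps=1e-8):
    """Illustrative style modulation (StyleGAN2-style).
    x:      (C_in, H, W) input feature map
    weight: (C_out, C_in, k, k) convolution kernel
    style:  (C_in,) per-channel scales from a learned style mapping
    """
    # Modulate: scale each input channel of the kernel by the style vector,
    # so the style controls how strongly each feature contributes.
    w = weight * style[None, :, None, None]
    if demodulate:
        # Demodulate: normalize each output filter to unit L2 norm.
        norm = np.sqrt((w ** 2).sum(axis=(1, 2, 3), keepdims=True) + eps)
        w = w / norm
    # Naive valid cross-correlation, for clarity rather than speed.
    c_out, _, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = (w[o] * x[:, i:i + k, j:j + k]).sum()
    return out
```

In SGSGAN this kind of modulation would let the generator reweight hierarchical features during translation; the task-driven coupling with the segmentation branch is a separate training-loss design not shown here.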
Related papers
- Image2Points:A 3D Point-based Context Clusters GAN for High-Quality PET
Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z) - PET Synthesis via Self-supervised Adaptive Residual Estimation
Generative Adversarial Network [14.381830012670969]
Recent methods that generate high-quality PET images from low-dose counterparts represent the state of the art in low-to-high image recovery.
To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN).
SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.
arXiv Detail & Related papers (2023-10-24T06:43:56Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT Images [22.455833806331384]
This paper introduces an innovative methodology for producing high-quality 3D lung CT images guided by textual information.
Current state-of-the-art approaches are limited to low-resolution outputs and underutilize radiology reports' abundant information.
arXiv Detail & Related papers (2023-10-05T14:16:22Z) - CG-3DSRGAN: A classification guided 3D generative adversarial network
for image quality recovery from low-dose PET images [10.994223928445589]
High radioactivity caused by the injected tracer dose is a major concern in PET imaging.
Reducing the dose leads to inadequate image quality for diagnostic practice.
CNN-based methods have been developed for high-quality PET synthesis from low-dose counterparts.
arXiv Detail & Related papers (2023-04-03T05:39:02Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - PhysFormer: Facial Video-based Physiological Measurement with Temporal
Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited temporal receptive fields.
In this paper, we propose the PhysFormer, an end-to-end video transformer based architecture.
arXiv Detail & Related papers (2021-11-23T18:57:11Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Bidirectional Mapping Generative Adversarial Networks for Brain MR to
PET Synthesis [29.40385887130174]
We propose a 3D end-to-end synthesis network, called Bidirectional Mapping Generative Adversarial Networks (BMGAN)
The proposed method can synthesize the perceptually realistic PET images while preserving the diverse brain structures of different subjects.
arXiv Detail & Related papers (2020-08-08T09:27:48Z) - Automatic Ischemic Stroke Lesion Segmentation from Computed Tomography
Perfusion Images by Image Synthesis and Attention-Based Deep Neural Networks [15.349968422713218]
Stroke lesion segmentation is important for accurate diagnosis of stroke in acute care units.
It is challenged by low image contrast and resolution of the perfusion parameter maps.
We propose a framework based on pseudo-diffusion-weighted images synthesized from perfusion parameter maps to obtain better image quality.
arXiv Detail & Related papers (2020-07-07T09:19:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.