Sketch guided and progressive growing GAN for realistic and editable ultrasound image synthesis
- URL: http://arxiv.org/abs/2204.06929v1
- Date: Thu, 14 Apr 2022 12:50:18 GMT
- Title: Sketch guided and progressive growing GAN for realistic and editable ultrasound image synthesis
- Authors: Jiamin Liang, Xin Yang, Yuhao Huang, Haoming Li, Shuangchi He, Xindi Hu, Zejian Chen, Wufeng Xue, Jun Cheng, Dong Ni
- Abstract summary: We propose a generative adversarial network (GAN) based image synthesis framework.
We present the first work that can synthesize realistic B-mode US images at high resolution with customized texture-editing features.
In addition, a feature loss is proposed to minimize the difference between the high-level features of the generated and real images.
- Score: 12.32829386817706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) imaging is widely used for anatomical structure inspection in
clinical diagnosis. The training of new sonographers and deep learning based
algorithms for US image analysis usually requires a large amount of data.
However, obtaining and labeling large-scale US imaging data are not easy tasks,
especially for diseases with low incidence. Realistic US image synthesis can
alleviate this problem to a great extent. In this paper, we propose a
generative adversarial network (GAN) based image synthesis framework. Our main
contributions include: 1) we present the first work that can synthesize
realistic B-mode US images at high resolution with customized texture-editing
features; 2) to enhance structural details of generated images, we propose to
introduce auxiliary sketch guidance into a conditional GAN. We superpose the
edge sketch onto the object mask and use the composite mask as the network
input (a minimal compositing sketch follows the abstract); 3) to generate
high-resolution US images, we adopt a progressive training strategy that
gradually generates high-resolution images from low-resolution ones. In
addition, a feature loss is proposed to minimize the difference between the
high-level features of the generated and real images, which further improves
the quality of generated images (sketches of the progressive fade-in and
feature loss also follow the abstract); 4) the proposed US image
synthesis method is quite universal and can be generalized to US images of
anatomical structures beyond the three tested in our study (lung, hip joint,
and ovary); 5) extensive experiments on three large US
image datasets are conducted to validate our method. Ablation studies,
customized texture editing, user studies, and segmentation tests demonstrate
promising results of our method in synthesizing realistic US images.
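
The compositing step in contribution 2 can be illustrated directly: an edge sketch is superposed onto the object mask, and the composite serves as the conditional GAN's input. Below is a minimal sketch of that step; the choice of Canny as the edge extractor and the label layout are assumptions, since the abstract does not specify them.

```python
# Minimal sketch of the composite-mask input (contribution 2).
# Assumptions: Canny as the edge extractor and a "mask labels + one extra
# sketch label" layout; the abstract only states that the edge sketch is
# superposed onto the object mask.
import cv2
import numpy as np

def composite_mask(us_image: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Superpose an edge sketch onto the object mask to form the network input."""
    # Hypothetical thresholds; any edge extractor could stand in here.
    edges = cv2.Canny(us_image, threshold1=50, threshold2=150)
    composite = object_mask.copy()
    # Overwrite edge pixels with a dedicated sketch label (max label + 1).
    composite[edges > 0] = object_mask.max() + 1
    return composite
```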
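The progressive training strategy of contribution 3 follows the familiar growing recipe: the networks are first trained at a low resolution, and new higher-resolution layers are introduced gradually. The abstract gives no schedule, so the sketch below shows only the core fade-in blend, assuming a PyTorch setup with a 2x upsampling per stage.

```python
# Minimal sketch of the progressive fade-in blend. The 2x growth factor,
# nearest-neighbor upsampling, and linear alpha ramp are assumptions.
import torch
import torch.nn.functional as F

def fade_in(prev_stage_out: torch.Tensor, new_stage_out: torch.Tensor,
            alpha: float) -> torch.Tensor:
    """Blend the upsampled previous-stage output with the new stage's output.

    alpha ramps from 0 to 1 over the course of the new stage's training,
    so the higher-resolution layers are introduced gradually.
    """
    upsampled = F.interpolate(prev_stage_out, scale_factor=2, mode="nearest")
    return (1.0 - alpha) * upsampled + alpha * new_stage_out
```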
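The feature loss compares high-level features of generated and real images. The summary does not say which network supplies those features, so the sketch below uses a frozen ImageNet-pretrained VGG16 as a stand-in extractor with an L1 distance.

```python
# Minimal sketch of a high-level feature loss. VGG16, the layer cut-off,
# and the L1 distance are stand-in assumptions; the paper's exact feature
# network is not specified in this summary.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class FeatureLoss(nn.Module):
    def __init__(self, layer_idx: int = 16):
        super().__init__()
        self.extractor = vgg16(weights="IMAGENET1K_V1").features[:layer_idx].eval()
        for p in self.extractor.parameters():
            p.requires_grad_(False)  # keep the feature extractor frozen

    def forward(self, generated: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
        # B-mode US images are single-channel; replicate to VGG's 3 channels.
        if generated.shape[1] == 1:
            generated, real = generated.repeat(1, 3, 1, 1), real.repeat(1, 3, 1, 1)
        return F.l1_loss(self.extractor(generated), self.extractor(real))
```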
Related papers
- MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT Images [22.455833806331384]
This paper introduces an innovative methodology for producing high-quality 3D lung CT images guided by textual information.
Current state-of-the-art approaches are limited to low-resolution outputs and underutilize the abundant information in radiology reports.
arXiv Detail & Related papers (2023-10-05T14:16:22Z)
- LOTUS: Learning to Optimize Task-based US representations [39.81131738128329]
Anatomical segmentation of organs in ultrasound images is essential to many clinical applications.
Existing deep neural networks require a large amount of labeled data for training in order to achieve clinically acceptable performance.
In this paper, we propose a novel approach for learning to optimize task-based ultrasound image representations.
arXiv Detail & Related papers (2023-07-29T16:29:39Z)
- MRIS: A Multi-modal Retrieval Approach for Image Synthesis on Diverse Modalities [19.31577453889188]
We develop an approach based on multi-modal metric learning to synthesize images of diverse modalities.
We test our approach by synthesizing cartilage thickness maps obtained from 3D magnetic resonance (MR) images using 2D radiographs.
arXiv Detail & Related papers (2023-03-17T20:58:55Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets covering different experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Multi-Texture GAN: Exploring the Multi-Scale Texture Translation for Brain MR Images [1.9163481966968943]
A significant percentage of existing algorithms cannot explicitly exploit and preserve texture details from target scanners.
In this paper, we design a multi-scale texture transfer to enrich the reconstructed images with finer details.
Our method achieves superior results over state-of-the-art methods in both inter-protocol and inter-scanner translation.
arXiv Detail & Related papers (2021-02-14T19:14:06Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Synthesis and Edition of Ultrasound Images via Sketch Guided Progressive Growing GANs [16.31231328779202]
In this paper, we devise a new framework for US image synthesis.
We first adopt a sketch generative adversarial network (Sgan) to introduce a background sketch upon the object mask.
With enriched sketch cues, Sgan can generate realistic US images with editable and fine-grained structure details.
arXiv Detail & Related papers (2020-04-01T04:24:01Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.