Synthesis and Edition of Ultrasound Images via Sketch Guided Progressive
Growing GANs
- URL: http://arxiv.org/abs/2004.00226v1
- Date: Wed, 1 Apr 2020 04:24:01 GMT
- Title: Synthesis and Edition of Ultrasound Images via Sketch Guided Progressive
Growing GANs
- Authors: Jiamin Liang, Xin Yang, Haoming Li, Yi Wang, Manh The Van, Haoran Dou,
Chaoyu Chen, Jinghui Fang, Xiaowen Liang, Zixin Mai, Guowen Zhu, Zhiyi Chen,
Dong Ni
- Abstract summary: In this paper, we devise a new framework for US image synthesis.
We first adopt a sketch generative adversarial network (Sgan) to introduce a background sketch upon the object mask.
With enriched sketch cues, Sgan can generate realistic US images with editable and fine-grained structure details.
- Score: 16.31231328779202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) is widely accepted in clinical practice for anatomical
structure inspection. However, lacking resources to practice US scanning, novices
often struggle to learn the operation skills. Also, in the deep learning era,
automated US image analysis is limited by the lack of annotated samples.
Efficiently synthesizing realistic, editable and high-resolution US images can
address both problems. The task is challenging, and previous methods could only
partially accomplish it. In this paper, we devise a new framework for US image
synthesis. Specifically, we first adopt a sketch generative adversarial
network (Sgan) to introduce a background sketch upon the object mask in a
conditioned generative adversarial network. With enriched sketch cues, Sgan can
generate realistic US images with editable and fine-grained structure details.
Although effective, Sgan struggles to generate high-resolution US images. To
achieve this, we further implant Sgan into a progressive growing scheme
(PGSgan). By smoothly growing both the generator and the discriminator, PGSgan
gradually synthesizes US images from low to high resolution. By synthesizing
ovary and follicle US images, our extensive perceptual evaluation, user study
and segmentation results demonstrate the promising efficacy and efficiency of
the proposed PGSgan.
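The "smooth growing" described above is the standard progressive-growing fade-in: when a new, higher-resolution layer is added, its output is blended with the upsampled output of the previous stage using a weight that ramps from 0 to 1 during training. Below is a minimal NumPy sketch of that blending step only; it is an illustration of the general technique, not the authors' implementation, and all function names (`upsample2x`, `fade_in`) are hypothetical.

```python
import numpy as np

def upsample2x(img):
    # Nearest-neighbor 2x upsampling: (H, W) -> (2H, 2W).
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(low_res_out, new_layer_out, alpha):
    # Smoothly blend the upsampled lower-resolution output with the
    # freshly added higher-resolution layer's output. alpha ramps
    # from 0 to 1 over training, so the new layer is introduced
    # gradually instead of abruptly destabilizing the network.
    return (1.0 - alpha) * upsample2x(low_res_out) + alpha * new_layer_out

# Toy demonstration: growing from a 4x4 stage to an 8x8 stage.
rng = np.random.default_rng(0)
low = rng.random((4, 4))    # output of the already-trained 4x4 stage
high = rng.random((8, 8))   # output of the newly added 8x8 layer

for alpha in (0.0, 0.5, 1.0):
    blended = fade_in(low, high, alpha)
    print(alpha, blended.shape)  # all blended outputs are 8x8
```

At alpha = 0 the network still behaves like the old low-resolution model; at alpha = 1 the new layer has fully taken over. The same blending is applied symmetrically on the discriminator side in progressive-growing schemes.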
Related papers
- Diffusion-based generation of Histopathological Whole Slide Images at a
Gigapixel scale [10.481781668319886]
Synthetic Whole Slide Images (WSIs) can augment training datasets to enhance the performance of many computational applications.
No existing deep-learning-based method generates WSIs at their typically high resolutions.
We present a novel coarse-to-fine sampling scheme to tackle image generation of high-resolution WSIs.
arXiv Detail & Related papers (2023-11-14T14:33:39Z) - Generalizable Synthetic Image Detection via Language-guided Contrastive
Learning [22.4158195581231]
The malevolent use of synthetic images, such as disseminating fake news or creating fake profiles, raises significant concerns about image authenticity.
We propose a simple yet effective synthetic image detection method based on language-guided contrastive learning and a new formulation of the detection problem.
It is shown that our proposed LanguAge-guided SynThEsis Detection (LASTED) model achieves much improved generalizability to unseen image generation models.
arXiv Detail & Related papers (2023-05-23T08:13:27Z) - Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging [40.72047687523214]
We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps.
Our proposed method leverages a ray-tracing-based neural rendering for novel view US synthesis.
arXiv Detail & Related papers (2023-01-25T11:02:09Z) - Frido: Feature Pyramid Diffusion for Complex Scene Image Synthesis [77.23998762763078]
We present Frido, a Feature Pyramid Diffusion model performing a multi-scale coarse-to-fine denoising process for image synthesis.
Our model decomposes an input image into scale-dependent vector quantized features, followed by a coarse-to-fine gating for producing image output.
We conduct extensive experiments over various unconditioned and conditional image generation tasks, ranging from text-to-image synthesis, layout-to-image, scene-graph-to-image, to label-to-image.
arXiv Detail & Related papers (2022-08-29T17:37:29Z) - Sketch guided and progressive growing GAN for realistic and editable
ultrasound image synthesis [12.32829386817706]
We propose a generative adversarial network (GAN) based image synthesis framework.
We present the first work that can synthesize realistic B-mode US images with high-resolution and customized texture editing features.
In addition, a feature loss is proposed to minimize the difference of high-level features between the generated and real images.
arXiv Detail & Related papers (2022-04-14T12:50:18Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly do semantic segmentation, content reconstruction, along with a coarse-to-fine grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - You Only Need Adversarial Supervision for Semantic Image Synthesis [84.83711654797342]
We propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results.
We show that images synthesized by our model are more diverse and follow the color and texture of real images more closely.
arXiv Detail & Related papers (2020-12-08T23:00:48Z) - Screen Tracking for Clinical Translation of Live Ultrasound Image
Analysis Methods [2.5805793749729857]
The proposed method captures the US image by tracking the screen with a camera fixed at the sonographer's view point and reformats the captured image to the right aspect ratio.
It is hypothesized that this would enable feeding the retrieved image into an image processing pipeline to extract information that can help improve the examination.
This information could eventually be projected back to the sonographer's field of view in real time using, for example, an augmented reality (AR) headset.
arXiv Detail & Related papers (2020-07-13T09:53:20Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation method whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without acquiring expensive annotations.
We test our proposed method on Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.