Content-Preserving Unpaired Translation from Simulated to Realistic
Ultrasound Images
- URL: http://arxiv.org/abs/2103.05745v1
- Date: Tue, 9 Mar 2021 22:35:43 GMT
- Title: Content-Preserving Unpaired Translation from Simulated to Realistic
Ultrasound Images
- Authors: Devavrat Tomar, Lin Zhang, Tiziano Portenier, Orcun Goksel
- Abstract summary: We introduce a novel image translation framework to bridge the appearance gap between simulated images and real scans.
We achieve this goal by leveraging both simulated images with semantic segmentations and unpaired in-vivo ultrasound scans.
- Score: 12.136874314973689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interactive simulation of ultrasound imaging greatly facilitates sonography
training. Although ray-tracing based methods have shown promising results,
obtaining realistic images requires substantial modeling effort and manual
parameter tuning. In addition, current techniques still result in a significant
appearance gap between simulated images and real clinical scans. In this work
we introduce a novel image translation framework to bridge this appearance gap,
while preserving the anatomical layout of the simulated scenes. We achieve this
goal by leveraging both simulated images with semantic segmentations and
unpaired in-vivo ultrasound scans. Our framework is based on recent contrastive
unpaired translation techniques and we propose a regularization approach by
learning an auxiliary segmentation-to-real image translation task, which
encourages the disentanglement of content and style. In addition, we extend the
generator to be class-conditional, which enables the incorporation of
additional losses, in particular a cyclic consistency loss, to further improve
the translation quality. Qualitative and quantitative comparisons against
state-of-the-art unpaired translation methods demonstrate the superiority of
our proposed framework.
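As a rough illustration of the components named in the abstract (a class-conditional generator, a contrastive PatchNCE-style term, and a cyclic consistency loss), the PyTorch sketch below wires toy versions of these losses together. It is a minimal sketch under assumed module names and sizes (`CondGenerator`, `patch_nce_loss`, dummy tensors), not the authors' implementation; the adversarial and auxiliary segmentation-to-real losses described in the abstract would be added on top of these terms.

```python
# Minimal sketch (PyTorch) of the loss wiring described in the abstract:
# a class-conditional generator, a PatchNCE-style contrastive loss, and a
# cyclic consistency term. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondGenerator(nn.Module):
    """Toy class-conditional generator: the class label is broadcast to a
    one-hot map and concatenated to the input image channels."""
    def __init__(self, in_ch=1, num_classes=2, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch + num_classes, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(feat_ch, in_ch, 3, padding=1)
        self.num_classes = num_classes

    def forward(self, x, cls):
        b, _, h, w = x.shape
        onehot = F.one_hot(cls, self.num_classes).float()            # (B, C)
        cls_map = onehot.view(b, self.num_classes, 1, 1).expand(-1, -1, h, w)
        feat = self.encoder(torch.cat([x, cls_map], dim=1))
        return self.decoder(feat), feat                               # image, features

def patch_nce_loss(feat_q, feat_k, num_patches=64, tau=0.07):
    """PatchNCE-style contrastive loss: corresponding spatial locations of
    input and output feature maps are positives, other sampled locations are
    negatives."""
    b, c, h, w = feat_q.shape
    idx = torch.randperm(h * w)[:num_patches]
    q = feat_q.flatten(2)[:, :, idx].permute(0, 2, 1)    # (B, N, C)
    k = feat_k.flatten(2)[:, :, idx].permute(0, 2, 1)
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = torch.bmm(q, k.transpose(1, 2)) / tau        # (B, N, N)
    labels = torch.arange(num_patches, device=q.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, num_patches), labels.reshape(-1))

# One illustrative loss computation: simulated -> real (class 1) and back (class 0).
G = CondGenerator()
sim = torch.rand(2, 1, 64, 64)
to_real, to_sim = torch.tensor([1, 1]), torch.tensor([0, 0])

fake_real, feat_in = G(sim, to_real)
recon_sim, _ = G(fake_real, to_sim)                       # class-conditional cycle
_, feat_out = G(fake_real, to_real)                       # re-encode output for NCE

loss_cycle = F.l1_loss(recon_sim, sim)                    # cyclic consistency
loss_nce = patch_nce_loss(feat_out, feat_in)              # content preservation
loss = loss_cycle + loss_nce                              # + adversarial terms (omitted)
loss.backward()
```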
Related papers
- From Real Artifacts to Virtual Reference: A Robust Framework for Translating Endoscopic Images [27.230439605570812]
In endoscopic imaging, combining pre-operative data with intra-operative imaging is important for surgical planning and navigation.
Existing domain adaptation methods are hampered by distribution shift caused by in vivo artifacts.
This paper presents an artifact-resilient image translation method and an associated benchmark for this purpose.
arXiv Detail & Related papers (2024-10-15T02:41:52Z)
- Exploring Semantic Consistency in Unpaired Image Translation to Generate Data for Surgical Applications [1.8011391924021904]
This study empirically investigates unpaired image translation methods for generating suitable data in surgical applications.
We find that a simple combination of structural-similarity loss and contrastive learning yields the most promising results; a minimal sketch of such a combined loss follows this entry.
arXiv Detail & Related papers (2023-09-06T14:43:22Z)
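The entry above credits a combination of a structural-similarity loss and contrastive learning. The sketch below shows one plausible way to combine a simplified SSIM term with an InfoNCE term in PyTorch; the uniform-window SSIM, the 0.5 weight, and the placeholder feature tensors are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch: structural-similarity loss + contrastive (InfoNCE) loss.
import torch
import torch.nn.functional as F

def ssim_loss(x, y, window=11, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM with a uniform window; returns 1 - mean SSIM."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    sigma_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return 1.0 - ssim.mean()

def info_nce(fa, fb, tau=0.07):
    """Contrastive loss over per-image feature vectors (positives on the diagonal)."""
    fa, fb = F.normalize(fa, dim=1), F.normalize(fb, dim=1)
    logits = fa @ fb.t() / tau
    labels = torch.arange(fa.size(0), device=fa.device)
    return F.cross_entropy(logits, labels)

# Combined objective on an input/translation pair (all tensors are dummies).
src = torch.rand(4, 1, 64, 64)                         # source-domain image
fake = torch.rand(4, 1, 64, 64, requires_grad=True)    # generator output (placeholder)
feat_src = torch.rand(4, 128)                          # encoder features of src (placeholder)
feat_fake = feat_src + 0.1 * torch.randn(4, 128)       # encoder features of fake (placeholder)

loss = ssim_loss(fake, src) + 0.5 * info_nce(feat_fake, feat_src)
```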
- Improving Diffusion-based Image Translation using Asymmetric Gradient Guidance [51.188396199083336]
We present an approach that guides the reverse process of diffusion sampling by applying asymmetric gradient guidance.
Our model's adaptability allows it to be implemented with both image-fusion and latent-diffusion models.
Experiments show that our method outperforms various state-of-the-art models in image translation tasks; an illustrative gradient-guidance sketch follows this entry.
arXiv Detail & Related papers (2023-06-07T12:56:56Z)
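The summary above only states that reverse diffusion sampling is steered by asymmetric gradient guidance; the exact scheme is not described here. The sketch below shows a generic gradient-guided, DDIM-style reverse step for intuition only: `eps_model`, `guidance_fn`, and all constants are placeholders, and the paper's asymmetric weighting is not reproduced.

```python
# Generic gradient-guided reverse diffusion step (illustrative only).
import torch

def guided_ddpm_step(x_t, t, eps_model, guidance_fn, alphas_cumprod, guidance_scale=1.0):
    """Add the gradient of a guidance energy (computed on the predicted clean
    image) to the predicted noise, then take a DDIM-style (eta = 0) update."""
    a_t = alphas_cumprod[t]
    with torch.enable_grad():
        x_t = x_t.detach().requires_grad_(True)
        eps = eps_model(x_t, t)
        x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean image
        energy = guidance_fn(x0_hat)                            # scalar guidance energy
        grad = torch.autograd.grad(energy, x_t)[0]
    eps_guided = eps + guidance_scale * (1 - a_t).sqrt() * grad
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    x0_hat = (x_t - (1 - a_t).sqrt() * eps_guided) / a_t.sqrt()
    return a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps_guided

# Tiny usage example with dummy components.
alphas_cumprod = torch.linspace(0.99, 0.01, 100)
eps_model = lambda x, t: torch.zeros_like(x)           # stand-in denoiser
guidance_fn = lambda x0: ((x0 - 0.5) ** 2).mean()      # pull samples toward 0.5
x = torch.randn(1, 1, 32, 32)
x = guided_ddpm_step(x, 50, eps_model, guidance_fn, alphas_cumprod, guidance_scale=2.0)
```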
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks; a feature-statistics sketch follows this entry.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
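The D2SM entry above matches denoised and clean images in the semantic feature space of a pretrained classifier. The sketch below conveys the general idea with a crude first- and second-moment matching loss on truncated VGG-16 features (ImageNet normalization omitted for brevity); it approximates, but does not reproduce, the paper's probabilistic matching.

```python
# Hedged sketch: match feature statistics of a pretrained classification network.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen feature extractor from a pretrained classification network.
features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def semantic_stats_loss(denoised, clean):
    """Match per-channel mean and std of pretrained-classifier features between
    denoised and clean batches (a crude stand-in for distribution matching)."""
    f_d, f_c = features(denoised), features(clean)
    loss = F.l1_loss(f_d.mean(dim=(0, 2, 3)), f_c.mean(dim=(0, 2, 3)))
    loss = loss + F.l1_loss(f_d.std(dim=(0, 2, 3)), f_c.std(dim=(0, 2, 3)))
    return loss

# Usage with dummy 3-channel batches in [0, 1].
denoised = torch.rand(2, 3, 128, 128, requires_grad=True)
clean = torch.rand(2, 3, 128, 128)
semantic_stats_loss(denoised, clean).backward()
```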
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs) approach.
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Unbalanced Feature Transport for Exemplar-based Image Translation [51.54421432912801]
This paper presents a general image translation framework that incorporates optimal transport for feature alignment between conditional inputs and style exemplars in image translation.
We show that our method achieves superior image translation qualitatively and quantitatively as compared with the state-of-the-art; a minimal Sinkhorn-based alignment sketch follows this entry.
arXiv Detail & Related papers (2021-06-19T12:07:48Z)
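The entry above aligns conditional-input and exemplar features with optimal transport. Below is a minimal balanced Sinkhorn sketch of such feature alignment; the paper's unbalanced formulation relaxes the marginal constraints, and all tensor shapes and parameters here are illustrative.

```python
# Hedged sketch: entropic Sinkhorn transport for feature alignment.
import torch

def sinkhorn_plan(fa, fb, eps=0.05, iters=50):
    """Entropic-regularized (balanced) Sinkhorn transport plan between two
    L2-normalized feature sets fa (N, C) and fb (M, C)."""
    fa = torch.nn.functional.normalize(fa, dim=1)
    fb = torch.nn.functional.normalize(fb, dim=1)
    cost = 1.0 - fa @ fb.t()                       # cosine cost matrix (N, M)
    K = torch.exp(-cost / eps)
    a = torch.full((fa.size(0),), 1.0 / fa.size(0))
    b = torch.full((fb.size(0),), 1.0 / fb.size(0))
    u = torch.ones_like(a)
    for _ in range(iters):                         # Sinkhorn iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)     # transport plan (N, M)

# Warp exemplar (style) features toward the conditional-input layout.
cond_feat = torch.randn(196, 64)                   # e.g. 14x14 feature grid, flattened
exem_feat = torch.randn(196, 64)
plan = sinkhorn_plan(cond_feat, exem_feat)
aligned_style = (plan / plan.sum(dim=1, keepdim=True)) @ exem_feat   # (196, 64)
```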
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation [56.55178339375146]
Image-to-Image (I2I) multi-domain translation models are usually also evaluated on the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
arXiv Detail & Related papers (2021-06-16T17:58:21Z)
- Segmentation-Renormalized Deep Feature Modulation for Unpaired Image Harmonization [0.43012765978447565]
Cycle-consistent Generative Adversarial Networks have been used to harmonize image sets between a source and target domain.
These methods are prone to instability, contrast inversion, intractable manipulation of pathology, and steganographic mappings which limit their reliable adoption in real-world medical imaging.
We propose a segmentation-renormalized image translation framework to reduce inter-scanner heterogeneity while preserving anatomical layout; a SPADE-style sketch of segmentation-conditioned normalization follows this entry.
arXiv Detail & Related papers (2021-02-11T23:53:51Z)
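The segmentation-renormalization entry above modulates normalized features with parameters predicted from anatomical segmentations. The module below is a SPADE-style sketch of that idea, assuming a one-hot segmentation map as conditioning; layer names and sizes are illustrative assumptions rather than the paper's architecture.

```python
# Hedged sketch: segmentation-conditioned feature (re)normalization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegRenorm(nn.Module):
    """Normalize features, then re-scale and re-shift them with parameters
    predicted from the (resized) segmentation map."""
    def __init__(self, feat_ch, seg_ch, hidden=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_ch, affine=False)
        self.shared = nn.Sequential(nn.Conv2d(seg_ch, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)

    def forward(self, feat, seg):
        seg = F.interpolate(seg, size=feat.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(feat) * (1 + self.gamma(h)) + self.beta(h)

# Usage: a 6-class one-hot segmentation conditioning a 128-channel feature map.
feat = torch.randn(1, 128, 32, 32)
seg = F.one_hot(torch.randint(0, 6, (1, 64, 64)), 6).permute(0, 3, 1, 2).float()
out = SegRenorm(128, 6)(feat, seg)   # (1, 128, 32, 32)
```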
- Learning Ultrasound Rendering from Cross-Sectional Model Slices for Simulated Training [13.640630434743837]
Computational simulations can facilitate the training of ultrasound imaging skills in virtual reality.
We propose herein to bypass any rendering and simulation process at interactive time.
We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme.
arXiv Detail & Related papers (2021-01-20T21:58:19Z)
- Style Intervention: How to Achieve Spatial Disentanglement with Style-based Generators? [100.60938767993088]
We propose a lightweight optimization-based algorithm which could adapt to arbitrary input images and render natural translation effects under flexible objectives.
We verify the performance of the proposed framework in facial attribute editing on high-resolution images, where both photo-realism and consistency are required.
arXiv Detail & Related papers (2020-11-19T07:37:31Z)
- Deep Image Translation for Enhancing Simulated Ultrasound Images [10.355140310235297]
Ultrasound simulation can provide an interactive environment for training sonographers as an educational tool.
Due to high computational demand, there is a trade-off between image quality and interactivity, potentially leading to sub-optimal results at interactive rates.
We introduce a deep learning approach based on adversarial training that mitigates this trade-off by improving the quality of simulated images with constant time.
arXiv Detail & Related papers (2020-06-18T21:05:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.