Learning Ultrasound Rendering from Cross-Sectional Model Slices for Simulated Training
- URL: http://arxiv.org/abs/2101.08339v1
- Date: Wed, 20 Jan 2021 21:58:19 GMT
- Title: Learning Ultrasound Rendering from Cross-Sectional Model Slices for Simulated Training
- Authors: Lin Zhang, Tiziano Portenier, Orcun Goksel
- Abstract summary: Computational simulations can facilitate the training of such skills in virtual reality.
We propose herein to bypass any rendering and simulation process at interactive time.
We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme.
- Score: 13.640630434743837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose. Given the high level of expertise required for navigation and
interpretation of ultrasound images, computational simulations can facilitate
the training of such skills in virtual reality. With ray-tracing-based
simulations, realistic ultrasound images can be generated. However, due to
the computational constraints of interactivity, image quality typically needs
to be compromised.
Methods. We propose herein to bypass any rendering and simulation process at
interactive time by conducting such simulations during a non-time-critical
offline stage, and by then learning the image translation from cross-sectional
model slices to such simulated frames. We use a generative adversarial
framework with a dedicated generator architecture and input feeding scheme,
both of which substantially improve image quality without an increase in
network parameters. Integral attenuation maps derived from cross-sectional
model slices, texture-friendly strided convolutions, and the provision of
stochastic noise and input maps to intermediate layers in order to preserve
locality are all shown herein to greatly facilitate this translation task.
Results. Given several quality metrics, the proposed method with only tissue
maps as input is shown to provide results comparable or superior to a
state-of-the-art method that additionally uses images of low-quality
ultrasound renderings. An extensive ablation study demonstrates the need for,
and the benefits of, the individual contributions of this work, based on
qualitative examples and quantitative ultrasound similarity metrics. To that
end, an error metric based on local histogram statistics is proposed and
demonstrated for visualizing local dissimilarities between ultrasound images.
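The exact statistics of the proposed metric are defined in the paper; as a rough sketch of the idea, one can compare intensity histograms in corresponding local windows and visualize the per-window distance. The function name, window size, and the total-variation distance below are illustrative assumptions.

```python
import numpy as np

def local_histogram_error(a, b, win=32, bins=32):
    """Total-variation distance between intensity histograms of
    corresponding non-overlapping win x win patches of two images with
    values in [0, 1]; returns a coarse per-patch error map."""
    gh, gw = a.shape[0] // win, a.shape[1] // win
    edges = np.linspace(0.0, 1.0, bins + 1)
    err = np.zeros((gh, gw))
    for i in range(gh):
        for j in range(gw):
            pa = a[i*win:(i+1)*win, j*win:(j+1)*win]
            pb = b[i*win:(i+1)*win, j*win:(j+1)*win]
            ha, _ = np.histogram(pa, bins=edges)
            hb, _ = np.histogram(pb, bins=edges)
            ha = ha / ha.sum()                    # normalize to probabilities
            hb = hb / hb.sum()
            err[i, j] = 0.5 * np.abs(ha - hb).sum()
    return err

# usage: visualize where two ultrasound frames differ in local statistics
sim = np.random.rand(256, 256)
gen = np.clip(sim + 0.1 * np.random.randn(256, 256), 0.0, 1.0)
error_map = local_histogram_error(sim, gen)       # e.g. plt.imshow(error_map)
```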
Related papers
- Cardiac ultrasound simulation for autonomous ultrasound navigation [4.036497185262817]
We propose a method to generate large amounts of ultrasound images from other modalities and from arbitrary positions.
We present a novel simulation pipeline which uses segmentations from other modalities, an optimized data representation and GPU-accelerated Monte Carlo path tracing.
The proposed approach allows for fast and accurate patient-specific ultrasound image generation, and its usability for training networks for navigation-related tasks is demonstrated.
arXiv Detail & Related papers (2024-02-09T15:14:48Z)
- LOTUS: Learning to Optimize Task-based US representations [39.81131738128329]
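For intuition on what such a simulation computes, here is a deliberately tiny NumPy sketch of a single axial ray through a labeled volume, with made-up per-tissue attenuation values; the cited pipeline is a GPU-accelerated Monte Carlo path tracer and shares only the general physics with this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-voxel attenuation for labels 0..2 (e.g. fluid/fat/muscle)
MU = np.array([0.0002, 0.0007, 0.0015])

def scanline(labels):
    """Echo amplitude along one axial ray through a labeled volume:
    reflections at tissue transitions, scaled by the accumulated two-way
    attenuation, with multiplicative Rayleigh speckle."""
    mu = MU[labels]
    two_way = np.exp(-2.0 * np.cumsum(mu))        # attenuation to depth and back
    reflect = np.abs(np.diff(mu, prepend=mu[0]))  # proxy for impedance steps
    speckle = rng.rayleigh(scale=0.3, size=labels.shape)
    return two_way * (50.0 * reflect + 0.02) * speckle

labels = np.repeat([0, 1, 2, 1], 64)              # toy 256-voxel ray
echo = scanline(labels)                           # one line of a B-mode frame
```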
Anatomical segmentation of organs in ultrasound images is essential to many clinical applications.
Existing deep neural networks require a large amount of labeled data for training in order to achieve clinically acceptable performance.
In this paper, we propose a novel approach for learning to optimize task-based ultrasound image representations.
arXiv Detail & Related papers (2023-07-29T16:29:39Z)
- APRF: Anti-Aliasing Projection Representation Field for Inverse Problem in Imaging [74.9262846410559]
Sparse-view Computed Tomography (SVCT) reconstruction is an ill-posed inverse problem in imaging.
Recent works use Implicit Neural Representations (INRs) to build the coordinate-based mapping between sinograms and CT images.
We propose a self-supervised SVCT reconstruction method, the Anti-Aliasing Projection Representation Field (APRF).
APRF can build a continuous representation between adjacent projection views via spatial constraints.
arXiv Detail & Related papers (2023-07-11T14:04:12Z)
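As background on the INR idea mentioned above, a minimal coordinate-based network maps locations to image values through Fourier features and an MLP; this generic PyTorch sketch illustrates the concept and is not APRF's actual model.

```python
import torch
import torch.nn as nn

class CoordinateINR(nn.Module):
    """Minimal coordinate-based INR: maps a 2-D location to an image
    value (e.g. CT attenuation) via Fourier features and an MLP."""
    def __init__(self, n_freq=16, width=128):
        super().__init__()
        self.register_buffer(
            'freqs', (2.0 ** torch.arange(n_freq).float()) * torch.pi)
        self.mlp = nn.Sequential(
            nn.Linear(4 * n_freq, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1))

    def forward(self, xy):                    # xy: (N, 2) in [-1, 1]
        ang = xy[..., None] * self.freqs      # (N, 2, n_freq)
        feat = torch.cat([ang.sin(), ang.cos()], -1).flatten(1)
        return self.mlp(feat)

# in SVCT, training would penalize the mismatch between line integrals of
# the predicted image and the measured sinogram (the CT forward model)
pred = CoordinateINR()(torch.rand(1024, 2) * 2 - 1)   # (1024, 1)
```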
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
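The standard building block behind such cINNs is the conditional affine coupling layer, which is invertible in closed form and yields an exact log-determinant for maximum-likelihood training; the sketch below is generic, not the cited paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """One conditional affine coupling block: exactly invertible, with a
    closed-form log|det J|, so a stack of these can be trained by
    maximum likelihood under a simple latent prior."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x, c):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, c], 1)).chunk(2, dim=1)
        s = torch.tanh(s)                             # keep scales bounded
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], 1), s.sum(dim=1)   # z, log|det J|

    def inverse(self, y, c):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(torch.cat([y1, c], 1)).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], 1)

# maximum-likelihood loss under a standard-normal latent prior:
# z, logdet = block(x, c); nll = 0.5 * z.pow(2).sum(1) - logdet
```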
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Multi-scale Sparse Representation-Based Shadow Inpainting for Retinal OCT Images [0.261990490798442]
Inpainting shadowed regions cast by superficial blood vessels in retinal optical coherence tomography (OCT) images is critical for accurate and robust machine analysis and clinical diagnosis.
Traditional sequence-based approaches, such as propagating neighboring information to gradually fill in missing regions, are cost-effective.
Deep learning-based methods such as encoder-decoder networks have shown promising results in natural image inpainting tasks.
We propose a novel multi-scale shadow inpainting framework for OCT images by synergically applying sparse representation and deep learning.
arXiv Detail & Related papers (2022-02-23T09:37:14Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
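The cited paper defines its own sharpness loss; one plausible reading, sketched below with a Laplacian high-pass filter, penalizes the gap in edge energy between synthesized and real images. The formulation here is an assumption, not the paper's.

```python
import torch
import torch.nn.functional as F

def laplacian_sharpness_loss(fake, real):
    """Match high-frequency (edge) content between synthesized and real
    single-channel (N, 1, H, W) images; a hypothetical stand-in for a
    sharpness regularizer."""
    k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    k = k.view(1, 1, 3, 3).to(fake.device)
    hf_fake = F.conv2d(fake, k, padding=1).abs().mean()
    hf_real = F.conv2d(real, k, padding=1).abs().mean()
    return (hf_fake - hf_real).abs()

# added to the usual adversarial objective with a weighting factor:
# loss_G = loss_adv + lam * laplacian_sharpness_loss(fake, real)
```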
- Content-Preserving Unpaired Translation from Simulated to Realistic Ultrasound Images [12.136874314973689]
We introduce a novel image translation framework to bridge the appearance gap between simulated images and real scans.
We achieve this goal by leveraging both simulated images with semantic segmentations and unpaired in-vivo ultrasound scans.
arXiv Detail & Related papers (2021-03-09T22:35:43Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Deep Image Translation for Enhancing Simulated Ultrasound Images [10.355140310235297]
Ultrasound simulation can provide an interactive environment for training sonographers as an educational tool.
Due to high computational demand, there is a trade-off between image quality and interactivity, potentially leading to sub-optimal results at interactive rates.
We introduce a deep learning approach based on adversarial training that mitigates this trade-off by improving the quality of simulated images with constant time.
arXiv Detail & Related papers (2020-06-18T21:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.