Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging
- URL: http://arxiv.org/abs/2301.10520v2
- Date: Tue, 11 Apr 2023 08:16:55 GMT
- Title: Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging
- Authors: Magdalena Wysocki, Mohammad Farid Azampour, Christine Eilers, Benjamin
Busam, Mehrdad Salehi, Nassir Navab
- Abstract summary: We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps.
Our proposed method leverages a ray-tracing-based neural rendering for novel view US synthesis.
- Score: 40.72047687523214
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a physics-enhanced implicit neural representation (INR) for
ultrasound (US) imaging that learns tissue properties from overlapping US
sweeps. Our proposed method leverages a ray-tracing-based neural rendering for
novel view US synthesis. Recent publications demonstrated that INR models could
encode a representation of a three-dimensional scene from a set of
two-dimensional US frames. However, these models fail to consider the
view-dependent changes in appearance and geometry intrinsic to US imaging. In
our work, we discuss direction-dependent changes in the scene and show that a
physics-inspired rendering improves the fidelity of US image synthesis. In
particular, we demonstrate experimentally that our proposed method generates
geometrically accurate B-mode images for regions with ambiguous representation
owing to view-dependent differences of the US images. We conduct our
experiments using simulated B-mode US sweeps of the liver and acquired US
sweeps of a spine phantom tracked with a robotic arm. The experiments
corroborate that our method generates US frames that enable consistent volume
compounding from previously unseen views. To the best of our knowledge, the
presented work is the first to address view-dependent US image synthesis using
INR.
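To make the rendering idea above concrete, here is a minimal sketch of physics-inspired ray compositing for B-mode synthesis. It assumes a hypothetical implicit network `tissue_net` that maps 3D sample positions to per-point acoustic parameters (attenuation, reflectance, backscatter); both the parameter set and the compositing rule are simplifying assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def render_us_ray(positions, tissue_net, step):
    """Composite echo intensities along one scan line (axial direction)."""
    # Hypothetical INR: (N, 3) sample positions -> per-sample attenuation,
    # reflectance, and backscatter amplitude (illustrative outputs only).
    alpha, beta, scatter = tissue_net(positions)
    energy = 1.0                                      # remaining pulse energy
    echoes = np.zeros(len(positions))
    for i in range(len(positions)):
        echoes[i] = energy * (beta[i] + scatter[i])   # reflection + backscatter
        energy *= (1.0 - beta[i])                     # energy lost at the interface
        energy *= np.exp(-alpha[i] * step)            # depth-dependent attenuation
    return echoes

def render_b_mode(origins, directions, tissue_net, n_samples=256, depth_mm=80.0):
    """Render one B-mode frame by casting a ray per image column."""
    ts = np.linspace(0.0, depth_mm, n_samples)
    columns = [
        render_us_ray(o[None, :] + ts[:, None] * d[None, :], tissue_net,
                      step=depth_mm / n_samples)
        for o, d in zip(origins, directions)
    ]
    return np.stack(columns, axis=1)                  # rows: depth, columns: scan lines
```

Because each operation is differentiable with respect to the per-point parameters, a learned `tissue_net` could in principle be optimized against observed sweeps in the same spirit as the paper's ray-tracing-based rendering.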
Related papers
- Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis [88.86777314004044]
We propose a 3D Gaussian splatting-based framework, namely X-Gaussian, for X-ray novel view visualization.
Experiments show that our X-Gaussian outperforms state-of-the-art methods by 6.5 dB while enjoying less than 15% training time and over 73x inference speed.
arXiv Detail & Related papers (2024-03-07T00:12:08Z)
- On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z)
- Single-subject Multi-contrast MRI Super-resolution via Implicit Neural Representations [9.683341998041634]
An Implicit Neural Representation (INR) is proposed to learn two different contrasts of complementary views as a continuous spatial function.
Our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets.
arXiv Detail & Related papers (2023-03-27T10:18:42Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce Generalizable Model-based Neural Radiance Fields (GM-NeRF), an effective framework for synthesizing free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Sketch guided and progressive growing GAN for realistic and editable ultrasound image synthesis [12.32829386817706]
We propose a generative adversarial network (GAN) based image synthesis framework.
We present the first work that can synthesize realistic, high-resolution B-mode US images with customized texture-editing features.
In addition, a feature loss is proposed to minimize the difference of high-level features between the generated and real images.
arXiv Detail & Related papers (2022-04-14T12:50:18Z)
- Image-Guided Navigation of a Robotic Ultrasound Probe for Autonomous Spinal Sonography Using a Shadow-aware Dual-Agent Framework [35.17207004351791]
We propose a novel dual-agent framework that integrates a reinforcement learning agent and a deep learning agent.
Our method can effectively interpret the US images and navigate the probe to acquire multiple standard views of the spine.
arXiv Detail & Related papers (2021-11-03T12:11:27Z)
- Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis [20.53251934808636]
Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening.
In this paper we propose to generate MR-like images directly from clinical US images.
The proposed model is end-to-end trainable and self-supervised without any external annotations.
arXiv Detail & Related papers (2020-08-19T22:56:36Z)
- Screen Tracking for Clinical Translation of Live Ultrasound Image Analysis Methods [2.5805793749729857]
The proposed method captures the US image by tracking the screen with a camera fixed at the sonographer's viewpoint and reformats the captured image to the correct aspect ratio.
It is hypothesized that this would make it possible to feed the retrieved image into an image-processing pipeline and extract information that can help improve the examination.
This information could eventually be projected back to the sonographer's field of view in real time using, for example, an augmented reality (AR) headset.
arXiv Detail & Related papers (2020-07-13T09:53:20Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
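For reference, the NeRF entry above hinges on differentiable volume rendering. The sketch below shows that compositing for a single camera ray, assuming a hypothetical `radiance_net` that maps sample positions and a view direction to density and color; it omits positional encoding and hierarchical sampling, so it illustrates only the quadrature step rather than the full method.

```python
import numpy as np

def composite_ray(origin, direction, radiance_net, near=2.0, far=6.0, n_samples=64):
    """Alpha-composite color along one camera ray via numerical quadrature."""
    ts = np.linspace(near, far, n_samples)
    positions = origin[None, :] + ts[:, None] * direction[None, :]
    sigma, rgb = radiance_net(positions, direction)        # (N,) density, (N, 3) color
    deltas = np.append(np.diff(ts), ts[-1] - ts[-2])       # spacing between samples
    alpha = 1.0 - np.exp(-sigma * deltas)                  # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)            # expected color for the ray
```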