Amortized Normalizing Flows for Transcranial Ultrasound with Uncertainty
Quantification
- URL: http://arxiv.org/abs/2303.03478v1
- Date: Mon, 6 Mar 2023 20:13:41 GMT
- Title: Amortized Normalizing Flows for Transcranial Ultrasound with Uncertainty
Quantification
- Authors: Rafael Orozco, Mathias Louboutin, Ali Siahkoohi, Gabrio Rizzuti,
Tristan van Leeuwen and Felix Herrmann
- Abstract summary: We present a novel approach to transcranial ultrasound computed tomography that utilizes normalizing flows to improve the speed of imaging.
We make use of a physics-informed summary statistic to incorporate the known ultrasound physics with the goal of compressing large incoming observations.
- Score: 1.1744028458220426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel approach to transcranial ultrasound computed tomography
that utilizes normalizing flows to improve the speed of imaging and provide
Bayesian uncertainty quantification. Our method combines physics-informed
methods and data-driven methods to accelerate the reconstruction of the final
image. We make use of a physics-informed summary statistic to incorporate the
known ultrasound physics with the goal of compressing large incoming
observations. This compression enables efficient training of the normalizing
flow and standardizes the size of the data regardless of imaging
configurations. The combinations of these methods results in fast
uncertainty-aware image reconstruction that generalizes to a variety of
transducer configurations. We evaluate our approach with in silico experiments
and demonstrate that it can significantly improve the imaging speed while
quantifying uncertainty. We validate the quality of our image reconstructions
by comparing against the traditional physics-only method and also verify that
our provided uncertainty is calibrated with the error.
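As a hedged illustration of the compression idea described above (the linear operators, sizes, and names here are invented for the sketch, not the paper's actual wave physics), applying the adjoint of the forward operator maps observations of any size back to image space, giving the amortized network a fixed-size conditioning input:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16                              # image side; the unknown has n*n pixels
x = rng.standard_normal(n * n)      # toy stand-in for a speed-of-sound image

def summary(A, d):
    """Physics-informed summary statistic: the adjoint A.T @ d maps
    observations of any size back to image space (length n*n)."""
    return A.T @ d

# Two acquisition geometries with different numbers of measurements,
# mimicking different transducer configurations.
A_sparse = rng.standard_normal((200, n * n))
A_dense = rng.standard_normal((800, n * n))

y_sparse = summary(A_sparse, A_sparse @ x)
y_dense = summary(A_dense, A_dense @ x)

# Both summaries have the same fixed size, so a single amortized
# conditional normalizing flow can condition on either one.
print(y_sparse.shape, y_dense.shape)
```

Because the summary lives in image space regardless of how many transducer channels produced the data, the flow never needs to be retrained when the acquisition geometry changes, which is the generalization property the abstract claims.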
Related papers
- PHOCUS: Physics-Based Deconvolution for Ultrasound Resolution Enhancement [36.20701982473809]
The impulse response of an ultrasound imaging system is called the point spread function (PSF), which is convolved with the spatial distribution of reflectors in the image-formation process.
We introduce a physics-based deconvolution process using a modeled PSF, working directly on the more commonly available B-mode images.
By leveraging Implicit Neural Representations (INRs), we learn a continuous mapping from spatial locations to their respective echogenicity values, effectively compensating for the discretized image space.
arXiv Detail & Related papers (2024-08-07T09:52:30Z)
- ReNoise: Real Image Inversion Through Iterative Noising [62.96073631599749]
We introduce an inversion method with a high quality-to-operation ratio, enhancing reconstruction accuracy without increasing the number of operations.
We evaluate the performance of our ReNoise technique using various sampling algorithms and models, including recent accelerated diffusion models.
arXiv Detail & Related papers (2024-03-21T17:52:08Z)
- Equivariant Bootstrapping for Uncertainty Quantification in Imaging Inverse Problems [0.24475591916185502]
We present a new uncertainty quantification methodology based on an equivariant formulation of the parametric bootstrap algorithm.
The proposed methodology is general and can be easily applied with any image reconstruction technique.
We demonstrate the proposed approach with a series of numerical experiments and through comparisons with alternative uncertainty quantification strategies.
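A minimal sketch of the plain parametric bootstrap underlying this entry (without the paper's equivariance step; the linear Gaussian model and ridge reconstruction are assumptions made for illustration): fit once, resimulate data from the fitted model, re-reconstruct, and read the uncertainty off the spread of the replicates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear Gaussian inverse problem d = A x + noise.
m, n, sigma = 80, 20, 0.1
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
d = A @ x_true + sigma * rng.standard_normal(m)

def reconstruct(obs, alpha=1e-2):
    """Any reconstruction technique can be plugged in; here, ridge regression."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ obs)

x_hat = reconstruct(d)

# Parametric bootstrap: resimulate from the fitted model, reconstruct
# each replicate, and use the spread as a per-component uncertainty estimate.
replicates = np.stack([
    reconstruct(A @ x_hat + sigma * rng.standard_normal(m))
    for _ in range(200)
])
pixel_std = replicates.std(axis=0)
```

The method is reconstruction-agnostic in exactly the sense the summary states: only the `reconstruct` call would change for a different imaging technique.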
arXiv Detail & Related papers (2023-10-18T09:43:15Z)
- Compressive Ptychography using Deep Image and Generative Priors [9.658250977094562]
Ptychography is a well-established coherent diffraction imaging technique that enables non-invasive imaging of samples at a nanometer scale.
One major limitation of ptychography is the long data acquisition time due to mechanical scanning of the sample.
We propose a generative model combining deep image priors with deep generative priors.
arXiv Detail & Related papers (2022-05-05T02:18:26Z)
- Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging [88.20869695803631]
We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
arXiv Detail & Related papers (2022-02-10T18:59:56Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domain, that is based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
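The classical ISTA iteration that such networks unfold can be sketched as follows (a toy sparse-recovery setup with made-up sizes and regularization values, not the paper's beamforming model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse recovery d = A x with a k-sparse ground truth.
m, n, k = 60, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
d = A @ x_true

def soft(z, t):
    """Soft-thresholding, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
lam = 0.01                               # l1 weight (illustrative)
x = np.zeros(n)
for _ in range(500):
    x = soft(x + step * A.T @ (d - A @ x), step * lam)
```

Unfolding replaces the fixed `step`, `lam`, and `A.T` in each iteration with learned, layer-specific parameters, which is what lets the network reach good reconstructions in far fewer layers than ISTA needs iterations.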
arXiv Detail & Related papers (2021-03-01T19:19:38Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
arXiv Detail & Related papers (2020-12-07T09:27:16Z) - Quantifying Sources of Uncertainty in Deep Learning-Based Image
Reconstruction [5.129343375966527]
We propose a scalable and efficient framework to simultaneously quantify aleatoric and epistemic uncertainties in learned iterative image reconstruction.
We show that our method exhibits competitive performance against conventional benchmarks for computed tomography with both sparse view and limited angle data.
arXiv Detail & Related papers (2020-11-17T04:12:52Z) - Training Variational Networks with Multi-Domain Simulations:
Speed-of-Sound Image Reconstruction [5.47832435255656]
Variational Networks (VN) have been shown to be a potential learning-based approach for optimizing inverse problems in image reconstruction.
We present for the first time a VN solution for a pulse-echo SoS image reconstruction problem using waves with conventional transducers and single-sided tissue access.
We show that the proposed regularization techniques combined with multi-source domain training yield substantial improvements in the domain adaptation capabilities of VN.
arXiv Detail & Related papers (2020-06-25T13:32:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.