AutoPhaseNN: Unsupervised Physics-aware Deep Learning of 3D Nanoscale
Coherent Imaging
- URL: http://arxiv.org/abs/2109.14053v1
- Date: Tue, 28 Sep 2021 21:16:34 GMT
- Title: AutoPhaseNN: Unsupervised Physics-aware Deep Learning of 3D Nanoscale
Coherent Imaging
- Authors: Yudong Yao, Henry Chan, Subramanian Sankaranarayanan, Prasanna
Balaprakash, Ross J. Harder, and Mathew J. Cherukara
- Abstract summary: The problem of phase retrieval underlies various imaging methods from astronomy to nanoscale imaging.
Traditional methods of phase retrieval are iterative in nature, and are therefore computationally expensive and time-consuming.
DL models either provide learned priors to iterative phase retrieval or replace it entirely, learning to recover the lost phase information from measured intensity alone.
- Score: 5.745058078090997
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The problem of phase retrieval, or the algorithmic recovery of lost phase
information from measured intensity alone, underlies various imaging methods
from astronomy to nanoscale imaging. Traditional methods of phase retrieval are
iterative in nature, and are therefore computationally expensive and time
consuming. More recently, deep learning (DL) models have been developed to
either provide learned priors to iterative phase retrieval or in some cases
completely replace phase retrieval with networks that learn to recover the lost
phase information from measured intensity alone. However, such models require
vast amounts of labeled data, which can only be obtained through simulation or
performing computationally prohibitive phase retrieval on hundreds or even
thousands of experimental datasets. Using a 3D nanoscale X-ray imaging modality
(Bragg Coherent Diffraction Imaging or BCDI) as a representative technique, we
demonstrate AutoPhaseNN, a DL-based approach which learns to solve the phase
problem without labeled data. By incorporating the physics of the imaging
technique into the DL model during training, AutoPhaseNN learns to invert 3D
BCDI data from reciprocal space to real space in a single shot without ever
being shown real space images. Once trained, AutoPhaseNN is about one hundred
times faster than traditional iterative phase retrieval methods while providing
comparable image quality.
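The two ideas contrasted in the abstract can be illustrated with a minimal NumPy sketch (function names are illustrative, not taken from the AutoPhaseNN code): the far-field diffraction magnitude is modeled as the modulus of the 3D Fourier transform of the complex real-space object, which gives both one step of a classic iterative phase-retrieval algorithm and an unsupervised, physics-aware loss that never uses real-space labels.

```python
import numpy as np

def forward_model(obj):
    # Forward physics of coherent diffraction: the far-field magnitude
    # is the modulus of the 3D Fourier transform of the complex object.
    return np.abs(np.fft.fftn(obj))

def error_reduction_step(obj, measured_magnitude, support):
    # One iteration of classic error-reduction phase retrieval: enforce
    # the measured magnitude in reciprocal space, then the support
    # constraint in real space. Real reconstructions run hundreds to
    # thousands of such iterations, which is the cost a single-shot
    # network inversion avoids.
    F = np.fft.fftn(obj)
    F = measured_magnitude * np.exp(1j * np.angle(F))
    return np.fft.ifftn(F) * support

def physics_loss(predicted_obj, measured_magnitude):
    # Unsupervised, physics-aware loss: compare the simulated diffraction
    # of the network's prediction to the measurement; no ground-truth
    # real-space images are needed during training.
    return np.mean((forward_model(predicted_obj) - measured_magnitude) ** 2)

# Toy 3D complex object: random amplitude with a random phase.
rng = np.random.default_rng(0)
true_obj = rng.random((8, 8, 8)) * np.exp(1j * rng.uniform(-np.pi, np.pi, (8, 8, 8)))
measured = forward_model(true_obj)

# A perfect prediction drives the physics loss to zero...
print(physics_loss(true_obj, measured))  # 0.0
# ...and is a fixed point of the error-reduction iteration.
recovered = error_reduction_step(true_obj, measured, np.ones_like(true_obj))
print(np.allclose(recovered, true_obj))  # True
```

This sketch only illustrates the forward physics and the label-free loss; in the paper's approach that physics is incorporated into the DL model itself during training, so the network learns to invert BCDI data without ever seeing real-space images.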
Related papers
- Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images with Deep Learning -- A Review [0.0]
Deep learning techniques have been used to improve image quality in cone-beam computed tomography (CBCT).
We provide an overview of deep learning techniques that have successfully been shown to reduce artifacts in 3D, as well as in time-resolved (4D) CBCT.
One of the key findings of this work is an observed trend towards the use of generative models including GANs and score-based or diffusion models.
arXiv Detail & Related papers (2024-03-27T13:46:01Z) - Efficient Physics-Based Learned Reconstruction Methods for Real-Time 3D
Near-Field MIMO Radar Imaging [0.0]
Near-field multiple-input multiple-output (MIMO) radar imaging systems have recently gained significant attention.
In this paper, we develop novel non-iterative deep learning-based reconstruction methods for real-time near-field imaging.
The goal is to achieve high image quality at low computational cost.
arXiv Detail & Related papers (2023-12-28T11:05:36Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - DH-GAN: A Physics-driven Untrained Generative Adversarial Network for 3D
Microscopic Imaging using Digital Holography [3.4635026053111484]
Digital holography is a 3D imaging technique in which a laser beam with a plane wavefront illuminates an object and the intensity of the diffracted wavefront, called a hologram, is measured.
Recently, deep learning (DL) methods have been used for more accurate holographic processing.
We propose a new DL architecture based on generative adversarial networks that uses a discriminative network for realizing a semantic measure for reconstruction quality.
arXiv Detail & Related papers (2022-05-25T17:13:45Z) - Advantage of Machine Learning over Maximum Likelihood in Limited-Angle
Low-Photon X-Ray Tomography [0.0]
We introduce deep neural networks to determine and apply a prior distribution in the reconstruction process.
Our neural networks learn the prior directly from synthetic training samples.
We demonstrate that, when the projection angles and photon budgets are limited, the priors from our deep generative models can dramatically improve the integrated circuit (IC) reconstruction quality.
arXiv Detail & Related papers (2021-11-15T16:24:12Z) - Learning a Model-Driven Variational Network for Deformable Image
Registration [89.9830129923847]
VR-Net is a novel cascaded variational network for unsupervised deformable image registration.
It outperforms state-of-the-art deep learning methods on registration accuracy.
It maintains the fast inference speed of deep learning and the data-efficiency of variational models.
arXiv Detail & Related papers (2021-05-25T21:37:37Z) - Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z) - Limited-angle tomographic reconstruction of dense layered objects by
dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z) - 4D Spatio-Temporal Convolutional Networks for Object Position Estimation
in OCT Volumes [69.62333053044712]
3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single OCT images.
We extend 3D CNNs to 4D-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking.
arXiv Detail & Related papers (2020-07-02T12:02:20Z) - Real-time 3D Nanoscale Coherent Imaging via Physics-aware Deep Learning [0.7664249650622356]
We introduce 3D-CDI-NN, a deep convolutional neural network and differential programming framework trained to predict 3D structure and strain.
Our networks are designed to be "physics-aware" in multiple aspects.
Our integrated machine learning and differential programming solution is broadly applicable across inverse problems in other application areas.
arXiv Detail & Related papers (2020-06-16T18:35:32Z) - Data Consistent CT Reconstruction from Insufficient Data with Learned
Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.