Optical Diffraction Tomography based on 3D Physics-Inspired Neural Network (PINN)
- URL: http://arxiv.org/abs/2206.05236v1
- Date: Fri, 10 Jun 2022 17:19:04 GMT
- Title: Optical Diffraction Tomography based on 3D Physics-Inspired Neural Network (PINN)
- Authors: Ahmed B. Ayoub, Amirhossein Saba, Carlo Gigli, Demetri Psaltis
- Abstract summary: Optical diffraction tomography (ODT) is an emerging 3D imaging technique that is used for the 3D reconstruction of the refractive index (RI) for semi-transparent samples.
Various inverse models, such as the Born and Rytov approximations, have been proposed to reconstruct the 3D RI from the holographic detection of different samples.
We propose a different approach where a 3D neural network (NN) is employed. The NN is trained with a cost function derived from a physical model based on the physics of optical wave propagation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical diffraction tomography (ODT) is an emerging 3D imaging technique
used for the 3D reconstruction of the refractive index (RI) of semi-transparent
samples. Various inverse models, such as the Born and Rytov approximations, have
been proposed to reconstruct the 3D RI from the holographic detection of different
samples. However, such approximations usually suffer from the so-called
missing-cone problem, which results in an elongation of the final reconstruction
along the optical axis. Different iterative schemes have been proposed to solve
the missing-cone problem, relying on physical forward models and an error function
that aims at filling in the k-space, thus mitigating the missing-cone problem and
improving reconstruction accuracy. In this paper, we propose a different approach
in which a 3D neural network (NN) is employed. The NN is trained with a cost
function derived from a physical model based on the physics of optical wave
propagation. The 3D NN starts from an initial guess for the 3D RI reconstruction
(e.g., Born or Rytov) and aims at producing an improved 3D reconstruction by
minimizing this error function. With this technique, the NN can be trained without
any examples of the relation between the ill-posed reconstruction (Born or Rytov)
and the ground truth (true shape).
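As a rough illustration of this self-supervised strategy, the toy sketch below refines an RI-like estimate by gradient descent on a physics-based data-fidelity cost, starting from a flat initial guess and using no ground-truth pairs. Everything here is illustrative, not the paper's implementation: the forward model is a simple linear smoothing operator standing in for optical wave propagation, and the finite-difference update stands in for training a 3D NN by backpropagation.

```python
# Toy sketch of physics-based self-supervised refinement: minimize
# || forward(estimate) - measured ||^2 with no access to the ground truth.

def forward(x):
    # hypothetical stand-in for the wave-propagation forward model:
    # a 3-tap moving average that "blurs" the estimate
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def cost(x, measured):
    # physics-based data-fidelity error between prediction and measurement
    return sum((f - m) ** 2 for f, m in zip(forward(x), measured))

def gradient(x, measured, eps=1e-6):
    # finite-difference gradient of the cost (a NN would use backprop instead)
    base = cost(x, measured)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        g.append((cost(xp, measured) - base) / eps)
    return g

# "measurement" produced by the forward model from a hidden ground truth;
# the optimizer never sees true_ri directly, only the measurement
true_ri = [1.33, 1.33, 1.38, 1.40, 1.38, 1.33, 1.33]
measured = forward(true_ri)

# the flat initial guess plays the role of the Born/Rytov reconstruction
x = [1.33] * len(true_ri)
for _ in range(500):
    g = gradient(x, measured)
    x = [xi - 0.5 * gi for xi, gi in zip(x, g)]
```

After the loop, the data-fidelity cost is driven close to zero even though no (ill-posed reconstruction, ground truth) example pairs were ever supplied, which is the core idea the abstract describes.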
Related papers
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- Uncertainty Guided Policy for Active Robotic 3D Reconstruction using Neural Radiance Fields [82.21033337949757]
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
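The estimator described in this entry can be sketched concretely: compute standard volume-rendering weights along a ray, normalize them into a distribution, and take its Shannon entropy. A concentrated weight distribution (the ray confidently hits one surface) yields low entropy; a spread-out one yields high entropy. The sketch below is a minimal illustration with made-up density values, not the paper's code.

```python
import math

def render_weights(densities, deltas):
    # standard volume-rendering weights: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    # with transmittance T_i accumulated from the samples in front of i
    weights, transmittance = [], 1.0
    for sigma, delta in zip(densities, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)
        weights.append(transmittance * alpha)
        transmittance *= (1.0 - alpha)
    return weights

def ray_entropy(weights, eps=1e-12):
    # entropy of the normalized weight distribution along the ray;
    # a concentrated distribution (confident surface hit) gives low entropy
    total = sum(weights) + eps
    p = [w / total for w in weights]
    return -sum(pi * math.log(pi + eps) for pi in p if pi > 0.0)

deltas = [0.1] * 8
peaked = render_weights([0, 0, 0, 50, 0, 0, 0, 0], deltas)   # confident hit
diffuse = render_weights([1, 1, 1, 1, 1, 1, 1, 1], deltas)   # uncertain ray
```

Averaging this per-ray entropy over the rays of a candidate view gives the kind of score a next-best-view policy can rank views by.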
arXiv Detail & Related papers (2022-09-17T21:28:57Z)
- NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction [64.36535692191343]
Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems.
This paper addresses two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one.
Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.
arXiv Detail & Related papers (2022-07-22T10:05:36Z)
- DH-GAN: A Physics-driven Untrained Generative Adversarial Network for 3D Microscopic Imaging using Digital Holography [3.4635026053111484]
Digital holography is a 3D imaging technique in which a laser beam with a plane wavefront illuminates an object and the intensity of the diffracted wavefront, called a hologram, is measured.
Recently, deep learning (DL) methods have been used for more accurate holographic processing.
We propose a new DL architecture based on generative adversarial networks that uses a discriminative network for realizing a semantic measure for reconstruction quality.
arXiv Detail & Related papers (2022-05-25T17:13:45Z)
- Advantage of Machine Learning over Maximum Likelihood in Limited-Angle Low-Photon X-Ray Tomography [0.0]
We introduce deep neural networks to determine and apply a prior distribution in the reconstruction process.
Our neural networks learn the prior directly from synthetic training samples.
We demonstrate that, when the projection angles and photon budgets are limited, the priors from our deep generative models can dramatically improve the integrated-circuit (IC) reconstruction quality.
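The role a prior plays in limited-angle reconstruction can be shown in miniature: when measurements underdetermine the unknown, adding a prior term to the objective selects a plausible solution. In the hypothetical sketch below, a hand-crafted smoothness penalty stands in for the learned deep-generative prior, and the "projections" are simple pairwise sums; none of this is the paper's actual model.

```python
# Toy sketch: reconstruction as data-fidelity + prior, where the prior
# resolves ambiguity the limited measurements leave behind.

def data_cost(x, y):
    # hypothetical limited-angle measurement: only sums of adjacent
    # voxels are observed, so the data term alone is underdetermined
    proj = [x[i] + x[i + 1] for i in range(len(x) - 1)]
    return sum((p - yi) ** 2 for p, yi in zip(proj, y))

def prior_cost(x):
    # smoothness penalty standing in for a learned prior over plausible samples
    return sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))

def total_cost(x, y, lam=0.1):
    return data_cost(x, y) + lam * prior_cost(x)

def grad(x, y, lam=0.1, eps=1e-6):
    # finite-difference gradient of the regularized objective
    base = total_cost(x, y, lam)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        g.append((total_cost(xp, y, lam) - base) / eps)
    return g

truth = [0.0, 1.0, 1.0, 0.0]
y = [truth[i] + truth[i + 1] for i in range(len(truth) - 1)]  # observed sums

x = [0.0] * len(truth)
for _ in range(2000):
    g = grad(x, y)
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]
```

The data term alone cannot distinguish solutions that differ by an alternating pattern; the prior term breaks that tie, which is the mechanism the entry describes (with a far more expressive learned prior).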
arXiv Detail & Related papers (2021-11-15T16:24:12Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Real-time 3D Nanoscale Coherent Imaging via Physics-aware Deep Learning [0.7664249650622356]
We introduce 3D-CDI-NN, a deep convolutional neural network and differential programming framework trained to predict 3D structure and strain.
Our networks are designed to be "physics-aware" in multiple aspects.
Our integrated machine learning and differential programming solution is broadly applicable across inverse problems in other application areas.
arXiv Detail & Related papers (2020-06-16T18:35:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.