Single View Refractive Index Tomography with Neural Fields
- URL: http://arxiv.org/abs/2309.04437v2
- Date: Fri, 1 Dec 2023 21:33:13 GMT
- Title: Single View Refractive Index Tomography with Neural Fields
- Authors: Brandon Zhao, Aviad Levis, Liam Connor, Pratul P. Srinivasan,
Katherine L. Bouman
- Abstract summary: We introduce a method that leverages prior knowledge of light sources scattered throughout the refractive medium to help disambiguate the single-view refractive index tomography problem.
We demonstrate the efficacy of our approach by reconstructing simulated refractive fields, analyze the effects of light source distribution on the recovered field, and test our method on a simulated dark matter mapping problem.
- Score: 16.578244661163513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Refractive Index Tomography is the inverse problem of reconstructing the
continuously-varying 3D refractive index in a scene using 2D projected image
measurements. Although a purely refractive field is not directly visible, it
bends light rays as they travel through space, thus providing a signal for
reconstruction. The effects of such fields appear in many scientific computer
vision settings, ranging from refraction due to transparent cells in microscopy
to the lensing of distant galaxies caused by dark matter in astrophysics.
Reconstructing these fields is particularly difficult due to the complex
nonlinear effects of the refractive field on observed images. Furthermore,
while standard 3D reconstruction and tomography settings typically have access
to observations of the scene from many viewpoints, many refractive index
tomography problem settings only have access to images observed from a single
viewpoint. We introduce a method that leverages prior knowledge of light
sources scattered throughout the refractive medium to help disambiguate the
single-view refractive index tomography problem. We differentiably trace curved
rays through a neural field representation of the refractive field, and
optimize its parameters to best reproduce the observed image. We demonstrate
the efficacy of our approach by reconstructing simulated refractive fields,
analyze the effects of light source distribution on the recovered field, and
test our method on a simulated dark matter mapping problem where we
successfully recover the 3D refractive field caused by a realistic dark matter
distribution.
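The method's core loop — differentiably tracing curved rays through a parameterized refractive field and fitting the field to an observed image — can be sketched compactly. The sketch below is illustrative, not the paper's implementation: it replaces the neural field with a single-parameter Gaussian index blob, uses an Euler-discretized ray equation d/ds(n dx/ds) = ∇n, and recovers the parameter by a brute-force sweep standing in for gradient-based optimization.

```python
import numpy as np

def refractive_field(x, amp, center=np.array([0.5, 0.5]), sigma=0.1):
    """Toy index field: 1 + amp * Gaussian blob (stand-in for a neural field)."""
    return 1.0 + amp * np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))

def grad_n(x, amp, eps=1e-4):
    """Central-difference gradient of the index field."""
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        g[i] = (refractive_field(x + d, amp) - refractive_field(x - d, amp)) / (2 * eps)
    return g

def trace_ray(x0, v0, amp, n_steps=200, ds=0.01):
    """Euler integration of the ray equation: rays bend toward higher index."""
    x, v = x0.copy(), v0.copy()
    for _ in range(n_steps):
        v = v + ds * grad_n(x, amp) / refractive_field(x, amp)
        v = v / np.linalg.norm(v)  # keep unit speed
        x = x + ds * v
    return x

# "Observed" exit point produced by a hypothetical ground-truth field (amp = 0.3).
x0, v0 = np.array([0.0, 0.45]), np.array([1.0, 0.0])
observed = trace_ray(x0, v0, amp=0.3)

# Recover the field parameter by minimizing the squared exit-point error.
amps = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
losses = [np.sum((trace_ray(x0, v0, a) - observed) ** 2) for a in amps]
best_amp = amps[int(np.argmin(losses))]
```

In the paper's actual setting the tracer is differentiated through automatically, so the field's many parameters can be fit by gradient descent rather than a sweep.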
Related papers
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail &amp; Related papers (2024-06-11T09:53:18Z)
- NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating the scene parameters such as shapes, materials, or illuminations from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z)
- Non-line-of-sight imaging in the presence of scattering media using phasor fields [0.7999703756441756]
Non-line-of-sight (NLOS) imaging aims to reconstruct partially or completely occluded scenes.
We investigate current state-of-the-art NLOS imaging methods based on phasor fields to reconstruct scenes submerged in scattering media.
arXiv Detail & Related papers (2023-08-25T13:05:36Z)
- Towards Monocular Shape from Refraction [23.60349429048409]
We show that a simple energy function based on Snell's law enables the reconstruction of an arbitrary refractive surface geometry.
We show that solving for an entire surface at once introduces implicit parameter-free spatial regularization.
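A Snell's-law energy of this kind can be sketched directly: given an observed refracted direction, a candidate surface normal is scored by how badly the refraction it predicts disagrees with the observation. The 2D setup, indices, and tilt angle below are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def refract(i, n, eta=1.0 / 1.5):
    """Vector form of Snell's law: refract unit direction i at a surface with
    unit normal n, relative index eta = n1/n2 (air into glass).
    Assumes no total internal reflection."""
    cos_i = -np.dot(i, n)
    cos_t = np.sqrt(1.0 - eta**2 * (1.0 - cos_i**2))
    return eta * i + (eta * cos_i - cos_t) * n

def normal(phi):
    """Surface normal tilted by angle phi from the upward vertical."""
    return np.array([np.sin(phi), np.cos(phi)])

incoming = np.array([0.0, -1.0])  # ray travelling straight down
true_phi = 0.2                    # hypothetical ground-truth surface tilt
observed = refract(incoming, normal(true_phi))

def energy(phi):
    """Squared mismatch between predicted and observed refracted directions."""
    return np.sum((refract(incoming, normal(phi)) - observed) ** 2)

candidates = np.arange(-0.5, 0.51, 0.05)
best_phi = min(candidates, key=energy)
```

Summing this per-pixel energy over the whole image, as the paper advocates, couples neighboring normals through the shared surface and yields the implicit regularization mentioned above.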
arXiv Detail & Related papers (2023-05-31T11:09:37Z)
- Sampling Neural Radiance Fields for Refractive Objects [8.539183778516795]
In this work, the scene is instead modeled as a heterogeneous volume with a piecewise-constant refractive index, so a ray's path curves wherever it crosses an index boundary.
For novel view synthesis of refractive objects, our NeRF-based framework aims to optimize the radiance fields of bounded volume and boundary from multi-view posed images with refractive object silhouettes.
Given the refractive index, we extend the stratified and hierarchical sampling techniques in NeRF to allow drawing samples along a curved path tracked by the Eikonal equation.
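Extending NeRF's stratified sampling to such rays amounts to sampling in arc length along the bent path rather than along a straight line. A minimal sketch, with a fixed circular arc standing in for a path integrated from the eikonal equation (the arc and bin count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A curved ray given as a dense polyline; here a quarter circle stands in
# for a path traced by integrating the eikonal ray equations.
theta = np.linspace(0.0, np.pi / 2, 256)
path = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Cumulative arc length along the polyline (monotonic, starts at 0).
seglen = np.linalg.norm(np.diff(path, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(seglen)])

def stratified_samples(n_bins):
    """NeRF-style stratified sampling, but in arc length along the curve:
    one uniform draw per bin, then interpolation of a point on the path."""
    edges = np.linspace(0.0, s[-1], n_bins + 1)
    t = edges[:-1] + rng.uniform(size=n_bins) * np.diff(edges)
    x = np.interp(t, s, path[:, 0])
    y = np.interp(t, s, path[:, 1])
    return np.stack([x, y], axis=1)

pts = stratified_samples(16)
```

Hierarchical (coarse-to-fine) sampling extends the same way: the fine pass resamples arc-length positions from the coarse weights before interpolating points on the curved path.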
arXiv Detail & Related papers (2022-11-27T11:43:21Z)
- Edge-preserving Near-light Photometric Stereo with Neural Surfaces [76.50065919656575]
We introduce an analytically differentiable neural surface in near-light photometric stereo to avoid differentiation errors at sharp depth edges.
Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method for detailed shape recovery with edge preservation.
arXiv Detail & Related papers (2022-07-11T04:51:43Z)
- Solving Inverse Problems with NerfGANs [88.24518907451868]
We introduce a novel framework for solving inverse problems using NeRF-style generative models.
We show that naively optimizing the latent space leads to artifacts and poor novel view rendering.
We propose a novel radiance field regularization method to obtain better 3-D surfaces and improved novel views given single view observations.
arXiv Detail & Related papers (2021-12-16T17:56:58Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric rendering.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
- Unsupervised Missing Cone Deep Learning in Optical Diffraction Tomography [25.18730153421617]
We present a novel unsupervised deep learning framework, which learns the probability distribution of missing projection views through an optimal-transport-driven cycleGAN.
Experimental results show that missing cone artifact in ODT can be significantly resolved by the proposed method.
arXiv Detail & Related papers (2021-03-16T12:41:33Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.