Omni-Line-of-Sight Imaging for Holistic Shape Reconstruction
- URL: http://arxiv.org/abs/2304.10780v1
- Date: Fri, 21 Apr 2023 07:12:41 GMT
- Title: Omni-Line-of-Sight Imaging for Holistic Shape Reconstruction
- Authors: Binbin Huang, Xingyue Peng, Siyuan Shen, Suan Xia, Ruiqian Li, Yanhua
Yu, Yuehan Wang, Shenghua Gao, Wenzheng Chen, Shiying Li, Jingyi Yu
- Abstract summary: We introduce Omni-LOS, a neural computational imaging method for conducting holistic shape reconstruction (HSR) of complex objects.
Our method enables new capabilities to reconstruct near-$360^\circ$ surrounding geometry of an object from a single scan spot.
- Score: 45.955894009809185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Omni-LOS, a neural computational imaging method for conducting
holistic shape reconstruction (HSR) of complex objects utilizing a
Single-Photon Avalanche Diode (SPAD)-based time-of-flight sensor. As
illustrated in Fig. 1, our method enables new capabilities to reconstruct
near-$360^\circ$ surrounding geometry of an object from a single scan spot. In
such a scenario, traditional line-of-sight (LOS) imaging methods only see the
front part of the object and typically fail to recover the occluded back
regions. Inspired by recent advances in non-line-of-sight (NLOS) imaging
techniques, which have demonstrated great power to reconstruct occluded objects,
Omni-LOS marries LOS and NLOS, leveraging their complementary advantages to
jointly recover the holistic shape of the object from a single scan position.
The core of our method is to place the object near diffuse walls and augment
the LOS scan in the front view with NLOS scans from the surrounding walls,
which serve as virtual "mirrors" that trap light toward the object. Instead of
separately recovering the LOS and NLOS signals, we adopt an
implicit neural network to represent the object, analogous to NeRF and NeTF.
While transients are measured along straight rays in LOS but over spherical
wavefronts in NLOS, we derive differentiable ray propagation models that
simultaneously model both types of transient measurements, so that the NLOS
reconstruction also takes the direct LOS measurements into account and vice
versa. We further develop a proof-of-concept Omni-LOS hardware prototype for
real-world validation. Comprehensive experiments on various wall settings
demonstrate that Omni-LOS successfully resolves shape ambiguities caused by
occlusions, achieves high-fidelity 3D scan quality, and recovers objects of
varying scale and complexity.
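To make the joint formulation concrete, here is a minimal, illustrative PyTorch sketch (not the authors' released implementation) of rendering both transient types from one shared implicit field. It assumes a NeTF-style non-negative density network, a confocal NLOS model, arbitrary units with c = 1, and time bins measured relative to the wall point; all names, shapes, and hyperparameters are placeholders.
```python
# Minimal sketch, assuming a NeTF-style density field and a confocal NLOS model.
# Placeholder data and units; this is NOT the authors' implementation.
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Maps a 3D point to a non-negative volume density."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, x):                     # x: (..., 3)
        return self.net(x).squeeze(-1)        # (...,)

def render_los_transient(field, origin, direction, t_bins, c=1.0):
    """LOS: photons in time bin t come from depth r = c*t/2 along a straight ray."""
    r = c * t_bins / 2.0                      # (T,)
    pts = origin + r[:, None] * direction     # (T, 3)
    return field(pts)                         # (T,)

def render_nlos_transient(field, wall_pt, t_bins, n_dirs=256, c=1.0):
    """NLOS (confocal): photons in time bin t come from a spherical wavefront of
    radius r = c*t/2 around the scanned wall point (wall travel time removed)."""
    r = c * t_bins / 2.0                                 # (T,)
    dirs = torch.randn(n_dirs, 3)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)        # Monte Carlo sphere samples
    pts = wall_pt + r[:, None, None] * dirs              # (T, n_dirs, 3)
    return field(pts).mean(dim=-1)                       # (T,)

# Joint objective: both measurement types constrain the same implicit field.
field = ImplicitField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
t_bins = torch.linspace(0.1, 4.0, 128)
meas_los = torch.rand(128)                    # placeholder transient measurements
meas_nlos = torch.rand(128)
origin, direction = torch.zeros(3), torch.tensor([0.0, 0.0, 1.0])
wall_pt = torch.tensor([1.0, 0.0, 1.0])

opt.zero_grad()
pred_los = render_los_transient(field, origin, direction, t_bins)
pred_nlos = render_nlos_transient(field, wall_pt, t_bins)
loss = ((pred_los - meas_los) ** 2).mean() + ((pred_nlos - meas_nlos) ** 2).mean()
loss.backward()
opt.step()
```
The paper's physically based propagation models would replace these simplified radius-only samplers; the point illustrated here is only that gradients from both LOS and NLOS residuals flow into the same representation.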
Related papers
- Towards 3D Vision with Low-Cost Single-Photon Cameras [24.711165102559438]
We present a method for reconstructing the 3D shape of arbitrary Lambertian objects from measurements by miniature, energy-efficient, low-cost single-photon cameras.
Our work draws a connection between image-based modeling and active range scanning and is a step towards 3D vision with single-photon cameras.
arXiv Detail & Related papers (2024-03-26T15:40:05Z)
- Learning Photometric Feature Transform for Free-form Object Scan [44.21941847090453]
We propose a novel framework to automatically learn to aggregate and transform photometric measurements from unstructured views.
We build a system to reconstruct both the geometry and anisotropic reflectance of a variety of challenging objects from hand-held scans.
Results are validated against reconstructions from a professional 3D scanner and photographs, and compare favorably with state-of-the-art techniques.
arXiv Detail & Related papers (2023-08-07T11:34:27Z)
- Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The input are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Real-time Non-line-of-sight Imaging with Two-step Deep Remapping [0.0]
Non-line-of-sight (NLOS) imaging takes indirect light into account.
Most solutions employ a transient scanning process, followed by a back-projection-based algorithm to reconstruct the NLOS scene (a minimal back-projection sketch follows this list).
Here we propose a new NLOS solution to address the above defects, with innovations on both detection equipment and reconstruction algorithm.
arXiv Detail & Related papers (2021-01-26T00:08:54Z)
- Polka Lines: Learning Structured Illumination and Reconstruction for Active Stereo [52.68109922159688]
We introduce a novel differentiable image formation model for active stereo, relying on both wave and geometric optics, and a novel trinocular reconstruction network.
The jointly optimized pattern, which we dub "Polka Lines," together with the reconstruction network, achieve state-of-the-art active-stereo depth estimates across imaging conditions.
arXiv Detail & Related papers (2020-11-26T04:02:43Z)
- Efficient Non-Line-of-Sight Imaging from Transient Sinograms [36.154873075911404]
Non-line-of-sight (NLOS) imaging techniques use light that diffusely reflects off of visible surfaces (e.g., walls) to see around corners.
One approach involves using pulsed lasers and ultrafast sensors to measure the travel time of multiply scattered light.
We propose a more efficient form of NLOS scanning that reduces both acquisition times and computational requirements.
arXiv Detail & Related papers (2020-08-06T17:50:50Z)
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
- Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z)
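Several entries above (notably the two-step deep remapping and transient-sinogram papers) position themselves against the classic back-projection baseline for confocal NLOS measurements, in which a photon detected at a wall scan point in time bin t votes for all hidden-volume voxels lying on the spherical shell of radius c*t/2 around that point. Below is a minimal, unoptimized NumPy sketch of that baseline; the array shapes, tolerance, and unit conventions are assumptions for illustration only.
```python
# Minimal confocal NLOS back-projection baseline (illustrative only).
import numpy as np

def backproject(transients, wall_points, t_bins, voxels, c=1.0, tol=0.05):
    """
    transients  : (S, T) photon counts per wall scan point and time bin
    wall_points : (S, 3) scanned positions on the visible wall
    t_bins      : (T,)   time bins, measured relative to each wall point
    voxels      : (V, 3) sample positions inside the hidden volume
    Returns a (V,) accumulation volume; peaks indicate hidden surfaces.
    """
    volume = np.zeros(len(voxels))
    radii = c * t_bins / 2.0                       # round trip: wall -> object -> wall
    for s, xw in enumerate(wall_points):
        dist = np.linalg.norm(voxels - xw, axis=1) # (V,) distances to this scan point
        for t, r in enumerate(radii):
            shell = np.abs(dist - r) < tol         # voxels on this spherical shell
            volume[shell] += transients[s, t]      # smear the measured counts
    return volume

# Toy usage with random placeholder data (shapes only, not physical units).
S, T, V = 8, 64, 1000
vol = backproject(np.random.rand(S, T), np.random.rand(S, 3),
                  np.linspace(0.1, 2.0, T), np.random.rand(V, 3))
```
The learned and scan-efficient methods listed above aim to replace or accelerate this brute-force accumulation, which is slow and produces blurred reconstructions.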