ActiveNeuS: Neural Signed Distance Fields for Active Stereo
- URL: http://arxiv.org/abs/2410.15376v1
- Date: Sun, 20 Oct 2024 12:20:47 GMT
- Title: ActiveNeuS: Neural Signed Distance Fields for Active Stereo
- Authors: Kazuto Ichimaru, Takaki Ikeda, Diego Thomas, Takafumi Iwaguchi, Hiroshi Kawasaki
- Abstract summary: We propose a Neural Signed Distance Field for active stereo systems to enable implicit correspondence search and triangulation in generalized Structured Light.
With our technique, textureless surfaces, or surfaces rendered equivalent by low-light conditions, are successfully reconstructed even from a small number of captured images.
Experiments confirm that the proposed method achieves state-of-the-art reconstruction quality under such severe conditions.
- Score: 1.2470540422791463
- License:
- Abstract: 3D shape reconstruction in extreme environments, such as low-illumination or scattering conditions, has been an open and intensively researched problem. Active stereo is one potential solution for such environments owing to its robustness and high accuracy. However, active stereo systems usually require specialized system configurations and complicated algorithms, which narrows their applicability. In this paper, we propose a Neural Signed Distance Field for active stereo systems to enable implicit correspondence search and triangulation in generalized Structured Light. With our technique, textureless surfaces, or surfaces rendered equivalent by low-light conditions, are successfully reconstructed even from a small number of captured images. Experiments confirm that the proposed method achieves state-of-the-art reconstruction quality under such severe conditions. We also demonstrate that the proposed method works in an underwater scenario.
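The core idea the abstract relies on, representing a surface as a signed distance field and recovering depth implicitly by marching rays against it rather than by explicit stereo matching, can be illustrated with a minimal sphere-tracing sketch. This is not the paper's method (the paper uses a learned neural SDF with structured-light supervision); the analytic SDF, and the names `sphere_sdf` and `sphere_trace`, are illustrative assumptions only.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 2.0), radius=0.5):
    """Signed distance from point p to a sphere (negative inside).
    A learned neural SDF would replace this analytic function."""
    d = [p[i] - center[i] for i in range(3)]
    return math.sqrt(sum(x * x for x in d)) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=10.0):
    """March along a camera ray, stepping by the SDF value each time.
    The first zero crossing gives the ray's depth, so triangulation
    happens implicitly instead of via explicit correspondence search."""
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:  # reached the surface
            return t
        t += d
        if t > max_dist:
            break
    return None  # ray missed the surface

# A ray straight down the optical axis hits the sphere's front face at depth 1.5.
depth = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

In an active stereo setting, the projected pattern observed in the captured images would supervise the SDF (and an appearance field) through a differentiable renderer built on this kind of ray marching.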
Related papers
- Stereo-Depth Fusion through Virtual Pattern Projection [37.519762078762575]
This paper presents a novel general-purpose stereo and depth data fusion paradigm.
It mimics the active stereo principle by replacing the unreliable physical pattern projector with a depth sensor.
It works by projecting virtual patterns consistent with the scene geometry onto the left and right images acquired by a conventional stereo camera.
arXiv Detail & Related papers (2024-06-06T17:59:58Z)
- Active Stereo Without Pattern Projector [40.25292494550211]
This paper proposes a novel framework integrating the principles of active stereo in standard passive camera systems without a physical pattern projector.
We virtually project a pattern over the left and right images according to the sparse measurements obtained from a depth sensor.
arXiv Detail & Related papers (2023-09-21T17:59:56Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Self-Supervised Depth Completion for Active Stereo [55.79929735390945]
Active stereo systems are widely used in the robotics industry due to their low cost and high quality depth maps.
However, these depth sensors suffer from stereo artefacts and do not provide dense depth estimates.
We present the first self-supervised depth completion method for active stereo systems that predicts accurate dense depth maps.
arXiv Detail & Related papers (2021-10-07T07:33:52Z)
- ResDepth: A Deep Prior For 3D Reconstruction From High-resolution Satellite Images [28.975837416508142]
We introduce ResDepth, a convolutional neural network that learns such an expressive geometric prior from example data.
In a series of experiments, we find that the proposed method consistently improves stereo DSMs both quantitatively and qualitatively.
We show that the prior encoded in the network weights captures meaningful geometric characteristics of urban design.
arXiv Detail & Related papers (2021-06-15T12:51:28Z) - Uncalibrated Neural Inverse Rendering for Photometric Stereo of General
Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z)
- Polka Lines: Learning Structured Illumination and Reconstruction for Active Stereo [52.68109922159688]
We introduce a novel differentiable image formation model for active stereo, relying on both wave and geometric optics, and a novel trinocular reconstruction network.
The jointly optimized pattern, which we dub "Polka Lines," together with the reconstruction network, achieve state-of-the-art active-stereo depth estimates across imaging conditions.
arXiv Detail & Related papers (2020-11-26T04:02:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.