High-Quality RGB-D Reconstruction via Multi-View Uncalibrated
Photometric Stereo and Gradient-SDF
- URL: http://arxiv.org/abs/2210.12202v1
- Date: Fri, 21 Oct 2022 19:09:08 GMT
- Title: High-Quality RGB-D Reconstruction via Multi-View Uncalibrated
Photometric Stereo and Gradient-SDF
- Authors: Lu Sang and Bjoern Haefner and Xingxing Zuo and Daniel Cremers
- Abstract summary: We present a novel multi-view RGB-D based reconstruction method that tackles camera pose, lighting, albedo, and surface normal estimation.
The proposed method formulates the image rendering process using physically-based models and optimizes the surface's volumetric quantities on the actual surface.
- Score: 48.29050063823478
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fine-detailed reconstructions are in high demand in many applications.
However, most of the existing RGB-D reconstruction methods rely on
pre-calculated accurate camera poses to recover the detailed surface geometry,
where the representation of a surface needs to be adapted when optimizing
different quantities. In this paper, we present a novel multi-view RGB-D based
reconstruction method that tackles camera pose, lighting, albedo, and surface
normal estimation via the utilization of a gradient signed distance field
(gradient-SDF). The proposed method formulates the image rendering process
using physically-based models and optimizes the surface's quantities
on the actual surface using its volumetric representation, as opposed to other
works which estimate surface quantities only near the actual surface. To
validate our method, we investigate two physically-based image formation models
for natural light and point light source applications. The experimental results
on synthetic and real-world datasets demonstrate that the proposed method can
recover high-quality geometry of the surface more faithfully than the
state-of-the-art and further improves the accuracy of estimated camera poses.
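The point-light image formation model referenced in the abstract can be illustrated with a minimal Lambertian sketch: intensity is albedo times the clamped cosine between the normal and the light direction, attenuated by the squared distance to the point light. This is a generic sketch for illustration only, not the paper's exact formulation; all variable names are our own.

```python
import numpy as np

def render_point_light(albedo, normals, points, light_pos, intensity=1.0):
    """Lambertian point-light shading: I = rho * max(n . l, 0) * phi / r^2.

    albedo:    (N,) surface albedo rho at each surface point
    normals:   (N, 3) unit surface normals n
    points:    (N, 3) 3D surface points
    light_pos: (3,) position of the point light source
    """
    to_light = light_pos - points                     # vectors from surface to light
    r2 = np.sum(to_light**2, axis=1)                  # squared distance (attenuation)
    l = to_light / np.sqrt(r2)[:, None]               # unit light directions
    shading = np.clip(np.sum(normals * l, axis=1), 0.0, None)  # clamped n . l
    return albedo * shading * intensity / r2

# One surface point directly below the light: n = l = (0, 0, 1), r = 2.
I = render_point_light(np.array([0.5]),
                       np.array([[0.0, 0.0, 1.0]]),
                       np.array([[0.0, 0.0, 0.0]]),
                       np.array([0.0, 0.0, 2.0]))
# I[0] = 0.5 * 1 * 1 / 4 = 0.125
```

For the natural-light setting, the shading term would instead be expressed with a spherical-harmonics lighting model rather than a single point source.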
Related papers
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image [40.03212588672639]
ANIM is a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy.
Our model learns geometric details from both pixel-aligned and voxel-aligned features to leverage depth information.
Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point cloud or RGB-D data as input.
arXiv Detail & Related papers (2024-03-15T14:45:38Z)
- NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion [56.98287481620215]
We present a novel method for 3D surface reconstruction from multiple images where only a part of the object of interest is captured.
Our approach builds on two recent developments: surface reconstruction using neural radiance fields for the reconstruction of the visible parts of the surface, and guidance of pre-trained 2D diffusion models in the form of Score Distillation Sampling (SDS) to complete the shape in unobserved regions in a plausible manner.
arXiv Detail & Related papers (2023-12-07T19:30:55Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields, building on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- Multi-View Neural Surface Reconstruction with Structured Light [7.709526244898887]
Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision.
We introduce active sensing with structured light (SL) into multi-view 3D object reconstruction based on DR to learn the unknown geometry and appearance of arbitrary scenes and camera poses.
Our method realizes high reconstruction accuracy in the textureless region and reduces efforts for camera pose calibration.
arXiv Detail & Related papers (2022-11-22T03:10:46Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Learning Signed Distance Field for Multi-view Surface Reconstruction [24.090786783370195]
We introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency.
We apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance respectively.
Our method is able to improve the robustness of geometry estimation and support reconstruction of complex scene topologies.
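A signed distance field, as used in this paper and in the gradient-SDF of the main work, represents geometry as a function that is negative inside the surface, positive outside, and zero on the surface itself; its normalized gradient gives the surface normal. A minimal sketch with an analytic sphere SDF and a finite-difference gradient (illustrative only; function names are our own):

```python
import numpy as np

def sdf_sphere(p, center, radius):
    """Signed distance to a sphere: negative inside, zero on the surface."""
    return np.linalg.norm(p - center) - radius

def sdf_normal(sdf, p, eps=1e-5):
    """Surface normal as the normalized SDF gradient (central differences)."""
    grad = np.array([
        (sdf(p + eps * e) - sdf(p - eps * e)) / (2 * eps)
        for e in np.eye(3)                       # perturb along x, y, z axes
    ])
    return grad / np.linalg.norm(grad)

f = lambda p: sdf_sphere(p, np.array([0.0, 0.0, 0.0]), 1.0)
d = f(np.array([2.0, 0.0, 0.0]))                 # distance 1.0 outside the sphere
n = sdf_normal(f, np.array([2.0, 0.0, 0.0]))     # outward normal along +x
```

A gradient-SDF additionally stores the gradient explicitly per voxel, so normals are available without finite differencing during optimization.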
arXiv Detail & Related papers (2021-08-23T06:23:50Z)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z)
- Photometric Multi-View Mesh Refinement for High-Resolution Satellite Images [24.245977127434212]
State-of-the-art reconstruction methods typically generate 2.5D elevation data.
We present an approach to recover full 3D surface meshes from multi-view satellite imagery.
arXiv Detail & Related papers (2020-05-10T20:37:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.