Adaptive Joint Optimization for 3D Reconstruction with Differentiable
Rendering
- URL: http://arxiv.org/abs/2208.07003v1
- Date: Mon, 15 Aug 2022 04:32:41 GMT
- Title: Adaptive Joint Optimization for 3D Reconstruction with Differentiable
Rendering
- Authors: Jingbo Zhang, Ziyu Wan, Jing Liao
- Abstract summary: Given an imperfect reconstructed 3D model, most previous methods have focused on the refinement of either geometry, texture, or camera pose.
We propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework.
Using differentiable rendering, an image-level adversarial loss is applied to further improve the 3D model, making it more photorealistic.
- Score: 22.2095090385119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to inevitable noise introduced during scanning and quantization,
3D reconstruction via RGB-D sensors suffers from errors in both geometry and
texture, leading to artifacts such as camera drifting, mesh distortion, texture
ghosting, and blurriness. Given an imperfect reconstructed 3D model, most
previous methods have focused on refining only geometry, texture, or camera
pose, while previous joint optimization methods have used different
optimization schemes and objectives for each component, forming a complicated
system. In this paper, we propose a novel optimization approach based on
differentiable rendering, which integrates the optimization of camera pose,
geometry, and texture into a unified framework by enforcing consistency between
the rendered results and the corresponding RGB-D inputs. Based on this unified
framework, we introduce a joint optimization approach that fully exploits the
inter-relationships between geometry, texture, and camera pose, and describe an
adaptive interleaving strategy to improve optimization stability and efficiency.
Using differentiable rendering, an image-level adversarial loss is applied to
further improve the 3D model, making it more photorealistic. Experiments on
synthetic and real data with quantitative and qualitative evaluation
demonstrate the superiority of our approach in recovering both fine-scale
geometry and high-fidelity texture.
Related papers
- Visual SLAM with 3D Gaussian Primitives and Depth Priors Enabling Novel View Synthesis [11.236094544193605]
Conventional geometry-based SLAM systems lack dense 3D reconstruction capabilities.
We propose a real-time RGB-D SLAM system that incorporates a novel view synthesis technique, 3D Gaussian Splatting.
arXiv Detail & Related papers (2024-08-10T21:23:08Z)
- Improving Robustness for Joint Optimization of Camera Poses and Decomposed Low-Rank Tensorial Radiance Fields [26.4340697184666]
We propose an algorithm that allows joint refinement of camera pose and scene geometry represented by a decomposed low-rank tensor.
We also propose smoothed 2D supervision, randomly scaled kernel parameters, and an edge-guided loss mask.
arXiv Detail & Related papers (2024-02-20T18:59:02Z)
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- Neural Lens Modeling [50.57409162437732]
NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
arXiv Detail & Related papers (2023-04-10T20:09:17Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- Multi-View Neural Surface Reconstruction with Structured Light [7.709526244898887]
Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision.
We introduce active sensing with structured light (SL) into multi-view 3D object reconstruction based on DR to learn the unknown geometry and appearance of arbitrary scenes and camera poses.
Our method achieves high reconstruction accuracy in textureless regions and reduces the effort required for camera pose calibration.
arXiv Detail & Related papers (2022-11-22T03:10:46Z)
- Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings.
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
arXiv Detail & Related papers (2021-10-18T08:56:23Z)
- Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo Rendering [20.68222611798537]
We introduce a new analysis-by-synthesis technique capable of producing high-quality reconstructions.
Unlike most previous methods that handle geometry and reflectance largely separately, our method unifies the optimization of both.
To obtain physically accurate gradient estimates, we develop a new GPU-based Monte Carlo differentiable rendering theory.
arXiv Detail & Related papers (2021-03-28T19:44:05Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.