Differentiable Point-based Inverse Rendering
- URL: http://arxiv.org/abs/2312.02480v2
- Date: Mon, 25 Mar 2024 06:22:09 GMT
- Title: Differentiable Point-based Inverse Rendering
- Authors: Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek
- Abstract summary: DPIR is an analysis-by-synthesis method that processes images captured under diverse illuminations to estimate shape and spatially-varying BRDF.
We devise a hybrid point-volumetric representation for geometry and a regularized basis-BRDF representation for reflectance.
Our evaluations demonstrate that DPIR outperforms prior works in terms of reconstruction accuracy, computational efficiency, and memory footprint.
- Score: 9.88708409803907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present differentiable point-based inverse rendering, DPIR, an analysis-by-synthesis method that processes images captured under diverse illuminations to estimate shape and spatially-varying BRDF. To this end, we adopt point-based rendering, eliminating the need for multiple samplings per ray, typical of volumetric rendering, thus significantly enhancing the speed of inverse rendering. To realize this idea, we devise a hybrid point-volumetric representation for geometry and a regularized basis-BRDF representation for reflectance. The hybrid geometric representation enables fast rendering through point-based splatting while retaining the geometric details and stability inherent to SDF-based representations. The regularized basis-BRDF mitigates the ill-posedness of inverse rendering stemming from limited light-view angular samples. We also propose an efficient shadow detection method using point-based shadow map rendering. Our extensive evaluations demonstrate that DPIR outperforms prior works in terms of reconstruction accuracy, computational efficiency, and memory footprint. Furthermore, our explicit point-based representation and rendering enables intuitive geometry and reflectance editing.
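The regularized basis-BRDF idea in the abstract can be illustrated with a small sketch: spatially-varying reflectance at each surface point is a weighted combination of a few shared basis BRDFs, and the per-point weights are the quantities an inverse renderer would optimize. The lobe shapes, exponents, and weights below are hypothetical stand-ins, not the paper's actual basis.

```python
import numpy as np

def eval_basis_brdfs(n, l, v, num_basis=4):
    """Evaluate a small set of analytic basis BRDF lobes (hypothetical:
    one diffuse lobe plus Blinn-Phong lobes of increasing sharpness)."""
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    n_dot_l = max(np.dot(n, l), 0.0)
    n_dot_h = max(np.dot(n, h), 0.0)
    lobes = [1.0 / np.pi]                        # diffuse basis
    for k in range(1, num_basis):
        lobes.append(n_dot_h ** (10.0 * k))      # specular bases
    return np.array(lobes), n_dot_l

def shade_point(weights, n, l, v, light_rgb):
    """Radiance at a point: the BRDF is a weighted sum of the basis
    lobes; the per-point weights are what inverse rendering recovers."""
    lobes, n_dot_l = eval_basis_brdfs(n, l, v, num_basis=len(weights))
    brdf = float(weights @ lobes)
    return light_rgb * brdf * n_dot_l

# Example: a mostly-diffuse point lit and viewed head-on.
n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.0, 1.0])
weights = np.array([0.8, 0.1, 0.05, 0.05])      # hypothetical weights
rgb = shade_point(weights, n, l, v, light_rgb=np.ones(3))
```

Sharing one compact basis across all points is what regularizes the problem: with limited light-view angular samples per point, each point only needs to explain its observations with a few weights rather than a full BRDF.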
Related papers
- RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering [26.988572852463815]
In this paper, we propose a novel end-to-end relightable neural inverse rendering system.
Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting.
arXiv Detail & Related papers (2024-09-30T09:42:10Z)
- 3D Reconstruction with Fast Dipole Sums [12.865206085308728]
We introduce a method for high-quality 3D reconstruction from multiview images.
We represent implicit geometry and radiance fields as per-point attributes of a dense point cloud.
These queries facilitate the use of ray tracing to efficiently and differentiably render images.
arXiv Detail & Related papers (2024-05-27T03:23:25Z)
- RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction [3.1820300989695833]
This paper introduces a versatile paradigm for integrating multi-view reflectance and normal maps acquired through photometric stereo.
Our approach employs a pixel-wise joint re-parameterization of reflectance and normal, considering them as a vector of radiances rendered under simulated, varying illumination.
It significantly improves the detailed 3D reconstruction of areas with high curvature or low visibility.
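The joint re-parameterization described above can be sketched in its simplest form: a pixel's reflectance and normal are represented not directly, but as the vector of radiances they would produce under a set of simulated directional lights. The Lambertian shading model and the specific light directions below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def radiance_vector(albedo, normal, light_dirs):
    """Render one pixel's (reflectance, normal) pair as a vector of
    Lambertian radiances under simulated directional lights: a
    simplified stand-in for the paper's joint re-parameterization."""
    normal = normal / np.linalg.norm(normal)
    shading = np.clip(light_dirs @ normal, 0.0, None)  # clamp back-facing
    return albedo * shading

# Hypothetical setup: three simulated lights around the z axis.
lights = np.array([[0.0, 0.0, 1.0],
                   [0.7, 0.0, 0.714],
                   [0.0, 0.7, 0.714]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
r = radiance_vector(albedo=0.5, normal=np.array([0.0, 0.0, 1.0]),
                    light_dirs=lights)
```

Optimizing over these simulated radiances, rather than over reflectance and normal separately, couples the two quantities through the rendering model.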
arXiv Detail & Related papers (2023-12-02T19:49:27Z)
- Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing [21.498078188364566]
We present a novel differentiable point-based rendering framework to achieve photo-realistic relighting.
The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline enabling editing, tracing, and relighting.
arXiv Detail & Related papers (2023-11-27T18:07:58Z)
- Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- Differentiable Rendering of Neural SDFs through Reparameterization [32.47993049026182]
We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDFs.
Our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for discontinuities.
Our differentiable renderer can be used to optimize neural shapes from multi-view images and produces comparable 3D reconstructions.
arXiv Detail & Related papers (2022-06-10T20:30:26Z)
- Shape As Points: A Differentiable Poisson Solver [118.12466580918172]
In this paper, we introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR).
The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field.
Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude.
arXiv Detail & Related papers (2021-06-07T09:28:38Z)
- Efficient and Differentiable Shadow Computation for Inverse Problems [64.70468076488419]
Differentiable geometric computation has received increasing interest for image-based inverse problems.
We propose an efficient approach for differentiable visibility and soft shadow computation.
As our formulation is differentiable, it can be used to solve inverse problems such as texture, illumination, rigid pose, and deformation recovery from images.
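For context on the visibility problem this entry addresses, here is the classic hard shadow-map test: a point is lit if it is no farther from the light than the nearest occluder recorded in the shadow map. This is the standard non-differentiable baseline, not the paper's soft, differentiable formulation; the map contents and bias value are hypothetical.

```python
import numpy as np

def shadow_map_visibility(shadow_map, uv, depth_from_light, bias=1e-3):
    """Hard shadow-map test: compare a point's depth from the light
    against the nearest-occluder depth stored at its (u, v) footprint.
    The bias avoids self-shadowing from depth quantization."""
    h, w = shadow_map.shape
    u = min(int(uv[0] * w), w - 1)
    v = min(int(uv[1] * h), h - 1)
    return depth_from_light <= shadow_map[v, u] + bias

# Hypothetical 2x2 shadow map: left half has an occluder at depth 1.0,
# right half sees empty space out to depth 5.0.
sm = np.array([[1.0, 5.0],
               [1.0, 5.0]])
lit = shadow_map_visibility(sm, uv=(0.9, 0.1), depth_from_light=3.0)
shadowed = shadow_map_visibility(sm, uv=(0.1, 0.1), depth_from_light=3.0)
```

The hard step function in this test is exactly what differentiable formulations must smooth or reparameterize so that gradients can flow through visibility.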
arXiv Detail & Related papers (2021-04-01T09:29:05Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
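The "BRDF as a lightweight network" idea above can be sketched as a tiny MLP mapping angular coordinates of a light-view configuration to a reflectance value. The layer sizes, activations, and input parameterization here are illustrative assumptions; the paper's actual architecture and training scheme differ.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyBRDFNet:
    """Minimal MLP sketch of a neural BRDF: maps (light, view) angular
    coordinates to a positive reflectance value. In practice such a
    network is trained per material on measured BRDF samples."""
    def __init__(self, in_dim=4, hidden=16):
        self.w1 = rng.normal(0.0, 0.5, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, angles):
        h = np.tanh(angles @ self.w1 + self.b1)
        return np.exp(h @ self.w2 + self.b2)  # exp keeps output positive

net = TinyBRDFNet()
# Query: (theta_in, phi_in, theta_out, phi_out) in radians; a
# hypothetical angular sample, untrained weights.
value = net(np.array([0.3, 0.0, 0.3, np.pi]))
```

Compared with storing a dense measured BRDF table, such a network is orders of magnitude smaller and can be queried at arbitrary angular coordinates, which is why adaptive angular sampling during training matters.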
arXiv Detail & Related papers (2021-02-11T12:00:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.