Differentiable Inverse Rendering with Interpretable Basis BRDFs
- URL: http://arxiv.org/abs/2411.17994v2
- Date: Sun, 01 Dec 2024 22:35:05 GMT
- Title: Differentiable Inverse Rendering with Interpretable Basis BRDFs
- Authors: Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek
- Abstract summary: Inverse rendering seeks to reconstruct both geometry and spatially varying BRDFs from captured images.
In this paper, we introduce a differentiable inverse rendering method that produces interpretable basis BRDFs.
- Abstract: Inverse rendering seeks to reconstruct both geometry and spatially varying BRDFs (SVBRDFs) from captured images. To address the inherent ill-posedness of inverse rendering, basis BRDF representations are commonly used, modeling SVBRDFs as spatially varying blends of a set of basis BRDFs. However, existing methods often yield basis BRDFs that lack intuitive separation and have limited scalability to scenes of varying complexity. In this paper, we introduce a differentiable inverse rendering method that produces interpretable basis BRDFs. Our approach models a scene using 2D Gaussians, where the reflectance of each Gaussian is defined by a weighted blend of basis BRDFs. We efficiently render an image from the 2D Gaussians and basis BRDFs using differentiable rasterization and impose a rendering loss with the input images. During this analysis-by-synthesis optimization process of differentiable inverse rendering, we dynamically adjust the number of basis BRDFs to fit the target scene while encouraging sparsity in the basis weights. This ensures that the reflectance of each Gaussian is represented by only a few basis BRDFs. This approach enables the reconstruction of accurate geometry and interpretable basis BRDFs that are spatially separated. Consequently, the resulting scene representation, comprising basis BRDFs and 2D Gaussians, supports physically-based novel-view relighting and intuitive scene editing.
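The core idea in the abstract — representing each Gaussian's reflectance as a weighted blend of a shared bank of basis BRDFs, with a sparsity penalty so each Gaussian uses only a few bases — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the parameter shapes, and the use of an entropy-style penalty for sparsity are assumptions for illustration.

```python
import numpy as np

def blend_brdf_params(weights, basis_params):
    """Blend per-Gaussian basis weights with a shared bank of basis BRDFs.

    weights:      (N, K) non-negative blend weights, one row per Gaussian,
                  each row summing to 1 over the K basis BRDFs.
    basis_params: (K, P) parameter vectors of the K basis BRDFs
                  (e.g. albedo, roughness), P parameters each.
    Returns:      (N, P) blended BRDF parameters, one row per Gaussian.
    """
    return weights @ basis_params

def sparsity_penalty(weights, eps=1e-8):
    """Entropy-style penalty (an illustrative assumption, not the paper's
    exact loss) encouraging each Gaussian's weight row to concentrate on
    only a few basis BRDFs; it is ~0 for one-hot rows and maximal for
    uniform rows."""
    w = weights / (weights.sum(axis=1, keepdims=True) + eps)
    return float(-(w * np.log(w + eps)).sum(axis=1).mean())
```

A one-hot weight row reproduces a single basis BRDF exactly and incurs essentially no penalty, while a uniform row is penalized, which is the behavior the abstract describes: each Gaussian's reflectance ends up represented by only a few, spatially separated basis BRDFs.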
Related papers
- Differentiable Point-based Inverse Rendering [9.88708409803907]
DPIR is an analysis-by-synthesis method that processes images captured under diverse illuminations to estimate shape and spatially-varying BRDF.
We devise a hybrid point-volumetric representation for geometry and a regularized basis-BRDF representation for reflectance.
Our evaluations demonstrate that DPIR outperforms prior works in terms of reconstruction accuracy, computational efficiency, and memory footprint.
arXiv Detail & Related papers (2023-12-05T04:13:31Z) - Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing [21.498078188364566]
We present a novel differentiable point-based rendering framework to achieve photo-realistic relighting.
The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline enabling editing, tracing, and relighting.
arXiv Detail & Related papers (2023-11-27T18:07:58Z) - Towards Real-World Burst Image Super-Resolution: Benchmark and Method [93.73429028287038]
In this paper, we establish a large-scale real-world burst super-resolution dataset, i.e., RealBSR, to explore the faithful reconstruction of image details from multiple frames.
We also introduce a Federated Burst Affinity network (FBAnet) to investigate non-trivial pixel-wise displacement among images under real-world image degradation.
arXiv Detail & Related papers (2023-09-09T14:11:37Z) - Differentiable Rendering of Neural SDFs through Reparameterization [32.47993049026182]
We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDFs.
Our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for discontinuities.
Our differentiable renderer can be used to optimize neural shapes from multi-view images and produces comparable 3D reconstructions.
arXiv Detail & Related papers (2022-06-10T20:30:26Z) - RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation [110.4255414234771]
Existing solutions require massive training data or lack generalizability to unknown rendering configurations.
We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem.
Our approach achieves significantly lower reconstruction errors and has better generalizability among unknown rendering configurations.
arXiv Detail & Related papers (2022-05-11T17:59:51Z) - Neural BRDFs: Representation and Operations [25.94375378662899]
Bidirectional reflectance distribution functions (BRDFs) are pervasively used in computer graphics to produce realistic physically-based appearance.
We present a form of "Neural BRDF algebra" that addresses both the representation of BRDFs and operations on them.
arXiv Detail & Related papers (2021-11-06T03:50:02Z) - PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting [60.75436852495868]
We present PhySG, an inverse rendering pipeline that reconstructs geometry, materials, and illumination from scratch from RGB input images.
We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination.
arXiv Detail & Related papers (2021-04-01T17:59:02Z) - Generative Modelling of BRDF Textures from Flash Images [50.660026124025265]
We learn a latent space that enables easy capture, semantic editing, and consistent, efficient reproduction of visual material appearance.
In a second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters.
arXiv Detail & Related papers (2021-02-23T18:45:18Z) - Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z) - Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.