DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer
- URL: http://arxiv.org/abs/2111.00140v1
- Date: Sat, 30 Oct 2021 01:59:39 GMT
- Title: DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer
- Authors: Wenzheng Chen and Joey Litalien and Jun Gao and Zian Wang and Clement
Fuji Tsang and Sameh Khamis and Or Litany and Sanja Fidler
- Abstract summary: We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports photorealistic effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive shading model.
- Score: 78.91753256634453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the challenging problem of predicting intrinsic object properties
from a single image by exploiting differentiable renderers. Many previous
learning-based approaches for inverse graphics adopt rasterization-based
renderers and assume naive lighting and material models, which often fail to
account for non-Lambertian, specular reflections commonly observed in the wild.
In this work, we propose DIBR++, a hybrid differentiable renderer which
supports these photorealistic effects by combining rasterization and
ray-tracing, taking advantage of their respective strengths -- speed and
realism. Our renderer incorporates environmental lighting and spatially-varying
material models to efficiently approximate light transport, either through
direct estimation or via spherical basis functions. Compared to more advanced
physics-based differentiable renderers leveraging path tracing, DIBR++ is
highly performant due to its compact and expressive shading model, which
enables easy integration with learning frameworks for geometry, reflectance and
lighting prediction from a single image without requiring any ground-truth. We
experimentally demonstrate that our approach achieves superior material and
lighting disentanglement on synthetic and real data compared to existing
rasterization-based approaches and showcase several artistic applications
including material editing and relighting.
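The abstract mentions approximating light transport with spherical basis functions. As a rough, hypothetical illustration of why such a compact lighting model pairs well with gradient-based learning, the PyTorch sketch below shades Lambertian surfaces under a spherical-Gaussian (SG) environment. The function names and the crude evaluate-at-the-normal diffuse approximation are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: differentiable shading under a spherical-Gaussian
# (SG) environment light. Illustration only -- not the DIBR++ source code.
import torch
import torch.nn.functional as F

def sg_eval(directions, lobe_axes, sharpness, amplitudes):
    """Evaluate a sum of K SGs, G(v) = mu * exp(lambda * (v . xi - 1)).

    directions: (N, 3) unit query directions
    lobe_axes:  (K, 3) unit lobe axes xi
    sharpness:  (K,)   lobe sharpness lambda
    amplitudes: (K, 3) RGB lobe amplitudes mu
    returns:    (N, 3) summed RGB radiance
    """
    cos = directions @ lobe_axes.T                 # (N, K)
    lobes = torch.exp(sharpness * (cos - 1.0))     # (N, K), in (0, 1]
    return lobes @ amplitudes                      # (N, 3)

def shade_diffuse(normals, albedo, lobe_axes, sharpness, amplitudes):
    # Crude approximation: evaluate the environment at the normal instead
    # of integrating the SGs against a clamped-cosine lobe in closed form,
    # as a full SG shading model would.
    return albedo * sg_eval(normals, lobe_axes, sharpness, amplitudes)

# Toy usage: the loss backpropagates into the lighting parameters.
normals = F.normalize(torch.randn(4, 3), dim=-1)
axes    = F.normalize(torch.randn(8, 3), dim=-1)
sharp   = torch.full((8,), 5.0, requires_grad=True)
amps    = torch.rand(8, 3, requires_grad=True)
albedo  = torch.rand(4, 3)
shade_diffuse(normals, albedo, axes, sharp, amps).sum().backward()
```

Because every operation here is a smooth tensor op, gradients reach the lobe axes, sharpnesses, amplitudes, and albedo, which is what allows a network to regress lighting and material parameters through the renderer.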
Related papers
- MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling [17.435649250309904]
We present MIRReS, a novel two-stage inverse rendering framework.
Our method extracts explicit geometry (a triangular mesh) in stage one and introduces a more realistic, physically-based inverse rendering model.
Our method effectively estimates indirect illumination, including self-shadowing and internal reflections.
arXiv Detail & Related papers (2024-06-24T07:00:57Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- NDJIR: Neural Direct and Joint Inverse Rendering for Geometry, Lights, and Materials of Real Object [5.665283675533071]
We propose neural direct and joint inverse rendering, NDJIR.
Our proposed method decomposes real objects semantically well in a photogrammetric setting.
arXiv Detail & Related papers (2023-02-02T13:21:03Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing [27.96634370355241]
This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling.
The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials (a minimal importance-sampling sketch appears after this list).
arXiv Detail & Related papers (2022-11-06T03:34:26Z)
- PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting [60.75436852495868]
We present PhySG, an inverse rendering pipeline that reconstructs geometry, materials, and illumination from scratch from RGB input images.
We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination.
arXiv Detail & Related papers (2021-04-01T17:59:02Z)
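Several entries above lean on Monte Carlo integration with importance sampling, e.g. MIRReS and the differentiable Monte Carlo raytracing framework. As a generic sketch of that idea, assuming nothing about any specific paper's code, the snippet below estimates diffuse outgoing radiance with cosine-weighted hemisphere sampling, where the cosine and pdf terms cancel analytically.

```python
# Generic sketch of importance-sampled Monte Carlo shading -- not taken
# from any of the papers listed above.
import math
import torch

def cosine_sample_hemisphere(n):
    """Draw n directions with pdf(w) = cos(theta) / pi around the +z normal."""
    u1, u2 = torch.rand(n), torch.rand(n)
    r, phi = torch.sqrt(u1), 2.0 * math.pi * u2
    z = torch.sqrt(torch.clamp(1.0 - u1, min=0.0))
    return torch.stack([r * torch.cos(phi), r * torch.sin(phi), z], dim=-1)

def diffuse_outgoing_radiance(albedo, env_radiance, n_samples=4096):
    """Estimate Lo = integral of L(w) * (albedo / pi) * cos(theta) dw.

    With cosine-weighted samples, (albedo / pi) * cos / pdf == albedo,
    so the estimator is simply albedo * mean(L(w_i)).
    """
    dirs = cosine_sample_hemisphere(n_samples)         # (n_samples, 3)
    return albedo * env_radiance(dirs).mean(dim=0)     # (3,)

# Toy usage: under a constant white environment L(w) = 1, the analytic
# answer is exactly the albedo, and the estimate converges to it.
albedo = torch.tensor([0.8, 0.4, 0.2])
estimate = diffuse_outgoing_radiance(albedo, lambda d: torch.ones(len(d), 3))
```

Importance sampling of this kind is what keeps the variance of such estimators low enough for the gradient-based optimization these inverse rendering pipelines perform.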