Inverse Rendering of Translucent Objects using Physical and Neural
Renderers
- URL: http://arxiv.org/abs/2305.08336v1
- Date: Mon, 15 May 2023 04:03:11 GMT
- Title: Inverse Rendering of Translucent Objects using Physical and Neural
Renderers
- Authors: Chenhao Li, Trung Thanh Ngo, Hajime Nagahara
- Abstract summary: In this work, we propose an inverse rendering model that estimates 3D shape, spatially-varying reflectance, homogeneous scattering parameters, and environment illumination jointly from only a pair of captured images of a translucent object.
Because the two renderers (physically-based and neural) are differentiable, we can compute a reconstruction loss to assist parameter estimation.
We constructed a large-scale synthetic dataset of translucent objects, which consists of 117K scenes.
- Score: 13.706425832518093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose an inverse rendering model that estimates 3D shape,
spatially-varying reflectance, homogeneous subsurface scattering parameters,
and environment illumination jointly from only a pair of captured images of
a translucent object. In order to solve the ambiguity problem of inverse
rendering, we use a physically-based renderer and a neural renderer for scene
reconstruction and material editing. Because the two renderers are differentiable,
we can compute a reconstruction loss to assist parameter estimation. To enhance
the supervision of the proposed neural renderer, we also propose an augmented
loss. In addition, we use a flash and no-flash image pair as the input. To
supervise the training, we constructed a large-scale synthetic dataset of
translucent objects, which consists of 117K scenes. Qualitative and
quantitative results on both synthetic and real-world datasets demonstrated the
effectiveness of the proposed model.
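To make the reconstruction loss concrete, here is a minimal PyTorch-style sketch of how a parameter estimator can be supervised by re-rendering through two differentiable renderers. All names (InverseRenderingModel, the estimator and renderer arguments, the L1 weighting w_neural) are hypothetical illustrations of the idea, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseRenderingModel(nn.Module):
    # Wires a parameter estimator to two differentiable renderers.
    def __init__(self, estimator, physical_renderer, neural_renderer):
        super().__init__()
        self.estimator = estimator          # CNN: image pair -> scene parameters
        self.physical = physical_renderer   # differentiable physically-based renderer
        self.neural = neural_renderer       # neural renderer, trained jointly

    def forward(self, flash_img, noflash_img):
        # Jointly estimate shape, spatially-varying reflectance, homogeneous
        # subsurface scattering, and environment illumination from the pair.
        params = self.estimator(torch.cat([flash_img, noflash_img], dim=1))
        return params, self.physical(params), self.neural(params)

def reconstruction_loss(recon_phys, recon_neur, target, w_neural=1.0):
    # Both renderers are differentiable, so both re-renderings provide
    # gradients that supervise the estimated scene parameters.
    return F.l1_loss(recon_phys, target) + w_neural * F.l1_loss(recon_neur, target)

Because gradients flow through both re-rendering paths, the estimator is supervised by the physically-based renderer and the neural renderer simultaneously; the augmented loss mentioned in the abstract would add further supervision on the neural renderer's side.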
Related papers
- Anisotropic Neural Representation Learning for High-Quality Neural Rendering [0.0]
We propose an anisotropic neural representation learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction.
Our method is flexible and can be plugged into NeRF-based frameworks.
arXiv Detail & Related papers (2023-11-30T07:29:30Z)
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects [28.62468618676557]
We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects.
With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis.
arXiv Detail & Related papers (2023-03-21T15:50:08Z)
- SupeRVol: Super-Resolution Shape and Reflectance Estimation in Inverse Volume Rendering [42.0782248214221]
SupeRVol is an inverse rendering pipeline that allows us to recover 3D shape and material parameters from a set of color images in a super-resolution manner.
It generates reconstructions that are sharper than the individual input images, making this method ideally suited for 3D modeling from low-resolution imagery.
arXiv Detail & Related papers (2022-12-09T16:30:17Z)
- GAN2X: Non-Lambertian Inverse Rendering of Image GANs [85.76426471872855]
We present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training.
Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo-paired data generated by a GAN.
Experiments demonstrate that GAN2X can accurately decompose 2D images to 3D shape, albedo, and specular properties for different object categories, and achieves the state-of-the-art performance for unsupervised single-view 3D face reconstruction.
arXiv Detail & Related papers (2022-06-18T16:58:49Z)
- Differentiable Rendering for Synthetic Aperture Radar Imagery [0.0]
We propose an approach for differentiable rendering of Synthetic Aperture Radar (SAR) imagery, which combines methods from 3D computer graphics with neural rendering.
We demonstrate the approach on the inverse graphics problem of 3D Object Reconstruction from limited SAR imagery using high-fidelity simulated SAR data.
arXiv Detail & Related papers (2022-04-04T05:27:40Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings (a minimal sketch of the smoothing idea appears after this list).
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
arXiv Detail & Related papers (2021-10-18T08:56:23Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
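As referenced in the Differentiable Rendering with Perturbed Optimizers entry above, the core idea of randomly smoothed rendering can be sketched as follows: a non-differentiable renderer becomes differentiable in expectation once its parameters are perturbed with Gaussian noise. The function below is a hypothetical illustration under that assumption (the names, renderer interface, and mean-squared loss are assumptions for illustration, not the paper's method).

import numpy as np

def smoothed_render_grad(render_fn, theta, target, sigma=0.05, n_samples=64):
    # Monte Carlo gradient of the Gaussian-smoothed objective
    #   F(theta) = E_eps[ L(render_fn(theta + sigma * eps)) ],  eps ~ N(0, I),
    # which is differentiable in theta even when render_fn is not.
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = np.random.randn(*theta.shape)
        loss = np.mean((render_fn(theta + sigma * eps) - target) ** 2)
        # Score-function identity: grad F(theta) = E[ L(...) * eps ] / sigma.
        grad += loss * eps / sigma
    return grad / n_samples

In practice such estimators need variance reduction (antithetic samples or a control-variate baseline); the paper's contribution is relating perturbed-optimizer gradients of this kind to existing differentiable rendering formulations.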