NeROIC: Neural Rendering of Objects from Online Image Collections
- URL: http://arxiv.org/abs/2201.02533v1
- Date: Fri, 7 Jan 2022 16:45:15 GMT
- Title: NeROIC: Neural Rendering of Objects from Online Image Collections
- Authors: Zhengfei Kuang, Kyle Olszewski, Menglei Chai, Zeng Huang, Panos
Achlioptas, Sergey Tulyakov
- Abstract summary: We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects.
This enables various object-centric rendering applications such as novel-view synthesis, relighting, and harmonized background composition.
- Score: 42.02832046768925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel method to acquire object representations from online image
collections, capturing high-quality geometry and material properties of
arbitrary objects from photographs with varying cameras, illumination, and
backgrounds. This enables various object-centric rendering applications such as
novel-view synthesis, relighting, and harmonized background composition from
challenging in-the-wild input. Using a multi-stage approach extending neural
radiance fields, we first infer the surface geometry and refine the coarsely
estimated initial camera parameters, while leveraging coarse foreground object
masks to improve the training efficiency and geometry quality. We also
introduce a robust normal estimation technique which eliminates the effect of
geometric noise while retaining crucial details. Lastly, we extract surface
material properties and ambient illumination, represented in spherical
harmonics with extensions that handle transient elements, e.g. sharp shadows.
The union of these components results in a highly modular and efficient object
acquisition framework. Extensive evaluations and comparisons demonstrate the
advantages of our approach in capturing high-quality geometry and appearance
properties useful for rendering applications.
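To make the lighting stage more concrete, below is a minimal, illustrative sketch of how ambient illumination stored as second-order spherical harmonics (SH) can shade a diffuse surface point, following the standard Ramamoorthi-Hanrahan irradiance formulation. This is not the authors' implementation: the function names, the purely diffuse shading, and the placeholder inputs (`sh_coeffs`, `albedo`) are assumptions for illustration, and the paper's full model additionally recovers specular material properties and handles transient effects such as sharp shadows.

```python
# Illustrative sketch only (not the paper's code): shading one surface point
# with ambient illumination represented by 9 second-order SH coefficients
# per color channel, via the Ramamoorthi-Hanrahan irradiance formulation.
import numpy as np

def sh_basis(n):
    """Real SH basis (bands 0-2) evaluated at a unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,                        # Y_00
        0.488603 * y,                    # Y_1,-1
        0.488603 * z,                    # Y_1,0
        0.488603 * x,                    # Y_1,1
        1.092548 * x * y,                # Y_2,-2
        1.092548 * y * z,                # Y_2,-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2,0
        1.092548 * x * z,                # Y_2,1
        0.546274 * (x * x - y * y),      # Y_2,2
    ])

def shade_diffuse(normal, albedo, sh_coeffs):
    """Diffuse radiance at one point.

    normal:    (3,) unit surface normal
    albedo:    (3,) RGB diffuse albedo in [0, 1] (hypothetical input)
    sh_coeffs: (9, 3) SH lighting coefficients per RGB channel (hypothetical)
    """
    # Convolution of the environment with the clamped-cosine kernel,
    # expressed as a per-band scaling of the SH coefficients.
    band_scale = np.array([3.141593,                      # band 0: pi
                           2.094395, 2.094395, 2.094395,  # band 1: 2*pi/3
                           0.785398, 0.785398, 0.785398,
                           0.785398, 0.785398])           # band 2: pi/4
    irradiance = sh_basis(normal) @ (band_scale[:, None] * sh_coeffs)  # (3,)
    return albedo * np.clip(irradiance, 0.0, None) / np.pi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = np.array([0.0, 0.0, 1.0])           # surface facing +z
    rho = np.array([0.7, 0.5, 0.3])         # example albedo
    L = rng.normal(scale=0.2, size=(9, 3))  # placeholder SH lighting
    L[0] += 1.0                             # positive ambient (DC) term
    print(shade_diffuse(n, rho, L))
```

Representing the environment with only nine SH coefficients per channel keeps the lighting model compact and makes relighting a simple coefficient swap, which fits the modularity the abstract emphasizes.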
Related papers
- Triplet: Triangle Patchlet for Mesh-Based Inverse Rendering and Scene Parameters Approximation [0.0]
Inverse rendering seeks to derive the physical properties of a scene, including light, geometry, textures, and materials.
Meshes, a traditional representation adopted by many simulation pipelines, still see limited use in radiance-field-based inverse rendering.
This paper introduces a novel framework called Triangle Patchlet (abbr. Triplet), a mesh-based representation, to comprehensively approximate these parameters.
arXiv Detail & Related papers (2024-10-16T09:59:11Z)
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric Stereo Network [37.759675702107586]
Predicting accurate normal maps of objects from two-dimensional images in regions of complex structure and spatial material variations is challenging.
We propose a method of calibrating feature information from different resolution stages and scales of the image.
This approach preserves more physical information, such as texture and geometry of the object in complex regions.
arXiv Detail & Related papers (2024-04-11T14:05:37Z)
- Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting [60.75436852495868]
We present PhySG, an inverse rendering pipeline that reconstructs geometry, materials, and illumination from scratch from RGB input images.
We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination.
arXiv Detail & Related papers (2021-04-01T17:59:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.