Object-based Illumination Estimation with Rendering-aware Neural
Networks
- URL: http://arxiv.org/abs/2008.02514v1
- Date: Thu, 6 Aug 2020 08:23:19 GMT
- Title: Object-based Illumination Estimation with Rendering-aware Neural
Networks
- Authors: Xin Wei, Guojun Chen, Yue Dong, Stephen Lin and Xin Tong
- Score: 56.01734918693844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a scheme for fast environment light estimation from the RGBD
appearance of individual objects and their local image areas. Conventional
inverse rendering is too computationally demanding for real-time applications,
and the performance of purely learning-based techniques may be limited by the
meager input data available from individual objects. To address these issues,
we propose an approach that takes advantage of physical principles from inverse
rendering to constrain the solution, while also utilizing neural networks to
expedite the more computationally expensive portions of its processing, to
increase robustness to noisy input data as well as to improve temporal and
spatial stability. This results in a rendering-aware system that estimates the
local illumination distribution at an object with high accuracy and in real
time. With the estimated lighting, virtual objects can be rendered in AR
scenarios with shading that is consistent with the real scene, leading to
improved realism.
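The system itself is not reproduced here, but the physical principle it builds on, that diffuse shading is linear in a low-frequency environment light, can be sketched as a standard second-order spherical-harmonics least-squares fit (generic NumPy with synthetic data; illustrative, not the authors' code):

```python
import numpy as np

def sh_basis(normals):
    """Second-order real spherical-harmonics basis (9 terms) for unit normals.
    Standard irradiance parameterization (Ramamoorthi & Hanrahan 2001)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                  # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,    # l = 1
        1.092548 * x * y, 1.092548 * y * z,          # l = 2
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z,
        0.546274 * (x ** 2 - y ** 2),
    ], axis=1)

# Stand-ins for the RGBD input: per-pixel object normals and observed shading.
rng = np.random.default_rng(0)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
true_light = rng.uniform(0.0, 1.0, size=9)             # unknown environment light
shading = sh_basis(normals) @ true_light + 0.01 * rng.normal(size=500)

# The rendering-aware constraint: shading ≈ B @ light, solvable in closed form.
est_light, *_ = np.linalg.lstsq(sh_basis(normals), shading, rcond=None)
print(np.round(est_light - true_light, 3))             # residuals near zero
```

In the paper, neural networks take over the noise-sensitive and expensive parts of this inverse step; the closed-form fit above only shows why an object's sparse appearance still constrains the surrounding light.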
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
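A minimal geometric sketch of this reflection casting, in generic NumPy rather than the paper's implementation: mirror the view direction at each surface point and generate samples along the reflected ray for querying the field:

```python
import numpy as np

def reflect(view_dirs, normals):
    """Mirror view directions about surface normals: r = d - 2 (d . n) n."""
    dots = np.sum(view_dirs * normals, axis=-1, keepdims=True)
    return view_dirs - 2.0 * dots * normals

def reflection_samples(points, view_dirs, normals, t_vals):
    """Sample points along reflected rays; a NeRF-style field would then be
    queried at these locations to accumulate reflected radiance or features."""
    r = reflect(view_dirs, normals)
    r /= np.linalg.norm(r, axis=-1, keepdims=True)
    # (num_points, 1, 3) + (1, num_samples, 1) * (num_points, 1, 3)
    return points[:, None, :] + t_vals[None, :, None] * r[:, None, :]
```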
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
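The multi-view geometry constraint can be illustrated with a standard reprojection step (a generic formulation, not DNS SLAM's exact loss): pixels from one view are warped into another using depth, intrinsics, and relative pose, and the photometric difference at the warped locations is penalized:

```python
import numpy as np

def warp_to_other_view(uv, depth, K, T_rel):
    """Reproject pixels (u, v) with depths from view A into view B.
    K: 3x3 intrinsics; T_rel: 4x4 pose of view A in view B's frame."""
    n = uv.shape[0]
    ones = np.ones((1, n))
    rays = np.linalg.inv(K) @ np.vstack([uv.T, ones])   # back-project to rays
    pts_a = rays * depth                                # 3D points in view A
    pts_b = (T_rel @ np.vstack([pts_a, ones]))[:3]      # transform into view B
    proj = K @ pts_b                                    # perspective projection
    return (proj[:2] / proj[2]).T                       # pixel coords in view B
```

A photometric term then compares image A at `uv` against image B sampled at the returned coordinates.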
arXiv Detail & Related papers (2023-11-30T21:34:44Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - LitAR: Visually Coherent Lighting for Mobile Augmented Reality [24.466149552743516]
We present the design and implementation of a lighting reconstruction framework called LitAR.
LitAR addresses several challenges of supporting lighting information for mobile AR.
arXiv Detail & Related papers (2023-01-15T20:47:38Z) - FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks [33.489890950757975]
FoVolNet is a method to significantly increase the performance of volume data visualization.
We develop a cost-effective foveated rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full-frame using a deep neural network.
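A toy version of the sparse, focus-centered sampling (an illustrative falloff, not FoVolNet's trained sampling pattern) might look like:

```python
import numpy as np

def foveated_mask(h, w, focus, fovea_radius=40.0, falloff=120.0, seed=0):
    """Binary mask: render every pixel near the focal point, few elsewhere."""
    ys, xs = np.mgrid[:h, :w]
    dist = np.hypot(ys - focus[0], xs - focus[1])
    prob = np.where(dist < fovea_radius, 1.0,
                    np.exp(-((dist - fovea_radius) / falloff) ** 2))
    return np.random.default_rng(seed).random((h, w)) < prob

mask = foveated_mask(480, 640, focus=(240, 320))
print(f"rendered {mask.mean():.0%} of pixels")  # a network reconstructs the rest
```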
arXiv Detail & Related papers (2022-09-20T19:48:56Z) - RISP: Rendering-Invariant State Predictor with Differentiable Simulation
and Rendering for Cross-Domain Parameter Estimation [110.4255414234771]
Existing solutions require massive training data or lack generalizability to unknown rendering configurations.
We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem.
Our approach achieves significantly lower reconstruction errors and has better generalizability among unknown rendering configurations.
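The combination can be illustrated with a toy differentiable renderer (a Lambertian model with hand-derived gradients; the estimated state and all parameters are illustrative, not RISP's formulation). Nuisance rendering parameters are randomized every step while gradients flow to the unknown state:

```python
import numpy as np

rng = np.random.default_rng(1)
l_true = np.array([0.3, 0.5, 0.8]) / np.linalg.norm([0.3, 0.5, 0.8])
l_est = np.array([0.0, 0.0, 1.0])               # initial guess of the state

for step in range(200):
    # Domain randomization: the rendering configuration changes each step.
    albedo = rng.uniform(0.2, 1.0)
    n = rng.normal(size=(256, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    render = lambda l: albedo * np.maximum(n @ l, 0.0)   # toy Lambertian renderer
    resid = render(l_est) - render(l_true)

    # Analytic differentiable-rendering gradient of the L2 loss w.r.t. the state.
    mask = (n @ l_est > 0).astype(float)
    l_est = l_est - 0.5 * (resid * mask * albedo) @ n / len(n)

print(np.round(l_est - l_true, 3))              # converges toward the true state
```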
arXiv Detail & Related papers (2022-05-11T17:59:51Z) - Combining Local and Global Pose Estimation for Precise Tracking of
Similar Objects [2.861848675707602]
We present a multi-object 6D detection and tracking pipeline for potentially similar and non-textured objects.
A new network architecture, trained solely with synthetic images, allows simultaneous pose estimation of multiple objects.
We show how the system can be used in a real AR assistance application within the field of construction.
arXiv Detail & Related papers (2022-01-31T14:36:57Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports lighting and material effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive shading model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - HDR Environment Map Estimation for Real-Time Augmented Reality [7.6146285961466]
We present a method to estimate an HDR environment map from a narrow field-of-view LDR camera image in real-time.
This enables perceptually appealing reflections and shading on virtual objects of any material finish, from mirror to diffuse, rendered into a real physical environment using augmented reality.
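The geometric half of this problem, placing a narrow-FOV image into panorama coordinates so a network can complete the rest, can be sketched as follows (equirectangular splatting with assumed names; not the paper's code):

```python
import numpy as np

def splat_to_equirect(img, fov_deg=60.0, env_h=128, env_w=256):
    """Project a forward-facing pinhole image into an equirectangular map.
    Only the camera's FOV gets covered; estimating the remaining (HDR)
    panorama is the learning problem."""
    h, w = img.shape[:2]
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)      # focal length in px
    ys, xs = np.mgrid[:h, :w]
    dirs = np.stack([xs - w / 2, ys - h / 2, np.full((h, w), f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])         # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))    # latitude
    u = ((lon / np.pi + 1.0) * 0.5 * (env_w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1.0) * 0.5 * (env_h - 1)).astype(int)
    env = np.zeros((env_h, env_w, 3), dtype=img.dtype)
    env[v, u] = img                                      # partial coverage only
    return env
```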
arXiv Detail & Related papers (2020-11-21T01:01:53Z) - Dynamic Object Removal and Spatio-Temporal RGB-D Inpainting via
Geometry-Aware Adversarial Learning [9.150245363036165]
Dynamic objects have a significant impact on the robot's perception of the environment.
In this work, we address this problem by synthesizing plausible color, texture and geometry in regions occluded by dynamic objects.
We optimize our architecture with adversarial training to synthesize fine, realistic textures, enabling it to hallucinate color and depth structure in occluded regions online.
arXiv Detail & Related papers (2020-08-12T01:23:21Z)