Real-time Light Estimation and Neural Soft Shadows for AR Indoor
Scenarios
- URL: http://arxiv.org/abs/2308.01613v1
- Date: Thu, 3 Aug 2023 08:41:37 GMT
- Title: Real-time Light Estimation and Neural Soft Shadows for AR Indoor
Scenarios
- Authors: Alexander Sommer, Ulrich Schwanecke, Elmar Schömer
- Abstract summary: We present a pipeline for embedding virtual objects into footage of indoor scenes with a focus on real-time AR applications.
Our pipeline consists of two main components: a light estimator and a neural soft shadow texture generator.
We achieve runtimes of 9 ms for light estimation and 5 ms for neural shadows on an iPhone 11 Pro.
- Score: 70.6824004127609
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a pipeline for realistically embedding virtual objects
into footage of indoor scenes with a focus on real-time AR applications. Our
pipeline consists of two main components: a light estimator and a neural soft
shadow texture generator. Our light estimation is based on deep neural nets
and determines the
main light direction, light color, ambient color and an opacity parameter for
the shadow texture. Our neural soft shadow method encodes object-based
realistic soft shadows as light direction dependent textures in a small MLP. We
show that our pipeline can be used to integrate objects into AR scenes at a
new level of realism in real time. Our models are small enough to run on
current mobile devices. We achieve runtimes of 9 ms for light estimation and
5 ms for neural shadows on an iPhone 11 Pro.
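The paper only describes the soft-shadow generator at a high level: object-specific soft shadows are encoded as light-direction-dependent textures in a small MLP. The following is a minimal sketch of that idea, assuming a hypothetical input layout (3D light direction plus 2D texel coordinate) and hypothetical layer sizes; the actual network architecture, training, and texture resolution in the paper may differ.

```python
import numpy as np

def init_mlp(rng, sizes):
    """Build a small random MLP as a list of (W, b) layer pairs."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def shadow_intensity(params, light_dir, uv):
    """Map a light direction (3,) and a texel coordinate (2,)
    to a shadow opacity in (0, 1)."""
    x = np.concatenate([light_dir, uv])
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)       # ReLU on hidden layers
    return 1.0 / (1.0 + np.exp(-x[0]))   # sigmoid output

rng = np.random.default_rng(0)
params = init_mlp(rng, [5, 32, 32, 1])   # 5 inputs -> 1 opacity value

# Bake an 8x8 shadow texture for one estimated light direction
# (at runtime, the estimated light direction from the light
# estimator would be fed in per frame).
light = np.array([0.3, -0.9, 0.3])
light /= np.linalg.norm(light)
tex = np.array([[shadow_intensity(params, light, np.array([u, v]))
                 for u in np.linspace(0.0, 1.0, 8)]
                for v in np.linspace(0.0, 1.0, 8)])
```

Because the MLP is tiny, re-evaluating the texture whenever the estimated light direction changes stays cheap, which is consistent with the millisecond-scale runtimes reported above.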
Related papers
- Real-Time Neural Rasterization for Large Scenes [39.198327570559684]
We propose a new method for realistic real-time novel-view synthesis of large scenes.
Existing neural rendering methods generate realistic results, but primarily work for small scale scenes.
Our work is the first to enable real-time rendering of large real-world scenes.
arXiv Detail & Related papers (2023-11-09T18:59:10Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Light Sampling Field and BRDF Representation for Physically-based Neural Rendering [4.440848173589799]
Physically-based rendering (PBR) is key for immersive rendering effects used widely in the industry to showcase detailed realistic scenes from computer graphics assets.
This paper proposes a novel lighting representation that models direct and indirect light locally through a light sampling strategy in a learned light sampling field.
We then implement our proposed representations with an end-to-end physically-based neural face skin shader, which takes a standard face asset and an HDRI for illumination as inputs and generates a photo-realistic rendering as output.
arXiv Detail & Related papers (2023-04-11T19:54:50Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Neural Assets: Volumetric Object Capture and Rendering for Interactive Environments [8.258451067861932]
We propose an approach for capturing real-world objects in everyday environments faithfully and quickly.
We use a novel neural representation to reconstruct effects, such as translucent object parts, and preserve object appearance.
This leads to a seamless integration of the proposed neural assets with existing mesh environments and objects.
arXiv Detail & Related papers (2022-12-12T18:55:03Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Xihe: A 3D Vision-based Lighting Estimation Framework for Mobile Augmented Reality [9.129335351176904]
We design an edge-assisted framework called Xihe to provide mobile AR applications the ability to obtain accurate omnidirectional lighting estimation in real time.
We develop a tailored GPU pipeline for on-device point cloud processing and use an encoding technique that reduces network transmitted bytes.
Our results show that Xihe runs in as little as 20.67 ms per lighting estimation and achieves 9.4% better estimation accuracy than a state-of-the-art neural network.
arXiv Detail & Related papers (2021-05-30T13:48:29Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.