DarkGS: Learning Neural Illumination and 3D Gaussians Relighting for Robotic Exploration in the Dark
- URL: http://arxiv.org/abs/2403.10814v2
- Date: Mon, 2 Sep 2024 00:54:47 GMT
- Title: DarkGS: Learning Neural Illumination and 3D Gaussians Relighting for Robotic Exploration in the Dark
- Authors: Tianyi Zhang, Kaining Huang, Weiming Zhi, Matthew Johnson-Roberson
- Abstract summary: We tackle the challenge of constructing a photorealistic scene representation under poorly illuminated conditions and with a moving light source.
We introduce an innovative framework that uses a data-driven approach, Neural Light Simulators (NeLiS) to model and calibrate the camera-light system.
We show the applicability and robustness of our proposed simulator and system in a variety of real-world environments.
- Score: 14.47850251126128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans have the remarkable ability to construct consistent mental models of an environment, even under limited or varying levels of illumination. We wish to endow robots with this same capability. In this paper, we tackle the challenge of constructing a photorealistic scene representation under poorly illuminated conditions and with a moving light source. We approach the task of modeling illumination as a learning problem, and utilize the developed illumination model to aid in scene reconstruction. We introduce an innovative framework that uses a data-driven approach, Neural Light Simulators (NeLiS), to model and calibrate the camera-light system. Furthermore, we present DarkGS, a method that applies NeLiS to create a relightable 3D Gaussian scene model capable of real-time, photorealistic rendering from novel viewpoints. We show the applicability and robustness of our proposed simulator and system in a variety of real-world environments.
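The abstract's core operation, attenuating each Gaussian's colour with a camera-mounted light model, can be illustrated with a minimal sketch. The version below hard-codes an inverse-square falloff and a cosine term, whereas DarkGS learns the light's falloff and angular pattern with NeLiS; the function and argument names (e.g. relight_gaussians, light_pos) are illustrative and not taken from the paper's code.

```python
import numpy as np

def relight_gaussians(centers, normals, albedo, light_pos, intensity=5.0):
    """Toy relighting of 3D Gaussian primitives under a single point light.

    centers   : (N, 3) Gaussian means in world space
    normals   : (N, 3) unit normals associated with each Gaussian
    albedo    : (N, 3) base RGB colour of each Gaussian in [0, 1]
    light_pos : (3,)   world-space position of the (moving) light
    """
    to_light = light_pos[None, :] - centers
    dist = np.linalg.norm(to_light, axis=1, keepdims=True)
    direction = to_light / np.clip(dist, 1e-6, None)

    falloff = intensity / np.clip(dist ** 2, 1e-6, None)   # inverse-square distance falloff
    cosine = np.clip(np.sum(normals * direction, axis=1, keepdims=True), 0.0, 1.0)

    return np.clip(albedo * falloff * cosine, 0.0, 1.0)    # relit RGB per Gaussian

# Usage: relight 1000 random Gaussians with the light co-located with the camera at the origin.
rng = np.random.default_rng(0)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
relit = relight_gaussians(
    centers=rng.uniform(-1.0, 1.0, (1000, 3)),
    normals=normals,
    albedo=rng.uniform(0.0, 1.0, (1000, 3)),
    light_pos=np.zeros(3),
)
```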
Related papers
- LumiGauss: High-Fidelity Outdoor Relighting with 2D Gaussian Splatting [15.11759492990967]
We introduce LumiGauss, a technique that tackles 3D reconstruction of scenes and environmental lighting through 2D Gaussian Splatting.
Our approach yields high-quality scene reconstructions and enables realistic lighting synthesis under novel environment maps.
arXiv Detail & Related papers (2024-08-06T23:41:57Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
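For context on the GS-Phong entry above: the ambient/diffuse/specular split it borrows from Blinn-Phong is the classical shading equation sketched below, shown for a single point under a point light with unit-length direction vectors. The per-Gaussian representation and bilevel meta-learning of the paper are not reproduced, and all names are illustrative.

```python
import numpy as np

def blinn_phong(normal, light_dir, view_dir, albedo,
                k_ambient=0.1, k_diffuse=0.7, k_specular=0.2, shininess=32.0):
    """Ambient + diffuse + specular shading at one point; all direction vectors unit length."""
    half_vec = light_dir + view_dir
    half_vec = half_vec / np.linalg.norm(half_vec)

    ambient = k_ambient * albedo
    diffuse = k_diffuse * albedo * max(float(np.dot(normal, light_dir)), 0.0)
    specular = k_specular * max(float(np.dot(normal, half_vec)), 0.0) ** shininess
    return ambient + diffuse + specular   # the three terms can be supervised or optimised separately

# Usage: shade a red surface point lit head-on and viewed slightly off-axis.
rgb = blinn_phong(
    normal=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.0, 0.0, 1.0]),
    view_dir=np.array([0.0, 0.6, 0.8]),
    albedo=np.array([0.8, 0.1, 0.1]),
)
```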
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- LightSim: Neural Lighting Simulation for Urban Scenes [42.84064522536041]
Different outdoor illumination conditions drastically alter the appearance of urban scenes, and they can harm the performance of image-based robot perception systems.
Camera simulation provides a cost-effective solution to create a large dataset of images captured under different lighting conditions.
We propose LightSim, a neural lighting camera simulation system that enables diverse, realistic, and controllable data generation.
arXiv Detail & Related papers (2023-12-11T18:59:13Z)
- FaceLit: Neural 3D Relightable Faces [28.0806453092185]
FaceLit is capable of generating a 3D face that can be rendered at various user-defined lighting conditions and views.
We show state-of-the-art photorealism among 3D-aware GANs on the FFHQ dataset, achieving an FID score of 3.5.
arXiv Detail & Related papers (2023-03-27T17:59:10Z)
- RANA: Relightable Articulated Neural Avatars [83.60081895984634]
We propose RANA, a relightable and articulated neural avatar for the photorealistic synthesis of humans.
We present a novel framework to model humans while disentangling their geometry, texture, and lighting environment from monocular RGB videos.
arXiv Detail & Related papers (2022-12-06T18:59:31Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Texture Generation Using Graph Generative Adversarial Network And Differentiable Rendering [0.6439285904756329]
Novel texture synthesis for existing 3D mesh models is an important step towards photorealistic asset generation for simulators.
Existing methods inherently work in the 2D image space which is the projection of the 3D space from a given camera perspective.
We present a graph generative adversarial network (GGAN) that generates textures which can be directly integrated into given 3D mesh models with tools such as Blender and Unreal Engine.
arXiv Detail & Related papers (2022-06-17T04:56:03Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
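The indoor inverse-rendering entry above factors an image into albedo, normals, depth, and spatially-varying lighting; a toy Lambertian forward model makes that factorization concrete. The per-pixel directional light used here is a deliberate simplification of the paper's 3D lighting volume, and the names are illustrative.

```python
import numpy as np

def lambertian_forward(albedo, normals, light_dirs, light_colors):
    """Compose an image from per-pixel albedo, normals, and spatially-varying lighting.

    albedo       : (H, W, 3) reflectance in [0, 1]
    normals      : (H, W, 3) unit surface normals
    light_dirs   : (H, W, 3) dominant light direction per pixel (unit vectors)
    light_colors : (H, W, 3) light RGB intensity per pixel
    """
    cosine = np.clip(np.sum(normals * light_dirs, axis=-1, keepdims=True), 0.0, None)
    shading = light_colors * cosine          # spatially-varying shading
    return albedo * shading                  # the image the inverse problem must explain
```

Inverse rendering runs this map backwards: given only the photograph on the left-hand side, it estimates the factors on the right.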
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
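The Neural Reflectance Fields entry renders by differentiable ray marching through a field of density and reflectance. The sketch below is a single-scattering integrator for one ray, assuming the density, shaded reflectance, and light-to-sample transmittance have already been queried from the field; the network, sampling strategy, and full shading model of the paper are omitted, and the function name is illustrative.

```python
import numpy as np

def march_ray(sigmas, reflectances, light_vis, deltas):
    """Single-scattering volume rendering along one camera ray.

    sigmas       : (S,)   volume density at S samples along the ray
    reflectances : (S, 3) reflectance at each sample, already shaded for the current light
    light_vis    : (S,)   transmittance from the light source to each sample, in [0, 1]
    deltas       : (S,)   spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                            # per-segment opacity
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))     # camera-side transmittance
    weights = trans * alphas                                           # contribution of each sample
    return np.sum(weights[:, None] * light_vis[:, None] * reflectances, axis=0)
```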