Learning Neural Light Transport
- URL: http://arxiv.org/abs/2006.03427v1
- Date: Fri, 5 Jun 2020 13:26:05 GMT
- Title: Learning Neural Light Transport
- Authors: Paul Sanzenbacher, Lars Mescheder, Andreas Geiger
- Abstract summary: We present an approach for learning light transport in static and dynamic 3D scenes using a neural network.
We find that our model is able to produce photorealistic renderings of static and dynamic scenes.
- Score: 28.9247002210861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep generative models have gained significance due to their
ability to synthesize natural-looking images with applications ranging from
virtual reality to data augmentation for training computer vision models. While
existing models are able to faithfully learn the image distribution of the
training set, they often lack controllability as they operate in 2D pixel space
and do not model the physical image formation process. In this work, we
investigate the importance of 3D reasoning for photorealistic rendering. We
present an approach for learning light transport in static and dynamic 3D
scenes using a neural network with the goal of predicting photorealistic
images. In contrast to existing approaches that operate in the 2D image domain,
our approach reasons in both 3D and 2D space, thus enabling global illumination
effects and manipulation of 3D scene geometry. Experimentally, we find that our
model is able to produce photorealistic renderings of static and dynamic
scenes. Moreover, it compares favorably to baselines which combine path tracing
and image denoising at the same computational budget.
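As a rough illustration of the 3D-plus-2D reasoning described in the abstract, the sketch below computes per-point features in 3D, splats them into image space with known camera intrinsics, and refines the result with a 2D CNN. The module names, layer sizes, and the nearest-pixel splatting (no z-buffering) are illustrative assumptions, not the authors' architecture.

```python
# A hedged sketch, not the authors' implementation: per-point 3D features,
# nearest-pixel splatting into image space, and 2D refinement.
import torch
import torch.nn as nn

class Feature3D(nn.Module):
    """Per-point MLP mapping 3D position + normal to a feature vector."""
    def __init__(self, d_in=6, d_feat=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                 nn.Linear(64, d_feat))

    def forward(self, x):          # x: (N, d_in)
        return self.mlp(x)         # (N, d_feat)

class Render2D(nn.Module):
    """2D CNN turning a splatted feature map into an RGB image."""
    def __init__(self, d_feat=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(d_feat, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, fmap):       # fmap: (1, d_feat, H, W)
        return self.net(fmap)

def splat(pts, feats, K, H, W):
    """Project points with intrinsics K and scatter features to pixels
    (nearest pixel, no z-buffering; a deliberate simplification)."""
    uvw = (K @ pts.T).T                              # (N, 3)
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().long()   # pixel coordinates
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    fmap = torch.zeros(1, feats.shape[1], H, W)
    fmap[0, :, uv[keep, 1], uv[keep, 0]] = feats[keep].T
    return fmap

# Usage sketch: random surface points in front of a pinhole camera.
pts = torch.rand(1000, 3) + torch.tensor([0.0, 0.0, 2.0])
nrm = nn.functional.normalize(torch.rand(1000, 3), dim=-1)
K = torch.tensor([[64.0, 0.0, 32.0], [0.0, 64.0, 32.0], [0.0, 0.0, 1.0]])
feats = Feature3D()(torch.cat([pts, nrm], dim=-1))
img = Render2D()(splat(pts, feats, K, 64, 64))       # (1, 3, 64, 64)
```

In the paper's setting, the 3D stage is what enables global illumination cues and manipulation of scene geometry, while the 2D stage synthesizes the final image.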
Related papers
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the learning of efficient 3D representations.
PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks for the first time, demonstrating its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- GINA-3D: Learning to Generate Implicit Neural Assets in the Wild [38.51391650845503]
GINA-3D is a generative model that uses real-world driving data from camera and LiDAR sensors to create 3D implicit neural assets of diverse vehicles and pedestrians.
We construct a large-scale object-centric dataset containing over 1.2M images of vehicles and pedestrians.
We demonstrate that it achieves state-of-the-art performance in quality and diversity for both generated images and geometries.
arXiv Detail & Related papers (2023-04-04T23:41:20Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects as a 2D self-occlusion map.
We show that our representation map not only enhances image quality but also models temporally coherent, complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
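As a rough illustration of the camera-space ray-marching idea in RiCS above, the sketch below estimates, per pixel, how strongly a foreground object shadows itself along a light direction. The fixed step count, pixel-space stepping, and depth-map scene proxy are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# A hedged sketch of camera-space ray-marching toward the light; not the
# paper's exact formulation.
import numpy as np

def self_occlusion_map(depth, light_dir, steps=32, step_size=0.05, eps=1e-3):
    """depth: (H, W) camera-space depth of the foreground object, np.inf
    on background pixels. light_dir: unit 3-vector toward the light.
    Returns an (H, W) map in [0, 1]: the fraction of march steps at which
    the sample falls behind the visible surface (i.e. is self-occluded)."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    fg = np.isfinite(depth)
    occ = np.zeros((H, W))
    for s in range(1, steps + 1):
        # March s steps from each surface point toward the light.
        x = xs + s * step_size * light_dir[0] * W
        y = ys + s * step_size * light_dir[1] * H
        z = depth + s * step_size * light_dir[2]
        xi = np.clip(np.round(x).astype(int), 0, W - 1)
        yi = np.clip(np.round(y).astype(int), 0, H - 1)
        blocked = z > depth[yi, xi] + eps   # sample lies behind the surface
        occ += np.where(fg & blocked, 1.0, 0.0)
    return occ / steps
```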
- Beyond Flatland: Pre-training with a Strong 3D Inductive Bias [5.577231009305908]
Kataoka et al. (2020) introduced a technique that eliminates the need for natural images in supervised deep learning.
We take inspiration from their work and build on this idea using 3D procedural object renders.
Similar to the previous work, our training corpus is fully synthetic and derived from simple procedural strategies.
arXiv Detail & Related papers (2021-11-30T21:30:24Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
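A minimal sketch of the shading constraint behind the model above: if rendered color is albedo times Lambertian shading under a randomly sampled light, realistic images can only be produced from accurate surface normals, which ties image realism to shape accuracy. The single directional light and ambient term are illustrative assumptions, not ShadeGAN itself.

```python
# A hedged sketch: Lambertian shading under a sampled light, tying image
# realism to normal (shape) accuracy.
import numpy as np

def shaded_render(albedo, normals, rng, ka=0.3):
    """albedo: (H, W, 3); normals: (H, W, 3) unit normals, e.g. derived
    from the gradient of a generator's implicit field. Samples one
    directional light and returns the shaded image."""
    l = rng.normal(size=3)
    l /= np.linalg.norm(l)                      # random unit light direction
    lambert = np.clip(normals @ l, 0.0, None)   # (H, W) cosine term
    return albedo * (ka + (1.0 - ka) * lambert)[..., None]

rng = np.random.default_rng(0)
albedo = np.full((4, 4, 3), 0.8)
normals = np.tile([0.0, 0.0, 1.0], (4, 4, 1))   # flat patch facing the camera
img = shaded_render(albedo, normals, rng)       # (4, 4, 3), values in [0, 1]
```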
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that models 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
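A minimal sketch of the re-rendering consistency such an inverse rendering framework can exploit: the predicted albedo, normals, and per-pixel lighting should re-compose into the input image. The diffuse-only model and the collapse of 3D spatially-varying lighting to one dominant direction per pixel are illustrative simplifications.

```python
# A hedged sketch of a diffuse re-rendering loss; the real framework models
# full 3D spatially-varying lighting rather than one direction per pixel.
import numpy as np

def rerender_loss(image, albedo, normals, light_dirs):
    """image, albedo: (H, W, 3); normals, light_dirs: (H, W, 3) unit
    vectors, with the lighting direction varying per pixel."""
    shading = np.clip(np.sum(normals * light_dirs, axis=-1), 0.0, None)
    recon = albedo * shading[..., None]         # diffuse image formation
    return float(np.mean((recon - image) ** 2))

H = W = 8
n = np.tile([0.0, 0.0, 1.0], (H, W, 1))         # normals facing the light
loss = rerender_loss(np.ones((H, W, 3)), np.ones((H, W, 3)), n, n)  # 0.0
```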
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
- Photorealism in Driving Simulations: Blending Generative Adversarial Image Synthesis with Rendering [0.0]
We introduce a hybrid generative neural graphics pipeline for improving the visual fidelity of driving simulations.
We form 2D semantic images from 3D scenery consisting of simple object models without textures.
These semantic images are then converted into photorealistic RGB images with a state-of-the-art Generative Adversarial Network (GAN) trained on real-world driving scenes.
arXiv Detail & Related papers (2020-07-31T03:25:17Z)
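A minimal sketch of the semantic-to-RGB step in the hybrid pipeline above: a generator network maps one-hot semantic label maps rendered from the untextured 3D scenery to an RGB image, pix2pix-style. The tiny generator and the class count below are illustrative stand-ins for the state-of-the-art GAN named in the abstract.

```python
# A hedged sketch of the semantic-to-RGB translation step; the tiny
# generator stands in for a trained state-of-the-art GAN.
import torch
import torch.nn as nn

n_classes = 8                                   # assumed number of labels
generator = nn.Sequential(
    nn.Conv2d(n_classes, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

# Semantic label map rendered from the untextured 3D scenery.
labels = torch.randint(0, n_classes, (1, 128, 256))
onehot = nn.functional.one_hot(labels, n_classes)   # (1, H, W, C)
onehot = onehot.permute(0, 3, 1, 2).float()         # (1, C, H, W)
rgb = generator(onehot)                             # (1, 3, H, W), in [-1, 1]
```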
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
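A minimal sketch of the joint setup described above: one encoder decomposes an image into intrinsic maps (here just albedo and shading), and one decoder re-renders an image from them, so the same decoder can serve both the synthetic-to-real rendering path and the real-image decomposition path. The two-map decomposition and layer sizes are illustrative assumptions.

```python
# A hedged sketch: a shared decoder renders images from intrinsics, whether
# they come from the encoder (real images) or from synthetic 3D models.
import torch
import torch.nn as nn

class IntrinsicAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(   # image -> albedo (3ch) + shading (1ch)
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1), nn.Sigmoid())
        self.decoder = nn.Sequential(   # intrinsics -> image
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def decompose(self, img):           # real image -> intrinsic properties
        x = self.encoder(img)
        return x[:, :3], x[:, 3:]       # albedo, shading

    def render(self, albedo, shading):  # intrinsic properties -> image
        return self.decoder(torch.cat([albedo, shading], dim=1))

model = IntrinsicAE()
real = torch.rand(1, 3, 64, 64)
albedo, shading = model.decompose(real)
recon = model.render(albedo, shading)   # autoencoding consistency path
loss = nn.functional.mse_loss(recon, real)
```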