Advances in Neural Rendering
- URL: http://arxiv.org/abs/2111.05849v1
- Date: Wed, 10 Nov 2021 18:57:01 GMT
- Title: Advances in Neural Rendering
- Authors: Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul Srinivasan, Edgar
Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo
Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt, Matthias
Niessner, Jonathan T. Barron, Gordon Wetzstein, Michael Zollhoefer, Vladislav
Golyanik
- Abstract summary: This report focuses on methods that combine classical rendering with learned 3D scene representations.
A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene.
In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects.
- Score: 115.05042097988768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing photo-realistic images and videos is at the heart of computer
graphics and has been the focus of decades of research. Traditionally,
synthetic images of a scene are generated using rendering algorithms such as
rasterization or ray tracing, which take specifically defined representations
of geometry and material properties as input. Collectively, these inputs define
the actual scene and what is rendered, and are referred to as the scene
representation (where a scene consists of one or more objects). Example scene
representations are triangle meshes with accompanying textures (e.g., created by
an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g.,
from a CT scan), or implicit surface functions (e.g., truncated signed distance
fields). The reconstruction of such a scene representation from observations
using differentiable rendering losses is known as inverse graphics or inverse
rendering. Neural rendering is closely related, and combines ideas from
classical computer graphics and machine learning to create algorithms for
synthesizing images from real-world observations. Neural rendering is a leap
forward towards the goal of synthesizing photo-realistic image and video
content. In recent years, we have seen immense progress in this field through
hundreds of publications that show different ways to inject learnable
components into the rendering pipeline. This state-of-the-art report on
advances in neural rendering focuses on methods that combine classical
rendering principles with learned 3D scene representations, often now referred
to as neural scene representations. A key advantage of these methods is that
they are 3D-consistent by design, enabling applications such as novel viewpoint
synthesis of a captured scene. In addition to methods that handle static
scenes, we cover neural scene representations for modeling non-rigidly
deforming objects...
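
To make the idea of inverse rendering with a differentiable rendering loss concrete, the following toy sketch (not taken from the report) uses a differentiable "renderer" that draws a soft disc parameterized by center, radius, and gray value, and recovers those parameters from a single observed image by gradient descent on a photometric loss. The renderer, parameterization, and hyperparameters are illustrative assumptions.

```python
# Toy inverse rendering: fit a tiny scene representation (a soft disc) to an
# observed image by differentiating through the renderer. Illustrative only.
import jax
import jax.numpy as jnp

H = W = 64
ys, xs = jnp.meshgrid(jnp.linspace(0, 1, H), jnp.linspace(0, 1, W), indexing="ij")

def render(params):
    """Differentiable renderer: a soft disc with center (cx, cy), radius r, gray value."""
    cx, cy, r, gray = params
    dist = jnp.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    alpha = jax.nn.sigmoid((r - dist) * 20.0)    # soft edge keeps gradients informative
    return alpha * gray                          # composite over a black background

def photometric_loss(params, target):
    return jnp.mean((render(params) - target) ** 2)

target = render(jnp.array([0.6, 0.4, 0.2, 0.8]))    # synthetic observation
params = jnp.array([0.5, 0.5, 0.3, 0.5])            # initial guess of the scene

grad_fn = jax.jit(jax.grad(photometric_loss))
for _ in range(1000):
    params = params - 0.2 * grad_fn(params, target)  # plain gradient descent

print("estimated parameters:", params)
```

Real systems replace the hand-written disc with richer scene representations (meshes, point clouds, volumes, implicit fields, or the learned neural representations surveyed here) and the toy renderer with differentiable rasterization or ray marching, but the optimization loop has the same shape.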
Related papers
- Self-supervised Learning of Neural Implicit Feature Fields for Camera Pose Refinement [32.335953514942474]
This paper proposes to jointly learn the scene representation along with a 3D dense feature field and a 2D feature extractor.
We learn the underlying geometry of the scene with an implicit field through volumetric rendering and design our feature field to leverage intermediate geometric information encoded in the implicit field.
Visual localization is then achieved by aligning the image-based features and the rendered volumetric features.
arXiv Detail & Related papers (2024-06-12T17:51:53Z)
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- Blocks2World: Controlling Realistic Scenes with Editable Primitives [5.541644538483947]
We present Blocks2World, a novel method for 3D scene rendering and editing.
Our technique begins by extracting 3D parallelepipeds from various objects in a given scene using convex decomposition.
The next stage involves training a conditioned model that learns to generate images from the 2D-rendered convex primitives.
arXiv Detail & Related papers (2023-07-07T21:38:50Z)
- Neural Groundplans: Persistent Neural Scene Representations from a Single Image [90.04272671464238]
We present a method to map 2D image observations of a scene to a persistent 3D scene representation.
We propose conditional neural groundplans as persistent and memory-efficient scene representations.
arXiv Detail & Related papers (2022-07-22T17:41:24Z)
- GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields [45.21191307444531]
Deep generative models allow for photorealistic image synthesis at high resolutions.
But for many applications, this is not enough: content creation also needs to be controllable.
Our key hypothesis is that incorporating a compositional 3D scene representation into the generative model leads to more controllable image synthesis.
arXiv Detail & Related papers (2020-11-24T14:14:15Z)
- Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
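
For the radiance-field approaches above (NeRF and Neural Reflectance Fields in particular), the differentiable core is a volume rendering quadrature: densities and colors sampled along a camera ray are alpha-composited into a pixel color, so a photometric loss can be backpropagated to the scene representation. A minimal, self-contained sketch of that compositing step is below; the function and variable names are illustrative, not the authors' code.

```python
# Alpha compositing of samples along a ray, in the style of NeRF's
# volume rendering quadrature. Illustrative sketch, not an official implementation.
import jax.numpy as jnp

def composite_ray(sigmas, colors, t_vals):
    """Composite N samples along one ray.

    sigmas: (N,)   volume density at each sample point
    colors: (N, 3) RGB radiance at each sample point
    t_vals: (N,)   increasing sample distances along the ray
    """
    deltas = jnp.diff(t_vals, append=1e10)            # spacing between samples
    alphas = 1.0 - jnp.exp(-sigmas * deltas)          # per-segment opacity
    # Transmittance: how much light survives to reach each sample.
    trans = jnp.concatenate([jnp.ones(1), jnp.cumprod(1.0 - alphas[:-1] + 1e-10)])
    weights = trans * alphas
    rgb = jnp.sum(weights[:, None] * colors, axis=0)  # expected ray color
    return rgb, weights

# Toy usage: a ray crossing a dense red slab between t = 1.5 and t = 2.0.
t_vals = jnp.linspace(0.0, 4.0, 64)
sigmas = jnp.where((t_vals > 1.5) & (t_vals < 2.0), 5.0, 0.0)
colors = jnp.tile(jnp.array([1.0, 0.0, 0.0]), (64, 1))
rgb, _ = composite_ray(sigmas, colors, t_vals)
print(rgb)  # mostly red, since the slab is nearly opaque
```

Because every operation here is differentiable, gradients of an image reconstruction loss flow back through the weights to the densities and colors, which is what allows these representations to be optimized from posed photographs alone.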
This list is automatically generated from the titles and abstracts of the papers on this site.