State of the Art on Neural Rendering
- URL: http://arxiv.org/abs/2004.03805v1
- Date: Wed, 8 Apr 2020 04:36:31 GMT
- Title: State of the Art on Neural Rendering
- Authors: Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen
Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason
Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein,
Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B
Goldman, Michael Zollhöfer
- Abstract summary: We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs.
This report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient rendering of photo-realistic virtual worlds is a long-standing
effort of computer graphics. Modern graphics techniques have succeeded in
synthesizing photo-realistic images from hand-crafted scene representations.
However, the automatic generation of shape, materials, lighting, and other
aspects of scenes remains a challenging problem that, if solved, would make
photo-realistic computer graphics more widely accessible. Concurrently,
progress in computer vision and machine learning has given rise to a new
approach to image synthesis and editing, namely deep generative models. Neural
rendering is a new and rapidly emerging field that combines generative machine
learning techniques with physical knowledge from computer graphics, e.g., by
the integration of differentiable rendering into network training. With a
plethora of applications in computer graphics and vision, neural rendering is
poised to become a new area in the graphics community, yet no survey of this
emerging field exists. This state-of-the-art report summarizes the recent
trends and applications of neural rendering. We focus on approaches that
combine classic computer graphics techniques with deep generative models to
obtain controllable and photo-realistic outputs. Starting with an overview of
the underlying computer graphics and machine learning concepts, we discuss
critical aspects of neural rendering approaches. This state-of-the-art report
is focused on the many important use cases for the described algorithms such as
novel view synthesis, semantic photo manipulation, facial and body reenactment,
relighting, free-viewpoint video, and the creation of photo-realistic avatars
for virtual and augmented reality telepresence. Finally, we conclude with a
discussion of the social implications of such technology and investigate open
research problems.
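The abstract highlights the integration of differentiable rendering into network training. One concrete, widely used instance is emission-absorption volume rendering, the compositing rule behind NeRF-style methods: it is differentiable end-to-end because it is built only from exponentials, products, and sums. Below is a minimal NumPy sketch; the function name and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Emission-absorption volume rendering along one ray.

    sigmas: (N,) non-negative densities at N samples along the ray
    colors: (N, 3) RGB color at each sample
    deltas: (N,) distances between consecutive samples
    Returns the composited RGB value (3,) for the ray.
    """
    # Per-sample opacity from density and step size
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability light reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    # Each sample contributes proportionally to transmittance * opacity
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

Because every operation above has well-defined gradients with respect to `sigmas` and `colors`, a network that predicts those quantities can be trained directly from image-space losses, which is the sense in which the rendering step is "integrated into network training."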
Related papers
- Recent Trends in 3D Reconstruction of General Non-Rigid Scenes [104.07781871008186]
Reconstructing models of the real world, including 3D geometry, appearance, and motion of real scenes, is essential for computer graphics and computer vision.
It enables the synthesis of photorealistic novel views, useful for the movie industry and AR/VR applications.
This state-of-the-art report (STAR) offers the reader a comprehensive summary of state-of-the-art techniques with monocular and multi-view inputs.
arXiv Detail & Related papers (2024-03-22T09:46:11Z)
- Neural Rendering and Its Hardware Acceleration: A Review [39.6466512858213]
Neural rendering is a new image and video generation method based on deep learning.
In this paper, we review the technical connotation, main challenges, and research progress of neural rendering.
arXiv Detail & Related papers (2024-01-06T07:57:11Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers [13.894134334543363]
We propose a novel framework based on deep learning to build a real-time inverse graphics encoder.
Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer.
Our framework enables novel applications where consumers can virtually try-on a novel unknown product from an inspirational reference image.
arXiv Detail & Related papers (2022-05-12T18:44:00Z)
- Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z)
- Advances in Neural Rendering [115.05042097988768]
This report focuses on methods that combine classical rendering with learned 3D scene representations.
A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel view synthesis of a captured scene.
In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects.
arXiv Detail & Related papers (2021-11-10T18:57:01Z)
- Fast Training of Neural Lumigraph Representations using Meta Learning [109.92233234681319]
We develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time.
Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection.
We show that MetaNLR++ achieves similar or better photorealistic novel view synthesis results in a fraction of the time that competing methods require.
arXiv Detail & Related papers (2021-06-28T18:55:50Z)
- High Resolution Zero-Shot Domain Adaptation of Synthetically Rendered Face Images [10.03187850132035]
We propose an algorithm that matches a non-photorealistic, synthetically generated image to a latent vector of a pretrained StyleGAN2 model.
In contrast to most previous work, we require no synthetic training data.
This is the first algorithm of its kind to work at a resolution of 1K and represents a significant leap forward in visual realism.
arXiv Detail & Related papers (2020-06-26T15:00:04Z)
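Several of the papers above (e.g., Neural Fields in Visual Computing and Beyond) build on coordinate-based networks: MLPs that map a low-dimensional coordinate to a signal value. These networks are typically fed a sinusoidal positional encoding so they can represent high-frequency detail. A minimal sketch of that encoding follows; the function name and default octave count are illustrative choices, not from any of the papers.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Map a coordinate vector to sin/cos features at octave frequencies,
    the standard input encoding for coordinate-based (neural field) MLPs.

    x: (D,) input coordinate
    Returns a (2 * num_freqs * D,) feature vector.
    """
    # Frequencies pi, 2*pi, 4*pi, ... (one octave per entry)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    # Every frequency applied to every coordinate component
    scaled = np.outer(freqs, x).ravel()
    return np.concatenate([np.sin(scaled), np.cos(scaled)])
```

Feeding these features (rather than raw coordinates) to an MLP lets it fit fine detail that a plain coordinate input would smooth over, which is why the encoding appears throughout the neural-fields literature.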
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.