Fast Training of Neural Lumigraph Representations using Meta Learning
- URL: http://arxiv.org/abs/2106.14942v1
- Date: Mon, 28 Jun 2021 18:55:50 GMT
- Title: Fast Training of Neural Lumigraph Representations using Meta Learning
- Authors: Alexander W. Bergman and Petr Kellnhofer and Gordon Wetzstein
- Abstract summary: We develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time.
Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection.
We show that MetaNLR++ achieves similar or better photorealistic novel view synthesis results in a fraction of the time that competing methods require.
- Score: 109.92233234681319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel view synthesis is a long-standing problem in machine learning and
computer vision. Significant progress has recently been made in developing
neural scene representations and rendering techniques that synthesize
photorealistic images from arbitrary views. These representations, however, are
extremely slow to train and often also slow to render. Inspired by neural
variants of image-based rendering, we develop a new neural rendering approach
with the goal of quickly learning a high-quality representation which can also
be rendered in real-time. Our approach, MetaNLR++, accomplishes this by using a
unique combination of a neural shape representation and 2D CNN-based image
feature extraction, aggregation, and re-projection. To push representation
convergence times down to minutes, we leverage meta learning to learn neural
shape and image feature priors which accelerate training. The optimized shape
and image features can then be extracted using traditional graphics techniques
and rendered in real time. We show that MetaNLR++ achieves similar or better
novel view synthesis results in a fraction of the time that competing methods
require.
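As a rough illustration of how meta-learned priors can cut per-scene convergence time, the sketch below runs a Reptile-style outer loop over training scenes to learn an initialization for a small coordinate MLP. The architecture, the point-supervision stub, and all hyperparameters are illustrative assumptions, not the MetaNLR++ implementation.

```python
# Hedged sketch: Reptile-style meta-initialization of a coordinate MLP,
# loosely analogous to the learned shape/feature priors described above.
# Architecture, supervision stub, and hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn

def make_shape_mlp(hidden=128):
    # Small coordinate network mapping 3D points to a signed-distance value.
    return nn.Sequential(
        nn.Linear(3, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 1),
    )

def inner_loop(meta_model, scene_batch, steps=32, lr=1e-3):
    # Scene-specific fine-tuning: adapt a copy of the meta-initialized network.
    model = copy.deepcopy(meta_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        pts, target_sdf = scene_batch()  # sampled points and supervision (stub)
        loss = ((model(pts) - target_sdf) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def meta_train(scenes, outer_steps=1000, outer_lr=0.1):
    # Reptile outer loop: nudge the shared initialization toward each
    # scene's adapted weights so new scenes start close to a solution.
    meta_model = make_shape_mlp()
    for step in range(outer_steps):
        scene_batch = scenes[step % len(scenes)]
        adapted = inner_loop(meta_model, scene_batch)
        with torch.no_grad():
            for p_meta, p_task in zip(meta_model.parameters(), adapted.parameters()):
                p_meta += outer_lr * (p_task - p_meta)
    return meta_model  # fast-converging initialization for unseen scenes
```

The returned initialization would then be fine-tuned per scene with only a few optimization steps, which is the mechanism by which meta learning shortens training in this style of approach.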
Related papers
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experiments achieve state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Volume Feature Rendering for Fast Neural Radiance Field Reconstruction [11.05302598034426]
Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives.
In NeRF's rendering pipeline, neural networks are used either to represent the scene directly or to transform the queried learnable feature vector of a point into the expected color or density.
We propose to render the queried feature vectors of a ray first and then transform the rendered feature vector into the final pixel color with a neural network, as sketched below.
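A minimal sketch of this feature-then-decode idea, assuming a standard alpha-compositing weighting along each ray; the tensor shapes and the small decoder are illustrative assumptions rather than the paper's exact pipeline.

```python
# Hedged sketch of feature-then-decode rendering: composite per-sample feature
# vectors along each ray, then run one small MLP per ray instead of per sample.
# Tensor shapes and the decoder architecture are assumptions for illustration.
import torch
import torch.nn as nn

def composite_features(features, sigmas, deltas):
    # features: (rays, samples, feat_dim) learnable feature vectors per sample
    # sigmas:   (rays, samples) densities; deltas: (rays, samples) segment lengths
    alphas = 1.0 - torch.exp(-sigmas * deltas)                 # opacity per sample
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = (alphas * trans).unsqueeze(-1)                   # (rays, samples, 1)
    return (weights * features).sum(dim=1)                     # (rays, feat_dim)

class FeatureDecoder(nn.Module):
    # One decoder call per ray maps the composited feature to an RGB color.
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, ray_features):
        return self.net(ray_features)

# Usage: rgb = FeatureDecoder()(composite_features(features, sigmas, deltas))
```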
arXiv Detail & Related papers (2023-05-29T06:58:27Z)
- Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that combines an image-based representation with a neural 3D representation to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z)
- Temporal Interpolation Is All You Need for Dynamic Neural Radiance Fields [4.863916681385349]
We propose a method to train neural fields of dynamic scenes based on temporal interpolation of feature vectors.
In the neural representation, we extract feature vectors from space-time inputs via multiple neural network modules and interpolate them based on time frames.
In the grid representation, space-time features are learned via four-dimensional hash grids, which remarkably reduces training time.
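A minimal sketch of the neural-representation variant described above, assuming learnable feature vectors stored at discrete keyframe times and linearly blended at the query time; the keyframe count, feature dimension, and module structure are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of temporal feature interpolation: keep one learnable feature
# vector per keyframe time and linearly blend the two nearest keyframes for a
# normalized query time t in [0, 1]. All dimensions here are assumptions.
import torch
import torch.nn as nn

class TemporalFeatures(nn.Module):
    def __init__(self, num_keyframes=16, feat_dim=64):
        super().__init__()
        # One learnable feature vector per keyframe time.
        self.keyframes = nn.Parameter(torch.randn(num_keyframes, feat_dim) * 0.01)

    def forward(self, t):
        # t: (batch,) normalized query times in [0, 1]
        pos = t.clamp(0, 1) * (self.keyframes.shape[0] - 1)
        lo = pos.floor().long()
        hi = (lo + 1).clamp(max=self.keyframes.shape[0] - 1)
        w = (pos - lo.float()).unsqueeze(-1)
        # Linear interpolation between the two neighboring keyframe features.
        return (1 - w) * self.keyframes[lo] + w * self.keyframes[hi]

# Usage: feats = TemporalFeatures()(torch.rand(4096))
# The interpolated features would condition a radiance-field MLP at each point.
```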
arXiv Detail & Related papers (2023-02-18T12:01:23Z)
- KiloNeuS: Implicit Neural Representations with Real-Time Global Illumination [1.5749416770494706]
We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates.
KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes.
arXiv Detail & Related papers (2022-06-22T07:33:26Z)
- Neural Adaptive SCEne Tracing [24.781844909539686]
We present NAScenT, the first neural rendering method based on directly training a hybrid explicit-implicit neural representation.
NAScenT is capable of reconstructing challenging scenes, including large, sparsely populated volumes such as UAV-captured outdoor environments.
arXiv Detail & Related papers (2022-02-28T10:27:23Z)
- Neural Knitworks: Patched Neural Implicit Representation Networks [1.0470286407954037]
We propose Knitwork, an architecture for neural implicit representation learning of natural images geared toward image synthesis.
To the best of our knowledge, this is the first implementation of a coordinate-based patch representation tailored for synthesis tasks such as image inpainting, super-resolution, and denoising.
The results show that modeling natural images using patches, rather than pixels, produces results of higher fidelity.
arXiv Detail & Related papers (2021-09-29T13:10:46Z)
- Neural Rays for Occlusion-aware Image-based Rendering [108.34004858785896]
We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis (NVS) task with multi-view images as input.
NeuRay can quickly generate high-quality novel view rendering images of unseen scenes with little finetuning.
arXiv Detail & Related papers (2021-07-28T15:09:40Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation [99.64565200170897]
We propose a novel human video synthesis method by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.
We show several applications of our approach, such as human reenactment and novel view synthesis from monocular video, where we show significant improvement over the state of the art both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-01-14T18:06:27Z)