Learning Generalizable Light Field Networks from Few Images
- URL: http://arxiv.org/abs/2207.11757v1
- Date: Sun, 24 Jul 2022 14:47:11 GMT
- Title: Learning Generalizable Light Field Networks from Few Images
- Authors: Qian Li, Franck Multon, Adnane Boukhayma
- Abstract summary: We present a new strategy for few-shot novel view synthesis based on a neural light field representation.
We show that our method achieves competitive performance on synthetic and real MVS data compared with state-of-the-art neural radiance field based methods.
- Score: 7.672380267651058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore a new strategy for few-shot novel view synthesis based on a neural
light field representation. Given a target camera pose, an implicit neural
network maps each ray to its target pixel's color directly. The network is
conditioned on local ray features generated by coarse volumetric rendering from
an explicit 3D feature volume. This volume is built from the input images using
a 3D ConvNet. Our method achieves competitive performance on synthetic and real MVS data compared with state-of-the-art neural radiance field based methods, while rendering 100 times faster.
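As a reading aid, here is a minimal sketch of the pipeline the abstract describes, assuming a PyTorch-style implementation: a 3D ConvNet refines a feature volume, a few coarse samples along each ray aggregate local ray features from that volume, and a light field MLP maps the ray plus its feature directly to a color in a single evaluation. The module names, layer sizes, sample count, and the step that builds the raw volume from the input images (replaced by a random tensor here) are illustrative placeholders, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureVolume3DCNN(nn.Module):
    """Hypothetical 3D ConvNet that refines a raw feature volume built from the input images."""
    def __init__(self, c_in=8, c_out=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv3d(c_out, c_out, 3, padding=1),
        )

    def forward(self, raw_volume):            # (B, c_in, D, H, W)
        return self.net(raw_volume)           # (B, c_out, D, H, W)

class LightFieldMLP(nn.Module):
    """Maps a ray (origin + direction) and its aggregated volume feature directly to an RGB color."""
    def __init__(self, c_feat=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6 + c_feat, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rays, ray_feat):        # rays: (N, 6), ray_feat: (N, c_feat)
        return self.net(torch.cat([rays, ray_feat], dim=-1))   # (N, 3)

def coarse_ray_features(volume, origins, dirs, n_samples=16):
    """Stand-in for coarse volumetric rendering of local ray features: trilinearly
    sample the feature volume at a few points along each ray and average them."""
    B, C, D, H, W = volume.shape
    ts = torch.linspace(0.1, 1.0, n_samples, device=origins.device)
    pts = origins[:, None, :] + ts[None, :, None] * dirs[:, None, :]   # (N, S, 3), assumed in [-1, 1]^3
    grid = pts.view(1, -1, 1, 1, 3)
    feats = F.grid_sample(volume, grid, align_corners=True)            # (1, C, N*S, 1, 1)
    feats = feats.view(C, origins.shape[0], n_samples).permute(1, 2, 0)
    return feats.mean(dim=1)                                           # (N, C)

# Toy usage; random tensors stand in for the real multi-view inputs.
vol_net, lf_net = FeatureVolume3DCNN(), LightFieldMLP()
volume = vol_net(torch.randn(1, 8, 32, 32, 32))
origins = torch.zeros(1024, 3)
dirs = F.normalize(torch.randn(1024, 3), dim=-1)
colors = lf_net(torch.cat([origins, dirs], dim=-1), coarse_ray_features(volume, origins, dirs))
print(colors.shape)   # torch.Size([1024, 3]): one color per ray, one light field evaluation each
```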
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field [69.90548694719683]
We propose an analysis-synthesis approach called Relit-NeuLF.
We first parameterize each ray in a 4D coordinate system, enabling efficient learning and inference.
Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data.
arXiv Detail & Related papers (2023-10-23T07:29:51Z)
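The Relit-NeuLF entry above parameterizes each ray in a 4D coordinate system. Below is a minimal sketch of the classic two-plane light field parameterization, one standard choice for such a 4D coordinate; whether Relit-NeuLF uses exactly this convention, and the plane depths used here, are assumptions for illustration.

```python
import numpy as np

def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with two parallel planes z = z_uv and z = z_st and
    return the 4D coordinate (u, v, s, t) of the two intersection points.
    Assumes the ray is not parallel to the planes (direction[2] != 0)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t_uv = (z_uv - o[2]) / d[2]      # ray parameter at the first plane
    t_st = (z_st - o[2]) / d[2]      # ray parameter at the second plane
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return np.array([u, v, s, t])

# Example: a camera at z = -2 looking roughly along +z.
print(two_plane_coords(origin=[0.1, -0.2, -2.0], direction=[0.05, 0.02, 1.0]))
```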
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering [18.254077751772005]
Volume rendering using neural fields has shown great promise in capturing and synthesizing novel views of 3D scenes.
This type of approach requires querying the volume network at multiple points along each viewing ray in order to render an image, resulting in very slow rendering times.
We present a method that overcomes this limitation by learning a direct mapping from camera rays to locations along the ray that are most likely to influence the pixel's final appearance.
arXiv Detail & Related papers (2021-11-05T17:50:44Z)
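The TermiNeRF entry above learns a direct mapping from a camera ray to the locations along it that matter most for the pixel. Here is a minimal sketch of that idea, assuming a small PyTorch network that outputs a distribution over depth bins from which a handful of samples are drawn; the network shape, bin count, and sampling scheme are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class RayTerminationNet(nn.Module):
    """Predicts, per ray, a distribution over depth bins where the surface is likely
    to lie, so the expensive volume network only needs to be queried there."""
    def __init__(self, n_bins=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins),
        )

    def forward(self, rays):                        # rays: (N, 6) = origin + direction
        return torch.softmax(self.net(rays), -1)    # (N, n_bins) bin probabilities

def sample_depths(bin_probs, near=0.1, far=4.0, n_samples=8):
    """Draw a few depth samples per ray according to the predicted bin probabilities."""
    n_bins = bin_probs.shape[-1]
    edges = torch.linspace(near, far, n_bins + 1, device=bin_probs.device)
    idx = torch.multinomial(bin_probs, n_samples, replacement=True)   # (N, n_samples)
    lo, hi = edges[idx], edges[idx + 1]
    return lo + (hi - lo) * torch.rand_like(lo)                       # jitter within each bin

rays = torch.randn(4, 6)
depths = sample_depths(RayTerminationNet()(rays))
print(depths.shape)   # torch.Size([4, 8]): only 8 volume queries per ray instead of hundreds
```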
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric renderers.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
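The Light Field Networks entry above renders a ray with a single network evaluation. A minimal sketch of that pattern follows, encoding rays as 6D Plücker coordinates and feeding them to a small PyTorch MLP; the layer sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def plucker(origin, direction):
    """6D Plücker embedding of a ray: (direction, origin x direction).
    Every point on the ray yields the same moment, so the embedding identifies the ray itself."""
    d = F.normalize(direction, dim=-1)
    return torch.cat([d, torch.cross(origin, d, dim=-1)], dim=-1)

# Illustrative light field MLP: one forward pass per ray gives the final color,
# with no ray-marching loop over hundreds of 3D sample points.
lfn = nn.Sequential(nn.Linear(6, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3), nn.Sigmoid())

origins = torch.randn(1024, 3) * 0.1
dirs = torch.randn(1024, 3)
colors = lfn(plucker(origins, dirs))   # (1024, 3): one evaluation per ray
print(colors.shape)
```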
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
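The pixelNeRF entry above conditions the radiance field on features from the input view(s). Below is a minimal sketch of the pixel-aligned conditioning idea: project a 3D query point into an input image, sample the 2D feature map at that pixel, and feed the feature to a NeRF-style MLP together with the point. The projection model, feature extractor, and network sizes here are simplified placeholders, not pixelNeRF's exact components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pixel_aligned_features(feat_map, points, K):
    """Project 3D points (camera coordinates) with intrinsics K into the image
    and bilinearly sample the 2D feature map at those pixel locations."""
    uvw = points @ K.T                                   # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]                        # pixel coordinates
    H, W = feat_map.shape[-2:]
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,      # normalize to [-1, 1] for grid_sample
                        2 * uv[:, 1] / (H - 1) - 1], -1)
    sampled = F.grid_sample(feat_map[None], grid[None, :, None], align_corners=True)
    return sampled[0, :, :, 0].T                          # (N_points, C)

feat_map = torch.randn(32, 64, 64)                        # CNN features of the single input view
K = torch.tensor([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])
points = torch.rand(256, 3) * torch.tensor([0.5, 0.5, 2.0]) + torch.tensor([-0.25, -0.25, 1.0])

# NeRF-style MLP conditioned on the per-point image feature (density + RGB output).
mlp = nn.Sequential(nn.Linear(3 + 32, 128), nn.ReLU(), nn.Linear(128, 4))
out = mlp(torch.cat([points, pixel_aligned_features(feat_map, points, K)], -1))
print(out.shape)   # torch.Size([256, 4])
```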
This list is automatically generated from the titles and abstracts of the papers on this site.