Light Field Implicit Representation for Flexible Resolution Reconstruction
- URL: http://arxiv.org/abs/2112.00185v1
- Date: Tue, 30 Nov 2021 23:59:02 GMT
- Title: Light Field Implicit Representation for Flexible Resolution Reconstruction
- Authors: Paramanand Chandramouli, Hendrik Sommerhoff, Andreas Kolb
- Abstract summary: We propose an implicit representation model for 4D light fields conditioned on a sparse set of input views.
Our model is trained to output the light field values for a continuous range of coordinates.
Experiments show that our method achieves state-of-the-art performance for the task of view synthesis while being computationally fast.
- Score: 9.173467982128514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the recent advances in implicitly representing signals with
trained neural networks, we aim to learn a continuous representation for
narrow-baseline 4D light fields. We propose an implicit representation model
for 4D light fields which is conditioned on a sparse set of input views. Our
model is trained to output the light field values for a continuous range of
query spatio-angular coordinates. Given a sparse set of input views, our scheme
can super-resolve the input in both spatial and angular domains by flexible
factors. Our model consists of a feature extractor and a decoder, which are trained on a
dataset of light field patches. The feature extractor captures per-pixel
features from the input views. These features can be resized to a desired
spatial resolution and fed to the decoder along with the query coordinates.
This formulation enables us to reconstruct light field views at any desired
spatial and angular resolution. Additionally, our network can handle scenarios
in which the input views are either low-resolution or have missing pixels.
Experiments show that our method achieves state-of-the-art performance for the
task of view synthesis while being computationally fast.
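The pipeline the abstract describes (per-pixel features from the input views, resizing of the feature map, and a decoder queried with continuous spatio-angular coordinates) can be summarized in a short sketch. The PyTorch code below is a minimal illustration under assumed names and sizes; the encoder and decoder layers and the (x, y, u, v) coordinate convention are guesses, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightFieldImplicitModel(nn.Module):
    """Minimal sketch of the pipeline described in the abstract: a
    convolutional feature extractor over stacked input views and an MLP
    decoder queried with continuous spatio-angular coordinates.
    All module names and sizes here are illustrative assumptions."""

    def __init__(self, num_views: int = 4, feat_ch: int = 64):
        super().__init__()
        # Per-pixel features from the channel-stacked input views.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * num_views, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Decoder maps (local feature, query coordinate) -> RGB.
        self.decoder = nn.Sequential(
            nn.Linear(feat_ch + 4, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, views, coords, out_hw):
        # views:  (B, V, 3, h, w)  sparse input views
        # coords: (B, N, 4)        query (x, y, u, v) in [-1, 1]
        # out_hw: (H, W)           desired spatial resolution
        b, v, c, h, w = views.shape
        feat = self.encoder(views.reshape(b, v * c, h, w))
        # Resize features to the target spatial resolution ("flexible factor"),
        feat = F.interpolate(feat, size=out_hw, mode="bilinear", align_corners=False)
        # then sample a feature vector at the spatial part of each query.
        grid = coords[:, :, None, :2]                        # (B, N, 1, 2)
        q = F.grid_sample(feat, grid, align_corners=False)   # (B, C, N, 1)
        q = q.squeeze(-1).permute(0, 2, 1)                   # (B, N, C)
        return self.decoder(torch.cat([q, coords], dim=-1))  # (B, N, 3) radiance

model = LightFieldImplicitModel(num_views=4)
views = torch.randn(1, 4, 3, 32, 32)
queries = torch.rand(1, 4096, 4) * 2 - 1          # continuous (x, y, u, v)
rgb = model(views, queries, out_hw=(64, 64))      # 2x spatial super-resolution
```

Because the decoder is evaluated at arbitrary continuous coordinates, the output resolution is decoupled from the input: querying a denser coordinate grid than the input views is what yields the flexible spatial and angular super-resolution.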
Related papers
- Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling [47.86734601629109]
NDE transfers the concept of feature-grid-based spatial encoding to the angular domain.
Experiments on both synthetic and real datasets show that a NeRF model with NDE outperforms the state of the art on view synthesis of specular objects.
arXiv Detail & Related papers (2024-05-23T17:56:34Z)
- NeLF-Pro: Neural Light Field Probes for Multi-Scale Novel View Synthesis [27.362216326282145]
NeLF-Pro is a novel representation to model and reconstruct light fields in diverse natural scenes.
Our central idea is to bake the scene's light field into spatially varying learnable representations.
arXiv Detail & Related papers (2023-12-20T17:18:44Z)
- Learning-based Spatial and Angular Information Separation for Light Field Compression [29.827366575505557]
We propose a novel neural network that can separate angular and spatial information of a light field.
The network represents spatial information using spatial kernels shared among all Sub-Aperture Images (SAIs), and angular information using sets of angular kernels for each SAI; a toy sketch of this factorization appears after this list.
arXiv Detail & Related papers (2023-04-13T08:02:38Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Content-aware Warping for View Synthesis [110.54435867693203]
We propose content-aware warping, which adaptively learns the weights for pixels of a relatively large neighborhood from their contextual information via a lightweight neural network.
Based on this learnable warping module, we propose a new end-to-end learning-based framework for novel view synthesis from two source views.
Experimental results on structured light field datasets with wide baselines and unstructured multi-view datasets show that the proposed method significantly outperforms state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-01-22T11:35:05Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- Generative Modelling of BRDF Textures from Flash Images [50.660026124025265]
We learn a latent space for easy capture, semantic editing, and consistent, efficient reproduction of visual material appearance.
In a second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters.
arXiv Detail & Related papers (2021-02-23T18:45:18Z)
- Learning light field synthesis with Multi-Plane Images: scene encoding as a recurrent segmentation task [30.058283056074426]
This paper addresses the problem of view synthesis from large baseline light fields by turning a sparse set of input views into a Multi-plane Image (MPI).
Because available datasets are scarce, we propose a lightweight network that does not require extensive training.
Our model does not learn to estimate RGB layers but only encodes the scene geometry within MPI alpha layers, which comes down to a segmentation task.
arXiv Detail & Related papers (2020-02-12T14:35:54Z)
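As noted above, the spatial-angular separation idea from the light field compression entry admits a short sketch: a convolution whose weights are shared across all SAIs carries spatial information, while a per-SAI channel-mixing matrix carries angular information. The layer below is one plausible, assumed reading of the two-sentence summary, not that paper's actual architecture.

```python
import torch
import torch.nn as nn

class SpatialAngularConv(nn.Module):
    """Illustrative factorized light-field layer: one spatial kernel shared
    by all sub-aperture images (SAIs), plus a per-SAI angular mixing matrix.
    Assumed design, for exposition only."""

    def __init__(self, num_sais: int, in_ch: int, out_ch: int):
        super().__init__()
        # Spatial information: identical 3x3 convolution weights for every SAI.
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Angular information: one out_ch x out_ch mixing matrix per SAI.
        self.angular = nn.Parameter(0.02 * torch.randn(num_sais, out_ch, out_ch))

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        # lf: (batch, num_sais, channels, height, width)
        b, s, c, h, w = lf.shape
        feat = self.spatial(lf.reshape(b * s, c, h, w)).reshape(b, s, -1, h, w)
        # Per-SAI channel mixing:
        # out[b,s,o,h,w] = sum_c angular[s,o,c] * feat[b,s,c,h,w]
        return torch.einsum("soc,bschw->bsohw", self.angular, feat)

# A 7x7 light field with 3-channel SAIs at 64x64 resolution:
layer = SpatialAngularConv(num_sais=49, in_ch=3, out_ch=16)
out = layer(torch.randn(1, 49, 3, 64, 64))  # -> (1, 49, 16, 64, 64)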