Light Field Neural Rendering
- URL: http://arxiv.org/abs/2112.09687v1
- Date: Fri, 17 Dec 2021 18:58:05 GMT
- Title: Light Field Neural Rendering
- Authors: Mohammed Suhail, Carlos Esteves, Leonid Sigal, Ameesh Makadia
- Abstract summary: Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects.
We introduce a model that combines the strengths and mitigates the limitations of these two directions.
Our model outperforms the state-of-the-art on multiple forward-facing and 360° datasets.
- Score: 47.7586443731997
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical light field rendering for novel view synthesis can accurately
reproduce view-dependent effects such as reflection, refraction, and
translucency, but requires a dense view sampling of the scene. Methods based on
geometric reconstruction need only sparse views, but cannot accurately model
non-Lambertian effects. We introduce a model that combines the strengths and
mitigates the limitations of these two directions. By operating on a
four-dimensional representation of the light field, our model learns to
represent view-dependent effects accurately. By enforcing geometric constraints
during training and inference, the scene geometry is implicitly learned from a
sparse set of views. Concretely, we introduce a two-stage transformer-based
model that first aggregates features along epipolar lines, then aggregates
features along reference views to produce the color of a target ray. Our model
outperforms the state-of-the-art on multiple forward-facing and 360°
datasets, with larger margins on scenes with severe view-dependent variations.
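The two-stage design described above can be sketched concretely: features gathered along each reference view's epipolar line are fused first, and the resulting per-view summaries are then fused across reference views to predict the target ray's color. The snippet below is a minimal illustration of that idea in PyTorch, not the authors' released implementation; the module structure, feature dimensions, and the use of nn.TransformerEncoder are assumptions made for this sketch.

```python
# Minimal sketch (not the authors' code) of the two-stage aggregation in the
# abstract: fuse features along each epipolar line, then across reference views.
import torch
import torch.nn as nn


class TwoStageEpipolarAggregator(nn.Module):
    def __init__(self, feat_dim: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        epi_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        view_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        # Stage 1: attend over the samples on each reference view's epipolar line.
        self.epipolar_transformer = nn.TransformerEncoder(epi_layer, n_layers)
        # Stage 2: attend over the per-view summaries produced by stage 1.
        self.view_transformer = nn.TransformerEncoder(view_layer, n_layers)
        self.to_rgb = nn.Linear(feat_dim, 3)

    def forward(self, epipolar_feats: torch.Tensor) -> torch.Tensor:
        # epipolar_feats: (rays, views, samples_per_line, feat_dim), i.e. image
        # features already gathered along each reference view's epipolar line.
        r, v, s, d = epipolar_feats.shape
        x = epipolar_feats.reshape(r * v, s, d)
        x = self.epipolar_transformer(x)            # fuse along each epipolar line
        per_view = x.mean(dim=1).reshape(r, v, d)   # one summary token per view
        fused = self.view_transformer(per_view)     # fuse across reference views
        return torch.sigmoid(self.to_rgb(fused.mean(dim=1)))  # (rays, 3) colors


# Example: 8 target rays, 5 reference views, 32 samples per epipolar line.
feats = torch.randn(8, 5, 32, 64)
colors = TwoStageEpipolarAggregator()(feats)  # -> (8, 3)
```

In the full pipeline the per-sample features would come from projecting points along the target ray into each reference image (an epipolar sampling sketch appears after the related-papers list below); here they are assumed to be pre-gathered.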
Related papers
- View-consistent Object Removal in Radiance Fields [14.195400035176815]
Radiance Fields (RFs) have emerged as a crucial technology for 3D scene representation.
Current methods rely on per-frame 2D image inpainting, which often fails to maintain consistency across views.
We introduce a novel RF editing pipeline that significantly enhances consistency by requiring the inpainting of only a single reference image.
arXiv Detail & Related papers (2024-08-04T17:57:23Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2Kx2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- Learning to Render Novel Views from Wide-Baseline Stereo Pairs [26.528667940013598]
We introduce a method for novel view synthesis given only a single wide-baseline stereo image pair.
Existing approaches to novel view synthesis from sparse observations fail because they recover incorrect 3D geometry.
We propose an efficient, image-space epipolar line sampling scheme to assemble image features for a target ray (a sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-04-17T17:40:52Z)
- Generalizable Patch-Based Neural Rendering [46.41746536545268]
We propose a new paradigm for learning models that can synthesize novel views of unseen scenes.
Our method is capable of predicting the color of a target ray in a novel scene directly, just from a collection of patches sampled from the scene.
We show that our approach outperforms the state-of-the-art on novel view synthesis of unseen scenes even when trained with considerably less data than prior work.
arXiv Detail & Related papers (2022-07-21T17:57:04Z)
- Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its performance drops significantly when only sparse inputs are given.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports photorealistic lighting effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
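The wide-baseline stereo entry above refers to an image-space epipolar line sampling scheme, and the main paper's first stage relies on the same geometric idea: points sampled along a target ray project to pixels on a reference view's epipolar line, where image features are gathered. Below is a minimal sketch assuming pinhole cameras with known intrinsics and extrinsics; the function and variable names are illustrative and not taken from either paper.

```python
# Minimal sketch of epipolar sampling: project points along a target ray into a
# reference camera to get the pixels at which features would be gathered.
# Camera conventions (world-to-camera [R|t], row-vector math) are assumptions.
import numpy as np


def epipolar_pixels(ray_origin, ray_dir, K, R, t, near=0.5, far=5.0, n_samples=32):
    """Return (n_samples, 2) pixel coordinates tracing the epipolar line segment.

    ray_origin, ray_dir: (3,) target ray in world space.
    K: (3, 3) reference-camera intrinsics; R, t: world-to-camera rotation/translation.
    """
    depths = np.linspace(near, far, n_samples)
    points = ray_origin[None, :] + depths[:, None] * ray_dir[None, :]  # (S, 3) world
    cam = points @ R.T + t                    # world -> reference camera frame
    uvw = cam @ K.T                           # perspective projection (homogeneous)
    return uvw[:, :2] / uvw[:, 2:3]           # divide by depth -> pixel coordinates


# Example: reference camera at the origin looking down +z, ray offset along x.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
uv = epipolar_pixels(np.array([0.2, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), K, R, t)
print(uv.shape)  # (32, 2); features would be bilinearly sampled at these pixels
```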