Neural Rays for Occlusion-aware Image-based Rendering
- URL: http://arxiv.org/abs/2107.13421v1
- Date: Wed, 28 Jul 2021 15:09:40 GMT
- Title: Neural Rays for Occlusion-aware Image-based Rendering
- Authors: Yuan Liu and Sida Peng and Lingjie Liu and Qianqian Wang and Peng Wang
and Christian Theobalt and Xiaowei Zhou and Wenping Wang
- Abstract summary: We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis (NVS) task with multi-view images as input.
NeuRay can quickly generate high-quality novel view rendering images of unseen scenes with little finetuning.
- Score: 108.34004858785896
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a new neural representation, called Neural Ray (NeuRay), for the
novel view synthesis (NVS) task with multi-view images as input. Existing
neural scene representations for solving the NVS problem, such as NeRF, cannot
generalize to new scenes and take excessively long to train on each new scene
from scratch. Subsequent neural rendering methods based on stereo matching,
such as PixelNeRF, SRF, and IBRNet, are designed to generalize to unseen
scenes but suffer from view inconsistency in complex scenes with
self-occlusions. To address these issues, our NeuRay method represents every
scene by encoding the visibility of the rays associated with the input views.
This neural representation can be efficiently initialized from depths
estimated by external MVS methods, which enables it to generalize to new
scenes and produce satisfactory renderings without any training on the scene.
The initialized NeuRay can then be further optimized on each scene with a
short training time to enforce spatial coherence, ensuring view consistency in
the presence of severe self-occlusion. Experiments demonstrate that NeuRay can
quickly generate high-quality novel view images of unseen scenes with little
finetuning and can handle complex scenes with severe self-occlusions, with
which previous methods struggle.
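To make the visibility idea concrete, below is a minimal NumPy sketch of occlusion-aware blending under simplifying assumptions: visibility is approximated as a hard depth test against an externally estimated MVS depth map, whereas the paper learns a soft, optimizable per-ray visibility. The helper names and the per-view dictionary layout (`K`, `R`, `t`, `depth`, `image`) are hypothetical illustrations, not the paper's API.

```python
import numpy as np

def project(point_world, K, R, t):
    """Project a 3D world point into a pinhole camera; return (pixel, depth)."""
    p_cam = R @ point_world + t               # world -> camera coordinates
    depth = p_cam[2]
    uv = (K @ p_cam)[:2] / depth               # perspective division
    return uv, depth

def visibility(point_world, K, R, t, depth_map, margin=0.02):
    """~1 if the point lies in front of (or on) the surface this view observes."""
    uv, depth = project(point_world, K, R, t)
    u, v = np.round(uv).astype(int)
    h, w = depth_map.shape
    if depth <= 0 or not (0 <= u < w and 0 <= v < h):
        return 0.0                             # behind the camera or outside the frustum
    surface_depth = depth_map[v, u]            # depth estimated by the external MVS method
    return float(depth <= surface_depth + margin)

def blend_colors(point_world, views):
    """Occlusion-aware image-based blending: weight each input view by its visibility."""
    total, total_w = np.zeros(3), 1e-8
    for view in views:
        vis = visibility(point_world, view["K"], view["R"], view["t"], view["depth"])
        if vis == 0.0:
            continue                           # this view is occluded or out of frame: skip it
        uv, _ = project(point_world, view["K"], view["R"], view["t"])
        u, v = np.round(uv).astype(int)
        total += vis * view["image"][v, u]
        total_w += vis
    return total / total_w
```

A hard depth test like this only illustrates the initialization-from-MVS step; the learned, per-scene-optimizable visibility described in the abstract is what enforces spatial coherence under severe self-occlusion.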
Related papers
- 3D Reconstruction with Generalizable Neural Fields using Scene Priors [71.37871576124789]
We introduce a framework for training generalizable Neural Fields incorporating scene Priors (NFPs).
The NFP network maps any single-view RGB-D image into signed distance and radiance values.
A complete scene can be reconstructed by merging individual frames in the volumetric space without a fusion module.
arXiv Detail & Related papers (2023-09-26T18:01:02Z)
- KiloNeuS: Implicit Neural Representations with Real-Time Global Illumination [1.5749416770494706]
We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates.
KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes.
arXiv Detail & Related papers (2022-06-22T07:33:26Z)
- Neural Adaptive SCEne Tracing [24.781844909539686]
We present NAScenT, the first neural rendering method based on directly training a hybrid explicit-implicit neural representation.
NAScenT is capable of reconstructing challenging scenes, including large, sparsely populated volumes such as UAV-captured outdoor environments.
arXiv Detail & Related papers (2022-02-28T10:27:23Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance, by large margins, over existing neural view synthesis methods on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
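As a rough illustration of the ray-entropy idea in the InfoNeRF entry above, the sketch below computes standard volume-rendering weights along a ray and their Shannon entropy; the function names and the exact way such a term would enter the training loss are assumptions, not the paper's formulation.

```python
import numpy as np

def ray_weights(sigmas, deltas):
    """Standard volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return transmittance * alphas

def ray_entropy(sigmas, deltas, eps=1e-10):
    """Shannon entropy of the normalized contribution distribution along one ray."""
    w = ray_weights(sigmas, deltas)
    p = w / (w.sum() + eps)
    return -(p * np.log(p + eps)).sum()

# A regularizer such as `lambda_entropy * ray_entropy(sigmas, deltas)` could be
# added to the photometric loss so that density concentrates near a surface
# instead of being smeared along under-constrained rays.
```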
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric rendering methods.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
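The single-evaluation claim in the Light Field Networks entry above can be illustrated with a small sketch: a ray is embedded in Plücker coordinates and mapped to a color with one network call. The `toy_net` stand-in is purely hypothetical, included only so the sketch runs.

```python
import numpy as np

def plucker_embedding(origin, direction):
    """6D Plucker coordinates of a ray: (unit direction, origin x direction)."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

def render_ray(light_field_net, origin, direction):
    """One network evaluation maps the ray embedding directly to a color;
    no ray marching and no per-sample volume integration are performed."""
    return light_field_net(plucker_embedding(origin, direction))

# Purely hypothetical stand-in for the learned network, to make the sketch executable:
toy_net = lambda ray6: np.tanh(ray6[:3])
color = render_ray(toy_net, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```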
- Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes [48.0304999503795]
We introduce Stereo Radiance Fields (SRF), a neural view synthesis approach that is trained end-to-end.
SRF generalizes to new scenes, and requires only sparse views at test time.
Experiments show that SRF learns structure instead of overfitting on a scene.
arXiv Detail & Related papers (2021-04-14T15:38:57Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
- Neural Sparse Voxel Fields [151.20366604586403]
We introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering.
NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell.
Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results.
arXiv Detail & Related papers (2020-07-22T17:51:31Z)
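The empty-space skipping behind the Neural Sparse Voxel Fields entry above can be sketched as follows: only ray samples that land in occupied voxels are kept for evaluation of the implicit field. The plain set-of-occupied-cells lookup is a simplification assumed here in place of the paper's learned, progressively pruned octree.

```python
import numpy as np

def samples_in_occupied_voxels(ray_o, ray_d, t_vals, voxel_size, occupied_cells):
    """Keep only the ray samples whose voxel is marked occupied, so the implicit
    field is evaluated only where the sparse structure contains content."""
    pts = ray_o + t_vals[:, None] * ray_d                 # sample points along the ray
    cells = np.floor(pts / voxel_size).astype(int)        # integer voxel indices
    keep = np.array([tuple(c) in occupied_cells for c in cells])
    return pts[keep], t_vals[keep]

# Hypothetical usage: a set of occupied voxel indices stands in for the octree.
occupied = {(0, 0, 0), (0, 0, 1)}
pts, ts = samples_in_occupied_voxels(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                     np.linspace(0.05, 0.95, 10), 0.5, occupied)
```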
This list is automatically generated from the titles and abstracts of the papers on this site.