Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation
- URL: http://arxiv.org/abs/2401.14354v1
- Date: Thu, 25 Jan 2024 17:58:51 GMT
- Title: Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation
- Authors: Jiaxu Wang, Ziyi Zhang, Renjing Xu
- Abstract summary: This paper introduces a novel paradigm for the generalizable neural radiance field (NeRF).
We propose the first paradigm that constructs the generalizable neural field on point-based rather than image-based rendering.
Our approach explicitly models visibility with geometric priors and augments it with neural features.
- Score: 7.203073346844801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel paradigm for the generalizable neural radiance field (NeRF). Previous generic NeRF methods combine multiview stereo techniques with image-based neural rendering for generalization, yielding impressive results while suffering from three issues. First, occlusions often result in inconsistent feature matching. Second, they produce distortions and artifacts at geometric discontinuities and locally sharp shapes because they process sampled points individually and aggregate features coarsely. Third, their image-based representations degrade severely when the source views are not close enough to the target view. To address these challenges, we propose the first paradigm that constructs the generalizable neural field on point-based rather than image-based rendering, which we call the Generalizable neural Point Field (GPF). Our approach explicitly models visibility with geometric priors and augments it with neural features. We propose a novel nonuniform log sampling strategy that improves both rendering speed and reconstruction quality. Moreover, we present a learnable kernel spatially augmented with features for feature aggregation, mitigating distortions in regions with drastically varying geometry. In addition, our representation can be easily manipulated. Experiments show that our model delivers better geometry, view consistency, and rendering quality than all counterparts and benchmarks on three datasets in both generalization and finetuning settings, preliminarily demonstrating the potential of the new paradigm for generalizable NeRF.
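The nonuniform log sampling concentrates ray samples near surfaces hinted at by the point cloud. Below is a minimal, hypothetical sketch of such a schedule; the function name, the symmetric front/back split, and the logarithmic base are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def log_samples_around_surface(t_surface, t_near, t_far, n_samples=32, base=2.0):
    """Place ray samples densely near an estimated surface depth and
    logarithmically sparser away from it (illustrative sketch only).

    t_surface : estimated hit depth along the ray (e.g., from point-cloud priors)
    t_near, t_far : ray segment bounds
    """
    half = n_samples // 2
    # Log-spaced offsets: spacing grows with distance from the surface.
    offsets = np.logspace(0.0, 1.0, half, base=base) - 1.0   # in [0, base-1]
    offsets = offsets / offsets.max()                        # normalize to [0, 1]
    front = t_surface - offsets[::-1] * (t_surface - t_near) # samples before surface
    back = t_surface + offsets * (t_far - t_surface)         # samples behind surface
    return np.clip(np.concatenate([front, back]), t_near, t_far)

# Example: a ray spanning [0.5, 4.0] with an estimated surface at depth 2.2
ts = log_samples_around_surface(2.2, 0.5, 4.0, n_samples=16)
print(np.round(ts, 3))
```

The key property is that the step size grows with distance from the estimated surface, so most of the sample budget lands where the radiance field changes fastest.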
Related papers
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve the image-to-3D problem from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
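For intuition, here is a minimal sketch of the hypernetwork idea the summary alludes to: a mapping from an image embedding to the weights of a small SDF MLP. All sizes, names, and the linear hypernetwork are assumptions for illustration; the actual architecture also relies on geometry-encoding volumes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: image embedding -> weights of a tiny 1-hidden-layer SDF MLP.
EMB, HID, IN = 64, 32, 3
n_weights = IN * HID + HID + HID * 1 + 1      # W1, b1, W2, b2 of the SDF MLP

# The "hypernetwork": here just a random linear map from embedding to MLP weights.
H = rng.normal(scale=0.05, size=(n_weights, EMB))

def sdf(points, emb):
    """Evaluate a per-object SDF whose parameters come from the hypernetwork."""
    w = H @ emb                               # predict all MLP parameters at once
    i = 0
    W1 = w[i:i + IN * HID].reshape(IN, HID); i += IN * HID
    b1 = w[i:i + HID]; i += HID
    W2 = w[i:i + HID].reshape(HID, 1); i += HID
    b2 = w[i:]
    h = np.tanh(points @ W1 + b1)             # hidden layer
    return h @ W2 + b2                        # signed distance per point

emb = rng.normal(size=EMB)                    # stand-in for an image encoding
print(sdf(rng.normal(size=(5, 3)), emb).shape)  # (5, 1)
```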
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, neural Poisson surface reconstruction not only overcomes the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in reconstruction quality, running time, and resolution agnosticism.
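For context, classical Poisson surface reconstruction, which nPSR revisits with a neural solver, recovers an indicator function whose gradient matches a smoothed normal field built from the oriented input points:

```latex
% Classical Poisson surface reconstruction (Kazhdan et al., 2006):
% find the indicator function chi whose gradient best matches the
% normal field V derived from the oriented input points.
\min_{\chi} \int \lVert \nabla \chi - \vec{V} \rVert^{2} \, dx
\quad\Longrightarrow\quad
\Delta \chi = \nabla \cdot \vec{V}
```

nPSR's contribution, per the summary, is to solve this reconstruction problem with a resolution-agnostic neural architecture instead of the classical discretized solver.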
arXiv Detail & Related papers (2023-08-03T13:56:07Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
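A minimal sketch of what one geometry-guided attention step could look like: proxy vertices act as queries over features sampled from the source views. The single attention head, shapes, and softmax pooling below are assumptions, not GM-NeRF's exact mechanism.

```python
import numpy as np

def geometry_guided_attention(proxy_q, view_feats):
    """Hypothetical single-head attention: geometry-proxy vertices query
    multi-view 2D features to gather per-vertex appearance codes.

    proxy_q    : (V, D) one query per proxy vertex
    view_feats : (N, D) features sampled from the source views
    """
    d = proxy_q.shape[-1]
    scores = proxy_q @ view_feats.T / np.sqrt(d)          # (V, N) similarity
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)              # softmax over features
    return attn @ view_feats                              # (V, D) appearance code

rng = np.random.default_rng(1)
codes = geometry_guided_attention(rng.normal(size=(100, 16)), rng.normal(size=(48, 16)))
print(codes.shape)  # (100, 16)
```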
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- GARF: Geometry-Aware Generalized Neural Radiance Field [47.76524984421343]
We propose Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy.
Our framework infers unseen scenes at both the pixel scale and the geometry scale with only a few input images.
Experiments on indoor and outdoor datasets show that GARF reduces samples by more than 25%, while improving rendering quality and 3D geometry estimation.
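As a toy illustration of geometry-aware sampling, the sketch below restricts samples to a narrow band around a coarse depth estimate instead of spreading them over the whole ray; the band width and sample counts are assumptions, not the GADS implementation.

```python
import numpy as np

def dynamic_band_samples(d_coarse, t_near, t_far, n=24, band=0.1):
    """Illustrative geometry-aware sampling: place all samples in a narrow
    band around a coarse depth estimate rather than the full ray segment,
    so fewer samples cover the region that actually matters."""
    lo = max(t_near, d_coarse - band)
    hi = min(t_far, d_coarse + band)
    return np.linspace(lo, hi, n)

print(dynamic_band_samples(2.0, 0.5, 6.0))  # 24 samples packed near depth 2.0
```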
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
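The volume rendering step mentioned here is the standard NeRF compositing rule, which integrates the MLP's per-sample colors and densities along a ray; below is a compact NumPy version of that standard equation.

```python
import numpy as np

def composite(rgb, sigma, deltas):
    """Standard NeRF-style volume rendering along one ray.

    rgb    : (S, 3) color predicted by the conditioned MLP at each sample
    sigma  : (S,)  density at each sample
    deltas : (S,)  distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-sigma * deltas)                            # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))    # transmittance
    weights = alpha * trans                                          # per-sample contribution
    return (weights[:, None] * rgb).sum(axis=0)                      # final pixel color

rng = np.random.default_rng(2)
pixel = composite(rng.uniform(size=(32, 3)), rng.uniform(size=32), np.full(32, 0.05))
print(pixel)
```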
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse Views [40.7986573030214]
We introduce SparseNeuS, a novel neural rendering based method for the task of surface reconstruction from multi-view images.
SparseNeuS generalizes to new scenes and works well with sparse inputs (as few as 2 or 3 images).
arXiv Detail & Related papers (2022-06-12T13:34:03Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops significantly for more complex scenes or sparser view coverage.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in monocular geometry prediction, we explore the utility such cues provide for improving neural implicit surface reconstruction.
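The cues in question are monocular depth and normal predictions. Here is a hypothetical sketch of how they might be folded into the training loss; the weights and exact loss forms are assumptions, and MonoSDF additionally scale-and-shift aligns the monocular depth before comparing.

```python
import numpy as np

def total_loss(rgb_pred, rgb_gt, depth_pred, depth_mono, normal_pred, normal_mono,
               w_depth=0.1, w_normal=0.05):
    """Illustrative combination of an RGB reconstruction loss with monocular
    geometric cues (predicted depth and normals). Weights are hypothetical."""
    l_rgb = np.mean((rgb_pred - rgb_gt) ** 2)
    l_depth = np.mean((depth_pred - depth_mono) ** 2)
    # Normal consistency: penalize angular deviation via (1 - cosine similarity).
    cos = np.sum(normal_pred * normal_mono, axis=-1)
    l_normal = np.mean(1.0 - cos)
    return l_rgb + w_depth * l_depth + w_normal * l_normal

rng = np.random.default_rng(3)
n = rng.normal(size=(10, 3)); n /= np.linalg.norm(n, axis=-1, keepdims=True)
print(total_loss(rng.uniform(size=(10, 3)), rng.uniform(size=(10, 3)),
                 rng.uniform(size=10), rng.uniform(size=10), n, n))
```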
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
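As a generic stand-in for how image-based generalizable methods gather per-point evidence from a few source views, the sketch below projects a 3D sample into each view and pools the fetched features. MVSNeRF itself builds a plane-swept cost volume rather than mean-pooling, and all camera conventions here are illustrative assumptions.

```python
import numpy as np

def project_and_fetch(point, K, w2c, feat_map):
    """Project a 3D point into one source view and fetch the nearest feature
    (pinhole conventions here are illustrative assumptions)."""
    cam = w2c[:3, :3] @ point + w2c[:3, 3]         # world -> camera coordinates
    u, v, z = K @ cam
    u, v = u / z, v / z                            # perspective divide
    return feat_map[int(round(v)), int(round(u))]  # nearest-neighbor lookup

def aggregate(point, cams, feat_maps):
    """Mean-pool per-view features for one sample point; real generalizable
    NeRFs replace this mean with a learned aggregator."""
    return np.mean([project_and_fetch(point, K, w2c, f)
                    for (K, w2c), f in zip(cams, feat_maps)], axis=0)

# Toy example: three identical front-facing cameras.
K = np.array([[50., 0., 32.], [0., 50., 32.], [0., 0., 1.]])
cams = [(K, np.eye(4))] * 3
feat_maps = [np.random.default_rng(i).normal(size=(64, 64, 8)) for i in range(3)]
print(aggregate(np.array([0.1, 0.0, 2.0]), cams, feat_maps).shape)  # (8,)
```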
arXiv Detail & Related papers (2021-03-29T13:15:23Z)