Anisotropic Neural Representation Learning for High-Quality Neural
Rendering
- URL: http://arxiv.org/abs/2311.18311v2
- Date: Mon, 11 Mar 2024 02:22:50 GMT
- Title: Anisotropic Neural Representation Learning for High-Quality Neural
Rendering
- Authors: Y. Wang, J. Xu, Y. Zeng and Y. Gong
- Abstract summary: We propose an anisotropic neural representation learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction.
Our method is flexible and can be plugged into NeRF-based frameworks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRFs) have achieved impressive view synthesis
results by learning an implicit volumetric representation from multi-view
images. To project the implicit representation into an image, NeRF employs
volume rendering that approximates the continuous integrals of rays as an
accumulation of the colors and densities of the sampled points. Although this
approximation enables efficient rendering, it ignores the direction information
in point intervals, resulting in ambiguous features and limited reconstruction
quality. In this paper, we propose an anisotropic neural representation
learning method that utilizes learnable view-dependent features to improve
scene representation and reconstruction. We model the volumetric function as
spherical harmonic (SH)-guided anisotropic features, parameterized by
multilayer perceptrons, facilitating ambiguity elimination while preserving the
rendering efficiency. To achieve robust scene reconstruction without anisotropy
overfitting, we regularize the energy of the anisotropic features during
training. Our method is flexible and can be plugged into NeRF-based
frameworks. Extensive experiments show that the proposed representation can
boost the rendering quality of various NeRFs and achieve state-of-the-art
rendering performance on both synthetic and real-world scenes.
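For concreteness, the sketch below shows how the three ingredients of the abstract could fit together: an MLP that predicts, per 3D point, a density and spherical-harmonic (SH) coefficients for each feature channel; contraction of those coefficients with the SH basis of the view direction to obtain an anisotropic (view-dependent) feature; and the standard NeRF quadrature C = sum_i T_i (1 - exp(-sigma_i * delta_i)) c_i with T_i = prod_{j<i} (1 - alpha_j). This is a hypothetical sketch, not the authors' implementation: the names (AnisotropicField, sh_basis_l1), the degree-1 SH truncation, and the exact definition of the feature "energy" are illustrative assumptions.

```python
import torch
import torch.nn as nn


def sh_basis_l1(dirs: torch.Tensor) -> torch.Tensor:
    """Real spherical-harmonic basis up to degree 1 (4 coefficients).

    dirs: (N, 3) unit view directions. Degree 1 is an illustrative
    truncation; the abstract does not fix the SH order.
    """
    x, y, z = dirs.unbind(-1)
    return torch.stack([
        0.28209479 * torch.ones_like(x),  # Y_0^0 (isotropic term)
        0.48860251 * y,                   # Y_1^{-1}
        0.48860251 * z,                   # Y_1^0
        0.48860251 * x,                   # Y_1^1
    ], dim=-1)


class AnisotropicField(nn.Module):
    """Hypothetical NeRF-style field with SH-guided anisotropic features."""

    def __init__(self, feat_dim: int = 32, sh_dim: int = 4):
        super().__init__()
        self.feat_dim, self.sh_dim = feat_dim, sh_dim
        # One MLP predicts density plus SH coefficients per feature channel.
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 1 + feat_dim * sh_dim),
        )
        self.color_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, dirs):
        out = self.mlp(xyz)                                      # (N, 1 + F*S)
        sigma = torch.relu(out[..., 0])                          # density
        coeffs = out[..., 1:].reshape(-1, self.feat_dim, self.sh_dim)
        # Contract SH coefficients with the view-direction basis:
        # the feature becomes direction-dependent (anisotropic).
        feat = (coeffs * sh_basis_l1(dirs)[:, None, :]).sum(-1)  # (N, F)
        rgb = self.color_head(feat)
        # "Energy" of the anisotropic part: penalizing the degree >= 1 SH
        # coefficients is one plausible reading of the regularizer that
        # prevents anisotropy overfitting.
        energy = coeffs[..., 1:].pow(2).mean()
        return sigma, rgb, energy


def volume_render(sigma, rgb, deltas):
    """Standard NeRF quadrature: C = sum_i T_i (1 - exp(-sigma_i delta_i)) c_i.

    sigma: (R, S) densities, rgb: (R, S, 3) colors, deltas: (R, S)
    interval lengths along each of R rays with S samples.
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)                     # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1),
        dim=-1,
    )[:, :-1]                                                    # T_i
    weights = trans * alpha                                      # (R, S)
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)              # (R, 3)
```

During training, the energy term would be added to the photometric loss, e.g. loss = mse(pred_rgb, gt_rgb) + lam * energy with some small weight lam; the abstract states that the energy of the anisotropic features is regularized but does not specify the weighting or schedule.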
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- NePF: Neural Photon Field for Single-Stage Inverse Rendering [6.977356702921476]
We present a novel single-stage framework, Neural Photon Field (NePF), to address the ill-posed inverse rendering from multi-view images.
NePF achieves this by fully utilizing the physical implication behind the weight function of neural implicit surfaces.
We evaluate our method on both real and synthetic datasets.
arXiv Detail & Related papers (2023-11-20T06:15:46Z)
- FPO++: Efficient Encoding and Rendering of Dynamic Neural Radiance Fields by Analyzing and Enhancing Fourier PlenOctrees [3.5884936187733403]
Fourier PlenOctrees have been shown to be an efficient representation for real-time rendering of dynamic Neural Radiance Fields (NeRFs).
In this paper, we perform an in-depth analysis of the artifacts this representation introduces and leverage the resulting insights to propose an improved representation.
arXiv Detail & Related papers (2023-10-31T17:59:58Z)
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It extends TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis [90.03590032170169]
We present intrinsic neural radiance fields, dubbed IntrinsicNeRF, which introduce intrinsic decomposition into the NeRF-based neural rendering method.
Our experiments and editing samples on both object-specific/room-scale scenes and synthetic/real-world data demonstrate that we can obtain consistent intrinsic decomposition results.
arXiv Detail & Related papers (2022-10-02T22:45:11Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the reconstruction inconsistency that arises from insufficient viewpoints; a minimal sketch of this ray-entropy idea follows the list below.
We achieve consistently improved performance over existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields [40.72851892972173]
We introduce Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance, structured using a collection of spatially-varying scene properties.
We show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
arXiv Detail & Related papers (2021-12-07T18:58:37Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance field (NeRF) methods have demonstrated impressive novel view synthesis performance.
In this work we address a clear limitation of the vanilla coarse-to-fine approach: it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
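As referenced in the InfoNeRF entry above, here is a minimal sketch of the ray-entropy regularization idea, assuming access to the per-ray volume-rendering weights (e.g. the weights computed inside volume_render earlier). The function name and the handling of empty rays are assumptions; the paper's exact formulation (such as masking rays with near-zero opacity) may differ.

```python
import torch


def ray_entropy_loss(weights: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    """Shannon entropy of the normalized per-ray weight distribution.

    weights: (num_rays, num_samples) volume-rendering weights.
    Minimizing this entropy pushes each ray to concentrate its density
    on a narrow interval, discouraging the diffuse "foggy" geometry
    that typically appears when training views are scarce.
    """
    # Normalize the weights along each ray into a probability distribution.
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)  # (num_rays,)
    return entropy.mean()


# Usage (hypothetical): total = photometric_loss + lambda_ent * ray_entropy_loss(w)
```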