GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency
- URL: http://arxiv.org/abs/2301.10941v3
- Date: Thu, 27 Apr 2023 05:34:01 GMT
- Title: GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency
- Authors: Min-seop Kwak, Jiuhn Song, Seungryong Kim
- Abstract summary: We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometry-aware consistency regularization.
We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
- Score: 31.22435282922934
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a novel framework to regularize Neural Radiance Field (NeRF) in a
few-shot setting with a geometry-aware consistency regularization. The proposed
approach leverages a rendered depth map at an unobserved viewpoint to warp sparse
input images to the unobserved viewpoint and impose them as pseudo ground
truths to facilitate learning of NeRF. By encouraging such geometry-aware
consistency at the feature level instead of using a pixel-level reconstruction
loss, we regularize the NeRF at semantic and structural levels while allowing
for modeling of view-dependent radiance to account for color variations across
viewpoints. We also propose an effective method to filter out erroneous warped
solutions, along with training strategies to stabilize training during
optimization. We show that our model achieves competitive results compared to
state-of-the-art few-shot NeRF models. Project page is available at
https://ku-cvlab.github.io/GeCoNeRF/.
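To make the mechanism concrete, below is a minimal PyTorch sketch of the kind of depth-guided warping and feature-level consistency loss the abstract describes. Every name here (backproject, warp_source_to_target, feature_net, the camera-to-world pose convention, the shared intrinsics K) is an illustrative assumption, not the authors' actual code or API.

```python
# Hypothetical sketch: warp a sparse input image to an unobserved viewpoint
# using a NeRF-rendered depth map, then compare at the feature level.
import torch
import torch.nn.functional as F

def backproject(depth, K_inv, pose_c2w):
    # Lift the depth map rendered at the unobserved (target) view to 3D world points.
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], -1).reshape(-1, 3).float()
    cam = (K_inv @ pix.T).T * depth.reshape(-1, 1)          # camera-space points
    return (pose_c2w[:3, :3] @ cam.T).T + pose_c2w[:3, 3]   # world-space points

def warp_source_to_target(src_img, depth_tgt, K, pose_tgt, pose_src):
    # Reproject target-view pixels into a sparse input (source) view and sample
    # its colors, yielding a pseudo ground truth for the unobserved view.
    H, W = depth_tgt.shape
    world = backproject(depth_tgt, torch.inverse(K), pose_tgt)
    R, t = pose_src[:3, :3], pose_src[:3, 3]
    cam_src = (R.T @ (world - t).T).T                       # world -> source camera
    pix = (K @ cam_src.T).T
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)
    gx = pix[:, 0] / (W - 1) * 2 - 1                        # normalize for grid_sample
    gy = pix[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([gx, gy], -1).reshape(1, H, W, 2)
    return F.grid_sample(src_img[None], grid, align_corners=True)[0]

def feature_consistency_loss(feature_net, rendered_tgt, warped_src, valid_mask):
    # Compare deep features instead of raw pixels so that legitimate
    # view-dependent color changes are not penalized; valid_mask filters
    # out erroneously warped (e.g., occluded) regions.
    f_r = feature_net(rendered_tgt[None])                   # (1, C, h, w)
    f_w = feature_net(warped_src[None])
    m = F.interpolate(valid_mask[None, None].float(), size=f_r.shape[-2:])
    return ((f_r - f_w).abs() * m).sum() / m.sum().clamp(min=1.0)
```

The valid_mask argument stands in for the paper's filtering of erroneous warped solutions (e.g., occluded pixels), and comparing deep features rather than raw pixels is what lets view-dependent color variation go unpenalized.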
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Field (NeRF).
Recent works demonstrate that frequency regularization of positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- $R^2$-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement [5.810659946867557]
Mesh reconstruction based on Neural Radiance Fields (NeRF) is popular in a variety of applications such as computer graphics, virtual reality, and medical imaging.
We propose a novel algorithm that progressively generates and optimizes meshes from multi-view images.
Our method delivers highly competitive and robust performance in both mesh rendering quality and geometric quality.
arXiv Detail & Related papers (2024-08-19T16:33:17Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- CorresNeRF: Image Correspondence Priors for Neural Radiance Fields [45.40164120559542]
CorresNeRF is a novel method that leverages image correspondence priors computed by off-the-shelf methods to supervise NeRF training.
We show that this simple yet effective technique of using correspondence priors can be applied as a plug-and-play module across different NeRF variants.
arXiv Detail & Related papers (2023-12-11T18:55:29Z)
- Self-Evolving Neural Radiance Fields [31.124406548504794]
We propose a novel framework, dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), that applies a self-training framework to neural radiance fields (NeRF).
We formulate few-shot NeRF into a teacher-student framework to guide the network to learn a more robust representation of the scene.
We show that applying our self-training framework to existing models improves the quality of the rendered images and achieves state-of-the-art performance in multiple settings.
arXiv Detail & Related papers (2023-12-02T02:28:07Z)
- NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds [60.1382112938132]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF achieves impressive performance when rendering novel views similar to the input views, but suffers on novel views that differ significantly from the training views.
arXiv Detail & Related papers (2023-04-13T06:40:08Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- VM-NeRF: Tackling Sparsity in NeRF with View Morphing [19.418298933260953]
NeRF aims to learn a continuous neural scene representation by using a finite set of input images taken from various viewpoints.
This paper introduces a novel method to generate geometrically consistent image transitions between viewpoints using View Morphing.
arXiv Detail & Related papers (2022-10-09T09:59:46Z)
- Nerfies: Deformable Neural Radiance Fields [44.923025540903886]
We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones.
Our approach augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric deformation field that warps each observed point into a canonical 5D NeRF.
We show that our method faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.
arXiv Detail & Related papers (2020-11-25T18:55:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.