CorresNeRF: Image Correspondence Priors for Neural Radiance Fields
- URL: http://arxiv.org/abs/2312.06642v1
- Date: Mon, 11 Dec 2023 18:55:29 GMT
- Title: CorresNeRF: Image Correspondence Priors for Neural Radiance Fields
- Authors: Yixing Lao, Xiaogang Xu, Zhipeng Cai, Xihui Liu, Hengshuang Zhao
- Abstract summary: CorresNeRF is a novel method that leverages image correspondence priors computed by off-the-shelf methods to supervise NeRF training.
We show that this simple yet effective technique of using correspondence priors can be applied as a plug-and-play module across different NeRF variants.
- Score: 45.40164120559542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRFs) have achieved impressive results in novel view
synthesis and surface reconstruction tasks. However, their performance suffers
under challenging scenarios with sparse input views. We present CorresNeRF, a
novel method that leverages image correspondence priors computed by
off-the-shelf methods to supervise NeRF training. We design adaptive processes
for augmentation and filtering to generate dense and high-quality
correspondences. The correspondences are then used to regularize NeRF training
via the correspondence pixel reprojection and depth loss terms. We evaluate our
methods on novel view synthesis and surface reconstruction tasks with
density-based and SDF-based NeRF models on different datasets. Our method
outperforms previous methods in both photometric and geometric metrics. We show
that this simple yet effective technique of using correspondence priors can be
applied as a plug-and-play module across different NeRF variants. The project
page is at https://yxlao.github.io/corres-nerf.
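As a concrete illustration of this kind of supervision, the hedged sketch below lifts a matched pixel from one training view to 3D using its NeRF-rendered depth, reprojects it into the corresponding view, and penalizes both the pixel reprojection error and the depth disagreement. Function names, matrix conventions (3x3 intrinsics `K`, 4x4 `c2w`/`w2c` extrinsics), and the unweighted loss terms are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def unproject(pix, depth, K_inv, c2w):
    """Lift pixel coordinates (N,2) with depths (N,) to world points (N,3)."""
    ones = torch.ones_like(depth)
    rays_cam = (K_inv @ torch.stack([pix[:, 0], pix[:, 1], ones], dim=0)).T  # (N,3)
    pts_cam = rays_cam * depth[:, None]                                       # camera-space points
    pts_h = torch.cat([pts_cam, ones[:, None]], dim=1)                        # homogeneous (N,4)
    return (c2w @ pts_h.T).T[:, :3]

def project(pts_w, K, w2c):
    """Project world points (N,3) to pixels (N,2) and camera-space depth (N,)."""
    ones = torch.ones(pts_w.shape[0], 1, device=pts_w.device)
    pts_cam = (w2c @ torch.cat([pts_w, ones], dim=1).T).T[:, :3]
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3], pts_cam[:, 2]

def correspondence_losses(pix_i, pix_j, depth_i, depth_j, K, K_inv, c2w_i, w2c_j):
    """Reprojection + depth consistency terms for matched pixels (pix_i <-> pix_j).

    depth_i / depth_j are depths rendered by the NeRF at those pixels, so both
    terms backpropagate into the radiance field's geometry.
    """
    pts_w = unproject(pix_i, depth_i, K_inv, c2w_i)       # 3D points from view i
    uv_j, z_j = project(pts_w, K, w2c_j)                  # reprojected into view j
    loss_reproj = (uv_j - pix_j).norm(dim=-1).mean()      # pixel reprojection error
    loss_depth = (z_j - depth_j).abs().mean()             # depth consistency error
    return loss_reproj, loss_depth
```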
Related papers
- RS-NeRF: Neural Radiance Fields from Rolling Shutter Images [30.719764073204423]
We present RS-NeRF, a method designed to synthesize distortion-free images from novel views using inputs affected by rolling-shutter (RS) distortions.
This involves a physical model that replicates the image formation process under RS conditions.
We further address the inherent shortcomings of the basic RS-NeRF model by delving into the RS characteristics and developing algorithms to enhance its functionality.
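As a rough illustration of the rolling-shutter image formation described above (not the paper's actual model), each image row can be assigned its own camera pose by interpolating between the poses at the start and end of the frame readout, and rays for that row are cast from the row-specific pose. The helper below is a hedged sketch with assumed names.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def per_row_poses(R_start, t_start, R_end, t_end, num_rows):
    """Interpolate a camera pose for every image row during readout."""
    alphas = np.linspace(0.0, 1.0, num_rows)                         # row r exposed at alpha_r
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([R_start, R_end]))
    rotations = slerp(alphas)                                        # per-row rotations
    translations = (1 - alphas)[:, None] * t_start + alphas[:, None] * t_end
    return rotations, translations

# Example: a small yaw and sideways motion during readout of a 480-row frame.
R0 = Rotation.identity()
R1 = Rotation.from_euler("y", 2.0, degrees=True)
rows_R, rows_t = per_row_poses(R0, np.zeros(3), R1, np.array([0.05, 0.0, 0.0]), 480)
```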
arXiv Detail & Related papers (2024-07-14T16:27:11Z)
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis to address these challenges.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
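A very rough sketch of the cascading idea as described, where the RGB output of one stage conditions the next stage's rendering; the module layout below is an assumption for illustration, not NeRF-VPT's actual architecture.

```python
import torch
import torch.nn as nn

class PromptedRenderStage(nn.Module):
    """One rendering stage conditioned on the previous stage's RGB output."""
    def __init__(self, feat_dim=64):
        super().__init__()
        # Encodes the previous render into a per-pixel prompt feature.
        self.prompt_encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        # Stand-in for a renderer that refines the image given the prompt.
        self.refiner = nn.Conv2d(feat_dim + 3, 3, kernel_size=3, padding=1)

    def forward(self, prev_rgb):
        prompt = self.prompt_encoder(prev_rgb)
        return torch.sigmoid(self.refiner(torch.cat([prompt, prev_rgb], dim=1)))

# Cascade: each stage consumes the previous stage's rendering as its prompt.
stages = nn.ModuleList([PromptedRenderStage() for _ in range(3)])
rgb = torch.rand(1, 3, 64, 64)        # stage-0 rendering (placeholder)
for stage in stages:
    rgb = stage(rgb)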
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
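A hedged sketch of such a two-step localization pipeline: an initial pose from 2D-3D matches via PnP with RANSAC, followed by rendering-based refinement. `render_fn` stands in for a differentiable renderer; the interface and loss are assumptions, not PNeRFLoc's implementation.

```python
import cv2
import numpy as np
import torch

def initial_pose(pts3d, pts2d, K):
    """Estimate a camera pose from matched 3D points and 2D pixels (PnP + RANSAC)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    assert ok, "PnP failed"
    return rvec, tvec, inliers

def refine_pose(rvec, tvec, query_img, render_fn, iters=100, lr=1e-3):
    """Refine the pose by gradient descent on a photometric loss."""
    pose = torch.tensor(np.concatenate([rvec, tvec]).ravel(),
                        dtype=torch.float32, requires_grad=True)
    opt = torch.optim.Adam([pose], lr=lr)
    target = torch.as_tensor(query_img, dtype=torch.float32)
    for _ in range(iters):
        opt.zero_grad()
        rendered = render_fn(pose)          # differentiable rendering at `pose`
        loss = torch.nn.functional.l1_loss(rendered, target)
        loss.backward()
        opt.step()
    return pose.detach()

# Tiny synthetic example for the PnP step: recover an identity pose.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts3d = np.random.uniform(-1, 1, (50, 3)) + np.array([0, 0, 5.0])
proj = (K @ pts3d.T).T
pts2d = proj[:, :2] / proj[:, 2:3]
rvec, tvec, inliers = initial_pose(pts3d, pts2d, K)   # expect rvec ~ 0, tvec ~ 0
```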
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- Neural Fields with Thermal Activations for Arbitrary-Scale Super-Resolution [56.089473862929886]
We present a novel way to design neural fields such that points can be queried with an adaptive Gaussian PSF.
With its theoretically guaranteed anti-aliasing, our method sets a new state of the art for arbitrary-scale single image super-resolution.
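The paper's contribution is a field that can be queried with an adaptive Gaussian PSF analytically, with guaranteed anti-aliasing; the snippet below only illustrates what such a query means by brute-force Monte Carlo averaging of an ordinary point-wise field, under assumed names (`field_fn`, `gaussian_psf_query`).

```python
import numpy as np

def gaussian_psf_query(field_fn, center, sigma, num_samples=256, rng=None):
    """Approximate E[f(x)] for x ~ N(center, sigma^2 I) by Monte Carlo."""
    rng = np.random.default_rng() if rng is None else rng
    offsets = rng.normal(scale=sigma, size=(num_samples, len(center)))
    samples = np.asarray(center)[None, :] + offsets
    return field_fn(samples).mean(axis=0)

# Example: a toy 2D field; a larger sigma acts like a stronger low-pass filter.
field = lambda xy: np.sin(20 * xy[:, 0]) * np.cos(20 * xy[:, 1])
sharp = gaussian_psf_query(field, center=[0.1, 0.2], sigma=0.001)
blurred = gaussian_psf_query(field, center=[0.1, 0.2], sigma=0.1)
```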
arXiv Detail & Related papers (2023-11-29T14:01:28Z)
- Federated Neural Radiance Fields [36.42289161746808]
We consider training NeRFs in a federated manner, whereby multiple compute nodes, each having acquired a distinct set of observations of the overall scene, learn a common NeRF in parallel.
Our contribution is the first federated learning algorithm for NeRF, which splits the training effort across multiple compute nodes and obviates the need to pool the images at a central node.
A technique based on low-rank decomposition of NeRF layers is introduced to reduce the bandwidth consumed when transmitting model parameters for aggregation.
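A minimal sketch of that bandwidth-saving idea: transmit a rank-r factorization of each layer's weight matrix instead of the full matrix. The truncated-SVD compression below is an illustrative stand-in, not the paper's exact decomposition or aggregation protocol.

```python
import numpy as np

def compress_layer(W, rank):
    """Return rank-`rank` factors (U_r, V_r) with W ~= U_r @ V_r."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]          # fold singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

def decompress_layer(U_r, V_r):
    return U_r @ V_r

# Example: a 256x256 NeRF MLP layer sent at rank 32 uses ~4x fewer parameters.
W = np.random.randn(256, 256).astype(np.float32)
U_r, V_r = compress_layer(W, rank=32)
sent = U_r.size + V_r.size               # 2 * 256 * 32 = 16384 vs 65536
approx_err = np.linalg.norm(W - decompress_layer(U_r, V_r)) / np.linalg.norm(W)
```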
arXiv Detail & Related papers (2023-05-02T02:33:22Z)
- GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency [31.22435282922934]
We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometry-aware consistency regularization.
We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
arXiv Detail & Related papers (2023-01-26T05:14:12Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- Estimating Neural Reflectance Field from Radiance Field using Tree Structures [29.431165709718794]
We present a new method for estimating the Neural Reflectance Field (NReF) of an object from a set of posed multi-view images under unknown lighting.
NReF represents the 3D geometry and appearance of an object in a disentangled manner, and is hard to estimate from images alone.
Our method solves this problem by exploiting the Neural Radiance Field (NeRF) as a proxy representation, from which we perform further decomposition.
arXiv Detail & Related papers (2022-10-09T10:21:31Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work, we address a clear limitation of the vanilla coarse-to-fine approach: it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
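A hedged sketch of a learned sample-proposal module in this spirit: a small network maps coarse per-ray weights to fine-pass sample depths and importances while keeping the pipeline differentiable. The parameterization below is an assumption for illustration, not the paper's actual proposer.

```python
import torch
import torch.nn as nn

class SampleProposer(nn.Module):
    def __init__(self, num_coarse=64, num_fine=128, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_coarse, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * num_fine),
        )
        self.num_fine = num_fine

    def forward(self, coarse_weights, near, far):
        """coarse_weights: (B, num_coarse) ray weights from the coarse pass."""
        out = self.net(coarse_weights)
        # Fractions in [0, 1] mapped to depths in [near, far]; sorted so the
        # fine network receives ordered samples along each ray.
        fracs = torch.sort(torch.sigmoid(out[:, :self.num_fine]), dim=-1).values
        depths = near + (far - near) * fracs
        importance = torch.softmax(out[:, self.num_fine:], dim=-1)
        return depths, importance                     # both differentiable

proposer = SampleProposer()
w = torch.rand(4, 64)                                 # coarse weights for 4 rays
t_fine, imp = proposer(w, near=2.0, far=6.0)
```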
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.