InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules
- URL: http://arxiv.org/abs/2308.13897v2
- Date: Sun, 24 Mar 2024 05:20:15 GMT
- Title: InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules
- Authors: Yanqi Bao, Tianyu Ding, Jing Huo, Wenbin Li, Yuxin Li, Yang Gao
- Abstract summary: Generalizing Neural Radiance Fields (NeRF) to new scenes is a significant challenge.
We introduce InsertNeRF, a method for INStilling gEneRalizabiliTy into NeRF.
- Score: 23.340064406356174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalizing Neural Radiance Fields (NeRF) to new scenes is a significant challenge that existing approaches struggle to address without extensive modifications to the vanilla NeRF framework. We introduce InsertNeRF, a method for INStilling gEneRalizabiliTy into NeRF. By utilizing multiple plug-and-play HyperNet modules, InsertNeRF dynamically tailors NeRF's weights to specific reference scenes, transforming multi-scale sampling-aware features into scene-specific representations. This novel design allows for more accurate and efficient representations of complex appearances and geometries. Experiments show that this method not only achieves superior generalization performance but also provides a flexible pathway for integration with other NeRF-like systems, even in sparse input settings. Code will be available at https://github.com/bbbbby-99/InsertNeRF.
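The central mechanism, plug-and-play HyperNet modules that generate NeRF's MLP weights from reference-scene features, can be sketched in a few lines. The following PyTorch snippet is a minimal illustration of that idea only; the module names, dimensions, and the pooled scene descriptor are assumptions, not the authors' implementation (see the repository above for the real one):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    """A linear layer whose weights are generated from a scene feature."""
    def __init__(self, scene_dim, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Hypernetwork: maps a scene descriptor to (weight, bias) of the layer.
        self.weight_gen = nn.Linear(scene_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(scene_dim, out_dim)

    def forward(self, x, scene_feat):
        # x: (N, in_dim) per-sample features; scene_feat: (scene_dim,)
        w = self.weight_gen(scene_feat).view(self.out_dim, self.in_dim)
        b = self.bias_gen(scene_feat)
        return F.linear(x, w, b)

class TinyHyperNeRF(nn.Module):
    """NeRF-style MLP whose layers are dynamically tailored per scene."""
    def __init__(self, scene_dim=64, pos_dim=63, hidden=128):
        super().__init__()
        self.l1 = HyperLinear(scene_dim, pos_dim, hidden)
        self.l2 = HyperLinear(scene_dim, hidden, hidden)
        self.out = HyperLinear(scene_dim, hidden, 4)  # RGB + density

    def forward(self, x, scene_feat):
        h = F.relu(self.l1(x, scene_feat))
        h = F.relu(self.l2(h, scene_feat))
        return self.out(h, scene_feat)

# Usage: scene_feat would come from aggregating reference-view features.
model = TinyHyperNeRF()
pts = torch.randn(1024, 63)         # positionally encoded sample points
scene_feat = torch.randn(64)        # pooled multi-view scene descriptor
rgb_sigma = model(pts, scene_feat)  # (1024, 4)
```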
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of Positional Encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- Fast Sparse View Guided NeRF Update for Object Reconfigurations [42.947608325321475]
We develop the first method for updating NeRFs in response to physical scene changes.
Our method takes only sparse new images as extra inputs and updates the pre-trained NeRF in around 1 to 2 minutes.
Our core idea is the use of a second helper NeRF to learn the local geometry and appearance changes.
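A minimal sketch of that helper-NeRF idea: keep the pre-trained field frozen and let a small second field take over inside the region that changed. The function and mask below are illustrative assumptions, not the paper's actual compositing scheme:

```python
import torch

def composite_fields(base_fn, helper_fn, region_fn, pts):
    # Query the frozen pre-trained NeRF everywhere, then let the small
    # helper field override it inside the region flagged as changed.
    out = base_fn(pts)                # (N, 4) RGB + density from base NeRF
    changed = region_fn(pts)          # (N,) bool mask of the updated region
    if changed.any():
        out[changed] = helper_fn(pts[changed])
    return out

# Toy usage with stand-in fields: the helper overrides a sphere at the origin.
base = lambda p: torch.cat([torch.sigmoid(p), torch.ones(len(p), 1)], dim=-1)
helper = lambda p: torch.zeros(p.shape[0], 4)
in_sphere = lambda p: p.norm(dim=-1) < 0.5
rgb_sigma = composite_fields(base, helper, in_sphere, torch.randn(2048, 3))
```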
arXiv Detail & Related papers (2024-03-16T22:00:16Z)
- HyperFields: Towards Zero-Shot Generation of NeRFs from Text [30.223443632782]
We introduce HyperFields, a method for generating text-conditioned Neural Radiance Fields (NeRFs) with a single forward pass.
Key to our approach are: (i) a dynamic hypernetwork, which learns a smooth mapping from text token embeddings to the space of NeRFs; (ii) NeRF distillation training, which distills scenes encoded in individual NeRFs into one dynamic hypernetwork.
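Point (ii) can be sketched as a distillation loop in which a text-conditioned hypernetwork student is fit to a frozen per-scene teacher NeRF. Everything below (shapes, the one-layer hypernetwork, the MSE objective on random encoded points) is an illustrative stand-in, not the HyperFields architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Teacher: an ordinary per-scene NeRF MLP (stand-in).
teacher = nn.Sequential(nn.Linear(63, 128), nn.ReLU(), nn.Linear(128, 4))

class TextConditionedField(nn.Module):
    """Field whose first-layer weights come from a text embedding,
    i.e. a one-layer stand-in for a dynamic hypernetwork."""
    def __init__(self, text_dim=512, pos_dim=63, hidden=128):
        super().__init__()
        self.w_gen = nn.Linear(text_dim, pos_dim * hidden)  # hypernetwork
        self.b_gen = nn.Linear(text_dim, hidden)
        self.head = nn.Linear(hidden, 4)
        self.pos_dim, self.hidden = pos_dim, hidden

    def forward(self, pts, text_emb):
        w = self.w_gen(text_emb).view(self.hidden, self.pos_dim)
        h = F.relu(F.linear(pts, w, self.b_gen(text_emb)))
        return self.head(h)

student = TextConditionedField()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
text_emb = torch.randn(512)          # stand-in for pooled token embeddings

# One distillation step: match the frozen teacher on random encoded points.
pts = torch.randn(4096, 63)
with torch.no_grad():
    target = teacher(pts)
loss = F.mse_loss(student(pts, text_emb), target)
opt.zero_grad(); loss.backward(); opt.step()
```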
arXiv Detail & Related papers (2023-10-26T00:36:03Z)
- Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts [88.23732496104667]
Cross-scene generalizable NeRF models have become a new spotlight of the NeRF field.
We bridge "neuralized" architectures with the powerful Mixture-of-Experts (MoE) idea from large language models.
Our proposed model, dubbed GNT with Mixture-of-View-Experts (GNT-MOVE), has experimentally shown state-of-the-art results when transferring to unseen scenes.
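The MoE idea transplanted here is the standard one from large language models: a gate routes each token to one of several expert MLPs. Below is a generic top-1 routing sketch in PyTorch; GNT-MOVE's actual design (e.g. its shared experts and routing regularizers) is not reproduced here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Top-1 mixture-of-experts over per-ray features: a learned gate
    picks one expert MLP per token, as in LLM-style MoE layers."""
    def __init__(self, dim=64, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_experts))

    def forward(self, x):                       # x: (N, dim)
        scores = F.softmax(self.gate(x), dim=-1)
        top_w, top_idx = scores.max(dim=-1)     # top-1 routing
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = top_idx == i
            if sel.any():                       # run each expert on its tokens
                out[sel] = top_w[sel, None] * expert(x[sel])
        return out

layer = MoELayer()
feats = torch.randn(2048, 64)   # e.g. per-ray transformer features
out = layer(feats)
```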
arXiv Detail & Related papers (2023-08-22T21:18:54Z)
- DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric annotated scenes without human intervention.
Our proposed method beats the SOTA point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z)
- A General Implicit Framework for Fast NeRF Composition and Rendering [40.07666955244417]
We propose a general implicit pipeline for composing NeRF objects quickly.
Our work introduces a new surface representation known as Neural Depth Fields (NeDF).
It leverages an intersection neural network to query NeRF for acceleration instead of depending on an explicit spatial structure.
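A rough sketch of what such an intersection network looks like: an MLP maps a ray (origin, direction) directly to a hit depth and hit probability, so the radiance field is queried near one predicted point instead of at dense samples along the ray. The architecture below is an assumption for illustration, not the NeDF model:

```python
import torch
import torch.nn as nn

class IntersectionNet(nn.Module):
    """Predicts the depth at which a ray first hits the object's surface,
    replacing dense per-ray sampling with a single learned query point."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))               # (depth, hit logit)

    def forward(self, origins, dirs):
        out = self.mlp(torch.cat([origins, dirs], dim=-1))
        depth = torch.relu(out[..., 0])         # non-negative hit depth
        hit = torch.sigmoid(out[..., 1])        # probability the ray hits
        return depth, hit

net = IntersectionNet()
origins = torch.randn(1024, 3)
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
depth, hit = net(origins, dirs)
surface_pts = origins + depth[:, None] * dirs   # query the NeRF only here
```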
arXiv Detail & Related papers (2023-08-09T02:27:23Z)
- Learning a Diffusion Prior for NeRFs [84.99454404653339]
We propose to use a diffusion model to generate NeRFs encoded on a regularized grid.
We show that our model can sample realistic NeRFs while also allowing conditional generation, given a certain observation as guidance.
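A hedged sketch of the training side of such a prior: treat each NeRF's regularized feature grid as the data sample and run a standard DDPM-style noise-prediction step on it. The stand-in Conv3d denoiser and omitted timestep conditioning are simplifications, not the paper's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in denoiser over NeRF feature grids (a real model would be a 3D UNet).
denoiser = nn.Conv3d(8, 8, kernel_size=3, padding=1)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

# A batch of NeRFs encoded as regularized feature grids: (B, C, D, H, W).
grids = torch.randn(4, 8, 16, 16, 16)

# One DDPM-style training step: predict the noise added at a random timestep.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

t = torch.randint(0, T, (grids.shape[0],))
noise = torch.randn_like(grids)
a = alphas_bar[t].view(-1, 1, 1, 1, 1)
noisy = a.sqrt() * grids + (1 - a).sqrt() * noise

loss = F.mse_loss(denoiser(noisy), noise)   # timestep conditioning omitted
opt.zero_grad(); loss.backward(); opt.step()
```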
arXiv Detail & Related papers (2023-04-27T19:24:21Z)
- FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization [32.1581416980828]
We present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods.
We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training.
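FreeNeRF's frequency regularization amounts to a curriculum mask over positional-encoding bands: low frequencies are visible from the start and higher bands are revealed linearly over training. A minimal sketch (the linear schedule follows the paper's description; the 6-dims-per-band layout is an assumption about the encoding):

```python
import torch

def freq_mask(num_freqs, step, total_steps, dims_per_freq=6):
    """Frequency curriculum: reveal positional-encoding bands linearly
    over training, masking high frequencies early.
    (dims_per_freq = 6 assumes sin/cos over x, y, z per band.)"""
    ratio = min(step / total_steps, 1.0)
    visible = ratio * num_freqs                     # fractional band count
    mask = torch.zeros(num_freqs)
    mask[: int(visible)] = 1.0
    if int(visible) < num_freqs:                    # partially open next band
        mask[int(visible)] = visible - int(visible)
    return mask.repeat_interleave(dims_per_freq)    # per-PE-entry mask

# Usage: multiply the encoded input by the mask before feeding the MLP.
num_freqs, step, total = 10, 2500, 10000
pe = torch.randn(1024, num_freqs * 6)               # encoded points
masked_pe = pe * freq_mask(num_freqs, step, total)
```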
arXiv Detail & Related papers (2023-03-13T18:59:03Z)
- NeRF-RPN: A general framework for object detection in NeRFs [54.54613914831599]
NeRF-RPN aims to detect all bounding boxes of objects in a scene.
NeRF-RPN is a general framework and can be applied to detect objects without class labels.
arXiv Detail & Related papers (2022-11-21T17:02:01Z)
- SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
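One common way to obtain such geometry pseudo labels is to reproject the known view's depth into an unseen view with a pinhole camera model. The sketch below illustrates that reprojection only; it omits occlusion handling and is not SinNeRF's actual label-propagation pipeline:

```python
import torch

def warp_depth(depth, K, K_inv, T_src_to_tgt):
    """Reproject a source-view depth map into a target view to obtain
    geometry pseudo labels (a minimal pinhole sketch; z-buffering and
    occlusion handling are omitted).

    depth: (H, W); K, K_inv: (3, 3) intrinsics; T_src_to_tgt: (4, 4) pose.
    Returns target-view pixel coords (H*W, 2) and pseudo depths (H*W,)."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], -1).reshape(-1, 3).float()
    pts = (K_inv @ pix.T) * depth.reshape(-1)        # back-project, (3, HW)
    pts_h = torch.cat([pts, torch.ones(1, pts.shape[1])], dim=0)
    pts_tgt = (T_src_to_tgt @ pts_h)[:3]             # move into target frame
    proj = K @ pts_tgt
    uv = (proj[:2] / proj[2].clamp(min=1e-6)).T      # (HW, 2) target pixels
    return uv, pts_tgt[2]                            # pseudo-depth labels

# Toy usage with identity pose and simple intrinsics.
K = torch.tensor([[100., 0., 32.], [0., 100., 32.], [0., 0., 1.]])
uv, pseudo_depth = warp_depth(torch.ones(64, 64), K, K.inverse(), torch.eye(4))
```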
arXiv Detail & Related papers (2022-04-02T19:32:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.