Instant Continual Learning of Neural Radiance Fields
- URL: http://arxiv.org/abs/2309.01811v2
- Date: Wed, 6 Sep 2023 02:10:37 GMT
- Title: Instant Continual Learning of Neural Radiance Fields
- Authors: Ryan Po, Zhengyang Dong, Alexander W. Bergman, Gordon Wetzstein
- Abstract summary: Neural radiance fields (NeRFs) have emerged as an effective method for novel-view synthesis and 3D scene reconstruction.
We propose a continual learning framework for training NeRFs that leverages replay-based methods combined with a hybrid explicit-implicit scene representation.
Our method outperforms previous methods in reconstruction quality when trained in a continual setting, while having the additional benefit of being an order of magnitude faster.
- Score: 78.08008474313809
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRFs) have emerged as an effective method for
novel-view synthesis and 3D scene reconstruction. However, conventional
training methods require access to all training views during scene
optimization. This assumption may be prohibitive in continual learning
scenarios, where new data is acquired in a sequential manner and a continuous
update of the NeRF is desired, as in automotive or remote sensing applications.
When naively trained in such a continual setting, traditional scene
representation frameworks suffer from catastrophic forgetting, where previously
learned knowledge is corrupted after training on new data. Prior works in
alleviating forgetting with NeRFs suffer from low reconstruction quality and
high latency, making them impractical for real-world application. We propose a
continual learning framework for training NeRFs that leverages replay-based
methods combined with a hybrid explicit-implicit scene representation. Our
method outperforms previous methods in reconstruction quality when trained in a
continual setting, while having the additional benefit of being an order of
magnitude faster.
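To make the replay-based idea concrete, the sketch below shows one way a continual NeRF training step could mix rays from newly arrived views with rays replayed from earlier ones. It is a minimal illustration, not the authors' code: the `RayReplayBuffer` class, the `model(origins, dirs)` rendering interface, and the `replay_ratio` parameter are all assumptions, and the paper additionally relies on a hybrid explicit-implicit scene representation to reach its reported speed.

```python
import torch

# Hypothetical replay buffer holding (origin, direction, rgb) ray samples
# from previously seen views. Reservoir-style overwriting keeps it bounded.
class RayReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.rays = []  # list of (origin, direction, rgb) tuples

    def add(self, origins, dirs, rgbs):
        for sample in zip(origins, dirs, rgbs):
            if len(self.rays) < self.capacity:
                self.rays.append(sample)
            else:
                idx = torch.randint(0, self.capacity, (1,)).item()
                self.rays[idx] = sample

    def sample(self, n: int):
        idx = torch.randint(0, len(self.rays), (n,)).tolist()
        batch = [self.rays[i] for i in idx]
        return tuple(torch.stack(t) for t in zip(*batch))


def train_step(model, optimizer, new_batch, buffer, replay_ratio=0.5):
    """One optimization step mixing new rays with replayed rays."""
    o_new, d_new, c_new = new_batch
    n_replay = int(replay_ratio * len(o_new))
    if buffer.rays and n_replay > 0:
        o_old, d_old, c_old = buffer.sample(n_replay)
        origins = torch.cat([o_new, o_old])
        dirs = torch.cat([d_new, d_old])
        targets = torch.cat([c_new, c_old])
    else:
        origins, dirs, targets = o_new, d_new, c_new

    pred = model(origins, dirs)  # rendered RGB per ray
    loss = torch.nn.functional.mse_loss(pred, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    buffer.add(o_new, d_new, c_new)  # remember the newly seen views
    return loss.item()
```

Replaying stored rays from earlier views keeps supervision on past scene regions in the loss, which is what counteracts catastrophic forgetting as new views stream in.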
Related papers
- FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors [6.729663383705042]
We introduce FrugalNeRF, a novel few-shot NeRF framework that leverages weight-sharing voxels across multiple scales to efficiently represent scene details.
Our key contribution is a cross-scale geometric adaptation scheme that selects pseudo ground truth depth based on reprojection errors across scales.
Experiments on LLFF, DTU, and RealEstate-10K show that FrugalNeRF outperforms other few-shot NeRF methods while significantly reducing training time.
arXiv Detail & Related papers (2024-10-21T17:59:53Z)
- GeoTransfer: Generalizable Few-Shot Multi-View Reconstruction via Transfer Learning [8.452349885923507]
We present a novel approach for sparse 3D reconstruction by leveraging the expressive power of Neural Radiance Fields (NeRFs).
Our proposed method offers the best of both worlds by transferring the information encoded in NeRF features to derive an accurate occupancy field representation.
We evaluate our approach on the DTU dataset and demonstrate state-of-the-art performance in terms of reconstruction accuracy.
arXiv Detail & Related papers (2024-08-27T01:28:15Z)
- SparseCraft: Few-Shot Neural Reconstruction through Stereopsis Guided Geometric Linearization [7.769607568805291]
We present a novel approach for recovering 3D shape and view dependent appearance from a few colored images.
Our method learns an implicit neural representation in the form of a Signed Distance Function (SDF) and a radiance field.
Key to our contribution is a novel implicit neural shape function learning strategy that encourages our SDF field to be as linear as possible near the level-set (a toy regularizer in this spirit is sketched after this list).
arXiv Detail & Related papers (2024-07-19T12:36:36Z)
- Reusable Architecture Growth for Continual Stereo Matching [92.36221737921274]
We introduce a Reusable Architecture Growth (RAG) framework to learn new scenes continually in both supervised and self-supervised manners.
RAG maintains high reusability during growth by reusing previous units while achieving good performance.
We also present a Scene Router module to adaptively select the scene-specific architecture path at inference.
arXiv Detail & Related papers (2024-03-30T13:24:58Z)
- Self-Evolving Neural Radiance Fields [31.124406548504794]
We propose a novel framework, dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), that applies self-training to neural radiance fields (NeRFs).
We formulate few-shot NeRF into a teacher-student framework to guide the network to learn a more robust representation of the scene.
We show that applying our self-training framework to existing models improves the quality of the rendered images and achieves state-of-the-art performance in multiple settings.
arXiv Detail & Related papers (2023-12-02T02:28:07Z)
- OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on the fly and is capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up over the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z)
- Effective and Efficient Training for Sequential Recommendation using Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that models enhanced with our method can achieve performance exceeding or very close to that of the state-of-the-art BERT4Rec.
arXiv Detail & Related papers (2022-07-06T13:06:31Z)
- UNeRF: Time and Memory Conscious U-Shaped Network for Training Neural Radiance Fields [16.826691448973367]
Neural Radiance Fields (NeRFs) increase reconstruction detail for novel view synthesis and scene reconstruction.
However, the increased resolution and model-free nature of such neural fields come at the cost of high training times and excessive memory requirements.
We propose a method to exploit the redundancy of NeRF's sample-based computations by partially sharing evaluations across neighboring sample points.
arXiv Detail & Related papers (2022-06-23T19:57:07Z)
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
NeurMAP can be applied to existing deblurring neural networks and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
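The SparseCraft entry above mentions encouraging the SDF to behave linearly near its zero level-set. Below is a toy regularizer in that spirit (the sketch referenced from that entry): it penalizes the SDF's deviation from its own first-order Taylor expansion at sampled points. The name `sdf_linearity_loss`, the probe step `eps`, and the random probe directions are illustrative assumptions, not SparseCraft's actual stereopsis-guided formulation.

```python
import torch

def sdf_linearity_loss(sdf, points, eps=1e-2):
    # Penalize deviation from the first-order Taylor expansion:
    # f(x + eps*d) ~= f(x) + eps * <grad f(x), d> for random unit directions d.
    points = points.detach().requires_grad_(True)
    f = sdf(points)                              # (N, 1) signed distances
    grad = torch.autograd.grad(f.sum(), points, create_graph=True)[0]  # (N, 3)
    d = torch.nn.functional.normalize(torch.randn_like(points), dim=-1)
    f_shift = sdf(points + eps * d)              # probe a small step away
    taylor = f + eps * (grad * d).sum(dim=-1, keepdim=True)
    return ((f_shift - taylor) ** 2).mean()

# Toy usage with a small MLP standing in for the SDF network:
sdf_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(), torch.nn.Linear(64, 1))
loss = sdf_linearity_loss(sdf_net, torch.randn(1024, 3))
```

Where the SDF is exactly linear, the Taylor residual vanishes, so driving this loss to zero pushes the field toward locally planar behavior around the sampled points.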
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.