TimeNeRF: Building Generalizable Neural Radiance Fields across Time from Few-Shot Input Views
- URL: http://arxiv.org/abs/2507.13929v1
- Date: Fri, 18 Jul 2025 14:07:02 GMT
- Title: TimeNeRF: Building Generalizable Neural Radiance Fields across Time from Few-Shot Input Views
- Authors: Hsiang-Hui Hung, Huu-Phu Do, Yung-Hui Li, Ching-Chun Huang
- Abstract summary: TimeNeRF is a generalizable neural rendering approach that synthesizes novel views at arbitrary viewpoints and arbitrary times. We show that TimeNeRF can render novel views in a few-shot setting without per-scene optimization. It excels in creating realistic novel views that transition smoothly across different times.
- Score: 6.319765967588987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present TimeNeRF, a generalizable neural rendering approach for rendering novel views at arbitrary viewpoints and at arbitrary times, even with few input views. For real-world applications, it is expensive to collect multiple views and inefficient to re-optimize for unseen scenes. Moreover, as the digital realm, particularly the metaverse, strives for increasingly immersive experiences, the ability to model 3D environments that naturally transition between day and night becomes paramount. While current techniques based on Neural Radiance Fields (NeRF) have shown remarkable proficiency in synthesizing novel views, the exploration of NeRF's potential for temporal 3D scene modeling remains limited, with no dedicated datasets available for this purpose. To this end, our approach harnesses the strengths of multi-view stereo, neural radiance fields, and disentanglement strategies across diverse datasets. This equips our model with the capability for generalizability in a few-shot setting, allows us to construct an implicit content radiance field for scene representation, and further enables the building of neural radiance fields at any arbitrary time. Finally, we synthesize novel views of that time via volume rendering. Experiments show that TimeNeRF can render novel views in a few-shot setting without per-scene optimization. Most notably, it excels in creating realistic novel views that transition smoothly across different times, adeptly capturing intricate natural scene changes from dawn to dusk.
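The final step described above, synthesizing a view by volume rendering, follows the standard NeRF compositing rule: densities and colors sampled along each camera ray are alpha-composited into one pixel. The sketch below is a generic illustration of that rule only, not the authors' implementation; the sampling bounds and the stand-in color/density values are assumptions.

```python
import numpy as np

def composite_ray(rgb, sigma, t_vals):
    """Alpha-composite per-sample colors along one ray into a pixel color.

    rgb:    (N, 3) colors predicted at samples along the ray
    sigma:  (N,)   volume densities at those samples
    t_vals: (N,)   sample depths along the ray (increasing)
    """
    # Distances between adjacent samples; the last interval is treated as open-ended.
    deltas = np.diff(t_vals, append=1e10)
    # Per-segment opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alpha = 1.0 - np.exp(-sigma * deltas)
    # Transmittance: probability the ray reaches sample i without being absorbed earlier.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans
    color = (weights[:, None] * rgb).sum(axis=0)   # final pixel color
    acc = weights.sum()                            # accumulated opacity
    return color, acc

# Toy usage with placeholder values standing in for a learned radiance field.
t_vals = np.linspace(2.0, 6.0, 64)
rgb = np.random.rand(64, 3)        # stand-in for field-predicted colors
sigma = np.random.rand(64) * 5.0   # stand-in for field-predicted densities
pixel, opacity = composite_ray(rgb, sigma, t_vals)
```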
Related papers
- Incremental Multi-Scene Modeling via Continual Neural Graphics Primitives [17.411855207380256]
We introduce Continual-Neural Graphics Primitives (C-NGP), a novel continual learning framework that integrates multiple scenes incrementally into a single neural radiance field. C-NGP adapts to new scenes without requiring access to old data. We demonstrate that C-NGP can accommodate multiple scenes without increasing the parameter count, producing high-quality novel-view renderings on synthetic and real datasets.
arXiv Detail & Related papers (2024-11-29T18:05:16Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
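The idea in this summary, marching secondary rays through the learned field rather than querying a view-dependent appearance network at the primary samples, can be illustrated generically: reflect the view direction about a surface normal and composite the field's per-sample outputs along the reflected ray. The code below is only such a generic sketch; the `query_field` interface, bounds, and feature sizes are hypothetical, not NeRF-Casting's actual method.

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror the incoming view direction about the surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def trace_reflection(point, view_dir, normal, query_field, n_samples=32):
    """March a secondary ray from `point` along the reflected direction and
    alpha-composite the field's per-sample features (hypothetical field API)."""
    direction = reflect(view_dir, normal)
    t_vals = np.linspace(0.05, 4.0, n_samples)            # assumed near/far bounds
    samples = point[None, :] + t_vals[:, None] * direction[None, :]
    feats, sigma = query_field(samples)                    # (N, C) features, (N,) densities
    deltas = np.diff(t_vals, append=1e10)
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans
    return (weights[:, None] * feats).sum(axis=0)          # composited feature vector

# Toy stand-in for a learned field: constant features, mild density everywhere.
dummy_field = lambda pts: (np.ones((len(pts), 8)), np.full(len(pts), 0.5))
feat = trace_reflection(np.zeros(3), np.array([0.0, 0.0, -1.0]),
                        np.array([0.0, 0.0, 1.0]), dummy_field)
```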
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences [53.8501224122952]
We propose a novel neural video-based radiance fields (NeVRF) representation.
NeVRF marries neural radiance fields with image-based rendering to support photo-realistic novel view synthesis on long-duration dynamic inward-looking scenes.
Our experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.
arXiv Detail & Related papers (2023-12-10T11:14:30Z)
- Strata-NeRF: Neural Radiance Fields for Stratified Scenes [29.58305675148781]
In the real world, we may capture a scene at multiple levels, resulting in a layered capture.
We propose Strata-NeRF, a single neural radiance field that implicitly captures a scene with multiple levels.
We find that Strata-NeRF effectively captures stratified scenes, minimizes artifacts, and synthesizes high-fidelity views.
arXiv Detail & Related papers (2023-08-20T18:45:43Z)
- Template-free Articulated Neural Point Clouds for Reposable View Synthesis [11.535440791891217]
We present a novel method to jointly learn a Dynamic NeRF and an associated skeletal model even from sparse multi-view video.
Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses.
arXiv Detail & Related papers (2023-05-30T14:28:08Z)
- Neural Radiance Fields (NeRFs): A Review and Some Recent Developments [0.0]
Neural Radiance Field (NeRF) is a framework that represents a 3D scene in the weights of a fully connected neural network.
NeRFs have become a popular field of research as recent developments have been made that expand the performance and capabilities of the base framework.
arXiv Detail & Related papers (2023-04-30T03:23:58Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- Unsupervised Discovery and Composition of Object Light Fields [57.198174741004095]
We propose to represent objects in an object-centric, compositional scene representation as light fields.
We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields.
arXiv Detail & Related papers (2022-05-08T17:50:35Z)
- BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering [145.95688637309746]
We introduce BungeeNeRF, a progressive neural radiance field that achieves level-of-detail rendering across drastically varied scales.
We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale scenes with drastically varying views on multiple data sources.
arXiv Detail & Related papers (2021-12-10T13:16:21Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
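Generalizable few-view methods in this family typically condition the radiance field on features gathered from the input images, for example by projecting each 3D sample into the source views and pooling the sampled features. The sketch below illustrates only that projection-and-pooling idea; the pinhole camera convention, nearest-neighbor sampling, and mean/variance pooling are assumptions for illustration, not MVSNeRF's exact architecture.

```python
import numpy as np

def project(point, K, R, t):
    """Project a 3D world point into pixel coordinates of one source view.
    Assumes a simple pinhole camera: x_cam = R @ X + t, pixel = K @ x_cam."""
    x_cam = R @ point + t
    uvw = K @ x_cam
    return uvw[:2] / uvw[2], x_cam[2]           # (u, v) pixel, camera-space depth

def aggregate_views(point, feature_maps, cams):
    """Sample a per-view feature at the projection of `point` and pool
    across views with mean and variance (a common generalizable-NeRF cue)."""
    feats = []
    for fmap, (K, R, t) in zip(feature_maps, cams):
        (u, v), depth = project(point, K, R, t)
        h, w = fmap.shape[:2]
        if depth <= 0 or not (0 <= u < w and 0 <= v < h):
            continue                             # point not visible in this view
        feats.append(fmap[int(v), int(u)])       # nearest-neighbor sample for brevity
    feats = np.stack(feats) if feats else np.zeros((1, feature_maps[0].shape[-1]))
    return np.concatenate([feats.mean(0), feats.var(0)])

# Toy usage with three random "feature maps" and hypothetical camera poses.
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
cams = [(K, np.eye(3), np.array([0.0, 0.0, 2.0]))] * 3
feature_maps = [np.random.rand(64, 64, 16) for _ in range(3)]
cond = aggregate_views(np.array([0.0, 0.0, 1.0]), feature_maps, cams)
```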
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale 3D scenes.
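The parametrization issue mentioned here concerns unbounded backgrounds in 360° captures. A common remedy, associated with NeRF++, is to re-parametrize background points by their inverse distance so the network only ever sees bounded inputs; the sketch below illustrates that kind of contraction under that assumption and is not the paper's code.

```python
import numpy as np

def inverted_sphere_coords(x):
    """Map a point outside the unit sphere to a bounded 4D code (x', y', z', 1/r).

    Far-away points collapse toward 1/r -> 0, so a background MLP receives only
    bounded inputs (an inverted-sphere style parametrization, assumed here).
    """
    r = np.linalg.norm(x)
    assert r >= 1.0, "use the ordinary foreground parametrization inside the unit sphere"
    return np.concatenate([x / r, [1.0 / r]])

print(inverted_sphere_coords(np.array([0.0, 0.0, 2.0])))      # -> [0. 0. 1. 0.5]
print(inverted_sphere_coords(np.array([0.0, 300.0, 400.0])))  # 1/r = 0.002, stays bounded
```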
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.