Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field
- URL: http://arxiv.org/abs/2505.10049v2
- Date: Thu, 22 May 2025 02:33:32 GMT
- Title: Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field
- Authors: Jinlong Fan, Xuepu Zeng, Jing Zhang, Mingming Gong, Yuxiang Yang, Dacheng Tao
- Abstract summary: This survey presents a systematic analysis of over 200 papers focused on dynamic scene representation using radiance fields. We organize diverse methodological approaches under a unified representational framework, concluding with a critical examination of persistent challenges and promising research directions.
- Score: 85.12359852781216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic scene representation and reconstruction have undergone transformative advances in recent years, catalyzed by breakthroughs in neural radiance fields and 3D Gaussian splatting techniques. While initially developed for static environments, these methodologies have rapidly evolved to address the complexities inherent in 4D dynamic scenes through an expansive body of research. Coupled with innovations in differentiable volumetric rendering, these approaches have significantly enhanced the quality of motion representation and dynamic scene reconstruction, thereby garnering substantial attention from the computer vision and graphics communities. This survey presents a systematic analysis of over 200 papers focused on dynamic scene representation using radiance fields, spanning the spectrum from implicit neural representations to explicit Gaussian primitives. We categorize and evaluate these works through multiple critical lenses: motion representation paradigms, reconstruction techniques for varied scene dynamics, auxiliary information integration strategies, and regularization approaches that ensure temporal consistency and physical plausibility. We organize diverse methodological approaches under a unified representational framework, concluding with a critical examination of persistent challenges and promising research directions. By providing this comprehensive overview, we aim to establish a definitive reference for researchers entering this rapidly evolving field while offering experienced practitioners a systematic understanding of both conceptual principles and practical frontiers in dynamic scene reconstruction.
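Both families of methods covered by the survey share the differentiable volume rendering step mentioned in the abstract. As orientation for readers new to the area, the sketch below shows the standard per-ray compositing quadrature in plain NumPy; the function and variable names are illustrative placeholders, not taken from any particular codebase.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Standard volume rendering quadrature used by NeRF-style methods.

    sigmas: (N,) densities at N samples along one ray
    colors: (N, 3) RGB values at those samples
    deltas: (N,) distances between consecutive samples
    Returns the composited RGB color for the ray.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas            # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)
```

3D Gaussian splatting replaces the ray-marched samples with sorted, projected Gaussian primitives, but the same front-to-back alpha compositing drives its differentiable rasterizer.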
Related papers
- Motion Generation: A Survey of Generative Approaches and Benchmarks [1.4254358932994455]
We provide an in-depth categorization of motion generation methods based on their underlying generative strategies. Our main focus is on papers published in top-tier venues since 2023, reflecting the most recent advancements in the field. We analyze architectural principles, conditioning mechanisms, and generation settings, and compile a detailed overview of the evaluation metrics and datasets used across the literature.
arXiv Detail & Related papers (2025-07-07T19:04:56Z)
- Dynamic Scene Reconstruction: Recent Advance in Real-time Rendering and Streaming [7.250878248686215]
Representing and rendering dynamic scenes from 2D images is a fundamental yet challenging problem in computer vision and graphics. This survey provides a comprehensive review of the evolution and advancements in dynamic scene representation and rendering. We systematically summarize existing approaches, categorize them according to their core principles, compile relevant datasets, compare the performance of various methods on these benchmarks, and explore the challenges and future research directions in this rapidly evolving field.
arXiv Detail & Related papers (2025-03-11T08:29:41Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way for learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- Eye Motion Matters for 3D Face Reconstruction [13.633246294557765]
We introduce an Eye Landmark Adjustment Module, complemented by a Local Dynamic Loss, to capture the dynamic features of the eye area.
Our module allows for flexible adjustment of landmarks, resulting in accurate recreation of various eye states.
arXiv Detail & Related papers (2024-01-18T01:47:55Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model to a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
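The distillation step described above is commonly implemented with a score-distillation-style objective, where gradients from the frozen diffusion prior guide the 4D representation. The PyTorch fragment below is a minimal sketch of that generic pattern, not the authors' actual implementation; `render_4d`, `diffusion_eps`, the noise schedule, and all hyperparameters are placeholders.

```python
import torch

def sds_step(render_4d, diffusion_eps, cam_pose, t_frame, num_steps=1000):
    """One score-distillation step (generic sketch, not the paper's code).

    render_4d:     differentiable renderer, (pose, time) -> (3, H, W) image
    diffusion_eps: frozen diffusion model predicting noise eps(x_t, t)
    """
    img = render_4d(cam_pose, t_frame)              # rendered view
    t = torch.randint(1, num_steps, (1,))           # random noise level
    alpha_bar = torch.cos(t / num_steps * torch.pi / 2) ** 2  # toy schedule
    noise = torch.randn_like(img)
    noisy = alpha_bar.sqrt() * img + (1 - alpha_bar).sqrt() * noise
    with torch.no_grad():
        eps_pred = diffusion_eps(noisy, t)          # frozen prior
    # SDS gradient: (eps_pred - noise), back-propagated through the renderer
    img.backward(gradient=(eps_pred - noise))
```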
- Single-shot Tomography of Discrete Dynamic Objects [1.1407697960152927]
We present a novel method for the reconstruction of high-resolution temporal images in dynamic tomographic imaging.
The implications of this research extend to improved visualization and analysis of dynamic processes in tomographic imaging.
arXiv Detail & Related papers (2023-11-09T10:52:02Z)
- DynaVol: Unsupervised Learning for Dynamic Scenes through Object-Centric Voxelization [67.85434518679382]
We present DynaVol, a 3D scene generative model that unifies geometric structures and object-centric learning.
The key idea is to perform object-centric voxelization to capture the 3D nature of the scene.
The voxel features evolve over time through a canonical-space deformation function, forming the basis for global representation learning.
arXiv Detail & Related papers (2023-04-30T05:29:28Z)
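A canonical-space deformation function of the kind DynaVol describes typically maps a query point and a timestamp to an offset into a shared canonical frame. The snippet below sketches that general pattern with a small MLP; the layer sizes, time encoding, and names are illustrative assumptions, not DynaVol's actual network.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Generic canonical-space deformation: (x, t) -> canonical point.

    Illustrative sketch only; the architecture is an assumption,
    not the DynaVol model.
    """
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),      # predicted offset dx
        )

    def forward(self, x, t):
        # x: (N, 3) points observed at time t; t: (N, 1) normalized timestamps
        dx = self.mlp(torch.cat([x, t], dim=-1))
        return x + dx                  # canonical-space coordinates
```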
- Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z)
- Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [70.76742458931935]
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
arXiv Detail & Related papers (2020-11-26T01:23:44Z)
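The time-variant continuous function in Neural Scene Flow Fields has, at its core, the interface sketched below: a coordinate network queried at a 3D point and a time, returning appearance, density, and scene flow. This is a schematic sketch of that interface, not the authors' released model; all dimensions and names are placeholders, and the real method additionally uses positional encodings and separate static/dynamic branches.

```python
import torch
import torch.nn as nn

class TimeVariantField(nn.Module):
    """Schematic NSFF-style field: (x, t) -> (rgb, density, scene flow)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rgb = nn.Linear(hidden, 3)       # appearance
        self.sigma = nn.Linear(hidden, 1)     # volume density
        self.flow = nn.Linear(hidden, 6)      # 3D flow to frames t-1 and t+1

    def forward(self, x, t):
        # x: (N, 3) query points; t: (N, 1) normalized timestamps
        h = self.trunk(torch.cat([x, t], dim=-1))
        return self.rgb(h), self.sigma(h), self.flow(h)
```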
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
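The split-convolutional GRU named in the entry above is specific to that paper and its exact design is not described here. As background for readers unfamiliar with recurrent units over images, the following is a standard convolutional GRU cell, which the SC-GRU modifies; everything below is a generic sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Standard convolutional GRU cell (background sketch, not SC-GRU)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        pad = k // 2
        # Update and reset gates computed from input and hidden state
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=pad)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=pad)

    def forward(self, x, h):
        # x: (B, in_ch, H, W) input frame; h: (B, hid_ch, H, W) hidden state
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde   # updated hidden state
```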