Neural Trajectory Fields for Dynamic Novel View Synthesis
- URL: http://arxiv.org/abs/2105.05994v1
- Date: Wed, 12 May 2021 22:38:30 GMT
- Title: Neural Trajectory Fields for Dynamic Novel View Synthesis
- Authors: Chaoyang Wang, Ben Eckart, Simon Lucey, Orazio Gallo
- Abstract summary: We introduce DCT-NeRF, a coordinate-based neural representation for dynamic scenes.
We learn smooth and stable trajectories over the input sequence for each point in space.
This allows us to enforce consistency between any two frames in the sequence, which results in high quality reconstruction.
- Score: 40.9665251865609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent approaches to render photorealistic views from a limited set of
photographs have pushed the boundaries of our interactions with pictures of
static scenes. The ability to recreate moments, that is, time-varying
sequences, is perhaps an even more interesting scenario, but it remains largely
unsolved. We introduce DCT-NeRF, a coordinate-based neural representation for
dynamic scenes. DCT-NeRF learns smooth and stable trajectories over the input
sequence for each point in space. This allows us to enforce consistency between
any two frames in the sequence, which results in high quality reconstruction,
particularly in dynamic regions.
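As a rough illustration of the idea, each point's trajectory can be expressed as a displacement built from a small number of DCT basis functions, so its position at any frame follows in closed form from the learned coefficients. The sketch below is only an illustrative reading of the abstract, not the authors' code; the number of coefficients, the DCT-II basis choice, and the NumPy implementation are assumptions.

```python
# Illustrative sketch (not the authors' implementation): a per-point trajectory
# parameterized by DCT coefficients, as described in the abstract. The number of
# coefficients K and the DCT-II basis are assumptions for this example.
import numpy as np

def dct_trajectory(x0, coeffs, t, T):
    """Return the position of a point at frame t.

    x0     : (3,)   canonical 3D position of the point
    coeffs : (K, 3) learned DCT coefficients describing its motion
    t      : int    frame index in [0, T - 1]
    T      : int    number of frames in the input sequence
    """
    K = coeffs.shape[0]
    k = np.arange(1, K + 1)
    # DCT-II basis functions evaluated at frame t (one value per frequency k).
    basis = np.cos(np.pi / T * (t + 0.5) * k)   # shape (K,)
    displacement = basis @ coeffs               # shape (3,)
    return x0 + displacement

# Example: a point whose motion is dominated by the lowest frequency.
x0 = np.array([0.1, 0.0, 1.5])
coeffs = np.zeros((4, 3))
coeffs[0] = [0.05, 0.0, 0.0]
for t in (0, 15, 29):
    print(t, dct_trajectory(x0, coeffs, t, T=30))
```

Because the same small set of coefficients describes the trajectory at every frame, positions at any two frames can be compared directly, which is what allows consistency to be enforced between arbitrary pairs of frames as the abstract states.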
Related papers
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z) - DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
arXiv Detail & Related papers (2024-03-15T08:48:37Z) - NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences [53.8501224122952]
We propose a novel neural video-based radiance fields (NeVRF) representation.
NeVRF marries neural radiance fields with image-based rendering to support photo-realistic novel view synthesis on long-duration dynamic inward-looking scenes.
Our experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.
arXiv Detail & Related papers (2023-12-10T11:14:30Z) - DynMF: Neural Motion Factorization for Real-time Dynamic View Synthesis
with 3D Gaussian Splatting [35.69069478773709]
We argue that the per-point motions of a dynamic scene can be decomposed into a small set of explicit or learned trajectories.
Our representation is interpretable, efficient, and expressive enough to offer real-time view synthesis of complex dynamic scene motions.
arXiv Detail & Related papers (2023-11-30T18:59:11Z) - BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling [43.246536947828844]
We propose a framework that factorizes time and space by formulating a scene as a composition of bandlimited, high-dimensional signals.
We demonstrate compelling results across complex dynamic scenes that involve changes in lighting, texture and long-range dynamics.
arXiv Detail & Related papers (2023-02-27T06:40:32Z) - DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z) - Dynamic Scene Novel View Synthesis via Deferred Spatio-temporal
Consistency [18.036582072609882]
Structure from motion (SfM) and novel view synthesis (NVS) are presented.
SfM produces noisy, temporally unstable sparse point clouds, resulting in NVS with temporally inconsistent effects.
We demonstrate our algorithm on real-world dynamic scenes against classic and more recent learning-based baseline approaches.
arXiv Detail & Related papers (2021-09-02T15:29:45Z) - D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z) - Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [70.76742458931935]
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
arXiv Detail & Related papers (2020-11-26T01:23:44Z)