Forward Flow for Novel View Synthesis of Dynamic Scenes
- URL: http://arxiv.org/abs/2309.17390v1
- Date: Fri, 29 Sep 2023 16:51:06 GMT
- Title: Forward Flow for Novel View Synthesis of Dynamic Scenes
- Authors: Xiang Guo, Jiadai Sun, Yuchao Dai, Guanying Chen, Xiaoqing Ye, Xiao
Tan, Errui Ding, Yumeng Zhang, Jingdong Wang
- Abstract summary: We propose a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping.
Our method outperforms existing methods in both novel view rendering and motion modeling.
- Score: 97.97012116793964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a neural radiance field (NeRF) approach for novel view
synthesis of dynamic scenes using forward warping. Existing methods often adopt
a static NeRF to represent the canonical space, and render dynamic images at
other time steps by mapping the sampled 3D points back to the canonical space
with the learned backward flow field. However, this backward flow field is
non-smooth and discontinuous, which is difficult to fit with commonly used
smooth motion models. To address this problem, we propose to estimate the
forward flow field and directly warp the canonical radiance field to other time
steps. Such a forward flow field is smooth and continuous within the object
region, which benefits the motion model learning. To achieve this goal, we
represent the canonical radiance field with voxel grids to enable efficient
forward warping, and propose a differentiable warping process, including an
average splatting operation and an inpaint network, to resolve the many-to-one
and one-to-many mapping issues. Thorough experiments show that our method
outperforms existing methods in both novel view rendering and motion modeling,
demonstrating the effectiveness of our forward flow motion modeling. Project
page: https://npucvr.github.io/ForwardFlowDNeRF
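
To make the forward-warping idea concrete, below is a minimal NumPy sketch of average splatting on a voxel grid: every canonical voxel is pushed along its forward flow, colliding contributions are averaged (the many-to-one case), and cells that receive nothing are returned as holes for a subsequent inpainting step (the one-to-many case). The nearest-voxel splatting, function names, and grid layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def average_splat(canonical, flow):
    """Forward-warp a canonical feature grid with average splatting.

    canonical: (D, H, W, C) voxel features at the canonical time.
    flow:      (D, H, W, 3) forward flow in voxel units (dz, dy, dx).
    Returns the warped grid and a mask of cells that received no
    contribution (candidates for a subsequent inpainting step).
    """
    D, H, W, C = canonical.shape
    warped = np.zeros_like(canonical)
    counts = np.zeros((D, H, W), dtype=np.float32)

    # Nearest-voxel splatting: push every canonical voxel along its flow
    # vector; voxels landing in the same cell are averaged.
    idx = np.indices((D, H, W)).transpose(1, 2, 3, 0).astype(np.float32)
    target = np.rint(idx + flow).astype(int)

    valid = ((target >= 0) & (target < np.array([D, H, W]))).all(axis=-1)
    z, y, x = target[valid].T
    np.add.at(warped, (z, y, x), canonical[valid])
    np.add.at(counts, (z, y, x), 1.0)

    hit = counts > 0
    warped[hit] /= counts[hit][:, None]
    return warped, ~hit  # holes left for an inpainting network


if __name__ == "__main__":
    grid = np.random.rand(16, 16, 16, 4).astype(np.float32)
    flow = np.ones((16, 16, 16, 3), dtype=np.float32)  # shift by one voxel
    warped, holes = average_splat(grid, flow)
    print(warped.shape, holes.mean())
```

The paper additionally makes the splatting differentiable and trilinear so gradients flow back to the motion model; this sketch only shows the averaging that resolves many-to-one collisions.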
Related papers
- Investigation of moving objects through atmospheric turbulence from a non-stationary platform [0.5735035463793008]
In this work, we extract the optical flow field corresponding to moving objects from an image sequence captured from a moving camera.
Our procedure first computes the optical flow field and creates a motion model to compensate for the flow field induced by camera motion.
All of the sequences and code used in this work are open source and are available by contacting the authors.
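
The pipeline this summary describes (compute dense flow, fit a global model for camera-induced flow, subtract it) can be illustrated with standard OpenCV calls. The sketch below is not the authors' code (which they distribute on request); the homography-as-camera-model choice, parameter values, and function name are assumptions.

```python
import cv2
import numpy as np

def residual_object_flow(prev_gray, next_gray, ransac_thresh=3.0):
    """Flow left over after compensating camera-induced motion.

    Dense optical flow is computed between two frames, a homography is
    fitted with RANSAC as a global model of the flow induced by the
    moving camera, and the predicted camera flow is subtracted so that
    mostly independently moving objects remain.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, 0.5, 3, 21, 3, 5, 1.2, 0)

    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (N, 2) as (x, y)
    moved = pts + flow.reshape(-1, 2)

    # Fit the global camera-motion model on a subsampled set of pixels.
    H, _ = cv2.findHomography(pts[::16], moved[::16], cv2.RANSAC, ransac_thresh)

    # Flow that the camera motion alone would induce at every pixel.
    proj = np.hstack([pts, np.ones((pts.shape[0], 1), np.float32)]) @ H.T
    camera_flow = proj[:, :2] / proj[:, 2:3] - pts

    residual = flow - camera_flow.reshape(h, w, 2)
    return residual, np.linalg.norm(residual, axis=-1)  # magnitude map
```

Thresholding the returned magnitude map gives a rough moving-object mask; the paper's full procedure, including its handling of atmospheric turbulence, goes well beyond this sketch.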
arXiv Detail & Related papers (2024-10-29T00:54:28Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) semantic embedding and introduce the concept of gears to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- Degrees of Freedom Matter: Inferring Dynamics from Point Trajectories [28.701879490459675]
We aim to learn an implicit motion field parameterized by a neural network to predict the movement of novel points within the same domain.
We exploit the intrinsic regularization provided by SIREN, and modify the input layer to produce a spatiotemporally smooth motion field.
Our experiments assess the model's performance in predicting unseen point trajectories and its application in temporal mesh alignment with deformation.
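
As a rough illustration of an implicit motion field with SIREN-style regularization, the PyTorch sketch below maps a space-time query (x, y, z, t) to a 3D displacement through sine-activated layers, whose initialization is what provides the smoothness the summary refers to. Layer sizes, the frequency omega, and the 4D input convention are assumptions, not the paper's architecture.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer with a sine activation, initialized as in SIREN."""
    def __init__(self, in_f, out_f, omega=30.0, first=False):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_f, out_f)
        with torch.no_grad():
            bound = 1.0 / in_f if first else math.sqrt(6.0 / in_f) / omega
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

class MotionField(nn.Module):
    """Maps a space-time point (x, y, z, t) to a 3D displacement."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            SineLayer(4, hidden, first=True),
            SineLayer(hidden, hidden),
            SineLayer(hidden, hidden),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyzt):
        return self.net(xyzt)

if __name__ == "__main__":
    field = MotionField()
    pts = torch.rand(1024, 4)   # (x, y, z, t) samples
    print(field(pts).shape)     # torch.Size([1024, 3])
```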
arXiv Detail & Related papers (2024-06-05T21:02:10Z)
- SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior [53.52396082006044]
Current methods struggle to maintain rendering quality at viewpoints that deviate significantly from the training viewpoints.
This issue stems from the sparse training views captured by a fixed camera on a moving vehicle.
We propose a novel approach that enhances the capacity of 3DGS by leveraging a prior from a Diffusion Model.
arXiv Detail & Related papers (2024-03-29T09:20:29Z)
- DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
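
The DCT trajectory idea can be sketched in a few lines: a 3D point's path is expressed in a truncated cosine basis, so evaluating it at sub-frame times inside one exposure yields the smooth motion that explains the blur. The basis convention and coefficient count below are illustrative assumptions, not the DyBluRF parameterization.

```python
import numpy as np

def dct_trajectory(coeffs, t):
    """Evaluate a trajectory given in a truncated cosine (DCT-style) basis.

    coeffs: (K, 3) coefficients for a 3D point trajectory.
    t:      (N,) query times in [0, 1], e.g. samples inside one exposure.
    Returns (N, 3) positions; keeping K small enforces smooth,
    low-frequency motion.
    """
    K = coeffs.shape[0]
    basis = np.cos(np.pi * np.outer(t, np.arange(K)))  # (N, K)
    return basis @ coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(5, 3)) * 0.1       # 5 coefficients per axis
    t_exposure = np.linspace(0.40, 0.45, 9)      # sub-frame times during blur
    print(dct_trajectory(coeffs, t_exposure).shape)  # (9, 3)
```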
arXiv Detail & Related papers (2024-03-15T08:48:37Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
In contrast, we propose OD-NeRF to efficiently train and render dynamic NeRFs on-the-fly, making it capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up compared to the state-of-the-art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z)
- Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
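
A hypothetical PyTorch sketch of this idea: a small network scores (point cloud, gripper pose) pairs as an implicit value function over SE(3), and a sampling-based controller picks the best-scoring candidate pose. The encoder, the translation-plus-quaternion pose encoding, and the random-sampling loop are stand-ins, not the paper's architecture or its MPC formulation.

```python
import torch
import torch.nn as nn

class GraspValueField(nn.Module):
    """Scores a gripper pose against a point-cloud embedding.

    A stand-in for an implicit value function over SE(3): each pose is
    encoded as translation + quaternion (7D) and concatenated with a
    global feature of the object point cloud.
    """
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 7, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, cloud, poses):
        # cloud: (P, 3) object points; poses: (N, 7) candidate gripper poses.
        feat = self.encoder(cloud).max(dim=0).values      # global max-pool
        feat = feat.expand(poses.shape[0], -1)
        return self.head(torch.cat([feat, poses], dim=-1)).squeeze(-1)


def sample_best_pose(model, cloud, n_samples=256):
    """Score random pose candidates and return the highest-valued one
    (a crude stand-in for sampling-based MPC)."""
    t = torch.rand(n_samples, 3) - 0.5
    q = torch.nn.functional.normalize(torch.randn(n_samples, 4), dim=-1)
    poses = torch.cat([t, q], dim=-1)
    return poses[model(cloud, poses).argmax()]
```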
arXiv Detail & Related papers (2022-06-29T18:47:05Z)
- DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate learning dynamic radiance fields.
Experiments demonstrate that DeVRF achieves two orders of magnitude speedup with on-par high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z)
- EM-driven unsupervised learning for efficient motion segmentation [3.5232234532568376]
This paper presents a CNN-based fully unsupervised method for motion segmentation from optical flow.
We use the Expectation-Maximization (EM) framework to derive the loss function and the training procedure of our motion segmentation neural network.
Our method outperforms comparable unsupervised methods and is very efficient.
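
The classical EM iteration that such a loss builds on can be written compactly: the E-step computes soft responsibilities of each pixel's flow under K affine motion models, and the M-step refits each model by weighted least squares. The NumPy sketch below shows only this underlying iteration, not the paper's CNN, loss, or training procedure.

```python
import numpy as np

def em_step(flow, params, sigma=1.0):
    """One EM iteration segmenting a flow field into affine motion layers.

    flow:   (H, W, 2) optical flow.
    params: (K, 2, 3) affine parameters; model k predicts
            flow(x, y) = A_k @ [x, y, 1].
    Returns updated responsibilities (H, W, K) and refitted parameters.
    """
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W]
    X = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)], axis=1)  # (N, 3)
    F = flow.reshape(-1, 2)

    # E-step: Gaussian likelihood of each pixel's flow under each model.
    pred = np.einsum('kdc,nc->nkd', params, X)           # (N, K, 2)
    err = ((F[:, None, :] - pred) ** 2).sum(-1)          # (N, K)
    resp = np.exp(-err / (2 * sigma ** 2))
    resp /= resp.sum(axis=1, keepdims=True) + 1e-12

    # M-step: weighted least-squares refit of each affine model.
    new_params = np.empty_like(params)
    for k in range(params.shape[0]):
        sw = np.sqrt(resp[:, k:k + 1])
        A, *_ = np.linalg.lstsq(X * sw, F * sw, rcond=None)  # (3, 2)
        new_params[k] = A.T
    return resp.reshape(H, W, -1), new_params
```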
arXiv Detail & Related papers (2022-01-06T14:35:45Z)