Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video
- URL: http://arxiv.org/abs/2310.09647v2
- Date: Tue, 24 Oct 2023 11:03:31 GMT
- Title: Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video
- Authors: Byeongjun Park, Changick Kim
- Abstract summary: We introduce point-based dynamic radiance fields, where the global geometric information and volume rendering process are trained by neural point clouds and dynamic radiance fields, respectively.
Specifically, we reconstruct neural point clouds directly from geometric proxies and optimize both radiance fields and the geometric proxies using our proposed losses.
We validate the effectiveness of our method with experiments on the NVIDIA Dynamic Scenes dataset and several casually captured monocular video clips.
- Score: 19.0733297053322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic radiance fields have emerged as a promising approach for generating
novel views from a monocular video. However, previous methods enforce
geometric consistency on dynamic radiance fields only between adjacent input
frames, making it difficult to represent the global scene geometry and
causing degradation at viewpoints that are spatio-temporally distant from the
input camera trajectory. To solve this problem, we introduce point-based dynamic
radiance fields (\textbf{Point-DynRF}), a novel framework where the global
geometric information and the volume rendering process are trained by neural
point clouds and dynamic radiance fields, respectively. Specifically, we
reconstruct neural point clouds directly from geometric proxies and optimize
both radiance fields and the geometric proxies using our proposed losses,
allowing them to complement each other. We validate the effectiveness of our
method with experiments on the NVIDIA Dynamic Scenes Dataset and several
casually captured monocular video clips.
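To make the high-level pipeline more concrete, the sketch below shows one way a point-conditioned dynamic radiance field can be rendered: features from the k nearest neural points are aggregated at each ray sample, fed together with the sample position and a time index into a small MLP, and composited with standard volume rendering. This is a minimal illustration in PyTorch under assumed hyperparameters; the names (PointDynRFSketch, aggregate_point_features, render_ray) are hypothetical and it does not reproduce the paper's actual architecture, its point-cloud reconstruction from geometric proxies, or its proposed losses.

    # Illustrative sketch only: a minimal point-conditioned dynamic radiance field.
    # All names and hyperparameters are hypothetical, not the authors' implementation.
    import torch
    import torch.nn as nn

    class PointDynRFSketch(nn.Module):
        """Tiny radiance-field MLP conditioned on aggregated neural-point features
        and a normalized time index."""
        def __init__(self, feat_dim: int = 32, hidden: int = 64):
            super().__init__()
            # input: aggregated point feature + 3D sample position + scalar time
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim + 3 + 1, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),  # (density, r, g, b)
            )

        def forward(self, agg_feat, xyz, t):
            out = self.mlp(torch.cat([agg_feat, xyz, t], dim=-1))
            sigma = torch.relu(out[..., :1])    # non-negative density
            rgb = torch.sigmoid(out[..., 1:])   # colors in [0, 1]
            return sigma, rgb

    def aggregate_point_features(samples, pts, feats, k=8):
        """Inverse-distance aggregation of the k nearest neural points per sample."""
        d = torch.cdist(samples, pts)                # (S, N) pairwise distances
        dist, idx = torch.topk(d, k, largest=False)  # (S, k) nearest neighbors
        w = 1.0 / (dist + 1e-6)
        w = w / w.sum(dim=-1, keepdim=True)          # normalized weights
        neigh = feats[idx]                           # (S, k, C) neighbor features
        return (w.unsqueeze(-1) * neigh).sum(dim=1)  # (S, C)

    def render_ray(model, origin, direction, pts, feats, t,
                   n_samples=64, near=0.1, far=4.0):
        """Standard volume rendering along one ray through the point-conditioned field."""
        z = torch.linspace(near, far, n_samples)
        samples = origin + z.unsqueeze(-1) * direction       # (S, 3) sample positions
        agg = aggregate_point_features(samples, pts, feats)  # (S, C)
        time = t.expand(n_samples, 1)
        sigma, rgb = model(agg, samples, time)                # (S, 1), (S, 3)
        delta = torch.full((n_samples, 1), (far - near) / n_samples)
        alpha = 1.0 - torch.exp(-sigma * delta)
        trans = torch.cumprod(
            torch.cat([torch.ones(1, 1), 1.0 - alpha + 1e-10], dim=0), dim=0)[:-1]
        weights = alpha * trans                                # (S, 1) compositing weights
        return (weights * rgb).sum(dim=0)                      # composited ray color

    if __name__ == "__main__":
        torch.manual_seed(0)
        pts = torch.rand(1000, 3) * 4.0      # toy neural point cloud positions
        feats = torch.randn(1000, 32)        # toy per-point features
        model = PointDynRFSketch()
        color = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]),
                           pts, feats, torch.tensor([0.5]))
        print(color)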
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
- OmniLocalRF: Omnidirectional Local Radiance Fields from Dynamic Videos [14.965321452764355]
We introduce a new approach called Omnidirectional Local Radiance Fields (OmniLocalRF) that can render static-only scene views.
Our approach combines the principles of local radiance fields with the bidirectional optimization of omnidirectional rays.
Our experiments validate that OmniLocalRF outperforms existing methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2024-03-31T12:55:05Z)
- DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
arXiv Detail & Related papers (2024-03-15T08:48:37Z)
- NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences [53.8501224122952]
We propose a novel neural video-based radiance fields (NeVRF) representation.
NeVRF marries neural radiance field with image-based rendering to support photo-realistic novel view synthesis on long-duration dynamic inward-looking scenes.
Our experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.
arXiv Detail & Related papers (2023-12-10T11:14:30Z)
- Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering [57.775678643512435]
We present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.
By combining the strengths of neural interpolation (NI) and depth image-based rendering (DIBR), the proposed Geo-NI is able to render views with large disparity.
arXiv Detail & Related papers (2022-06-20T12:25:34Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are represented as a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
- GeoNeRF: Generalizing NeRF with Geometry Priors [2.578242050187029]
We present GeoNeRF, a generalizable photorealistic novel view synthesis method based on neural radiance fields.
Our approach consists of two main stages: a geometry reasoner and a renderer.
Experiments show that GeoNeRF outperforms state-of-the-art generalizable neural rendering models on various synthetic and real datasets.
arXiv Detail & Related papers (2021-11-26T15:15:37Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.