Template-free Articulated Neural Point Clouds for Reposable View Synthesis
- URL: http://arxiv.org/abs/2305.19065v2
- Date: Tue, 31 Oct 2023 17:20:57 GMT
- Title: Template-free Articulated Neural Point Clouds for Reposable View Synthesis
- Authors: Lukas Uzolas, Elmar Eisemann, Petr Kellnhofer
- Abstract summary: We present a novel method to jointly learn a Dynamic NeRF and an associated skeletal model from even sparse multi-view video.
Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses.
- Score: 11.535440791891217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Neural Radiance Fields (NeRFs) achieve remarkable visual quality when
synthesizing novel views of time-evolving 3D scenes. However, the common
reliance on backward deformation fields makes reanimating the captured
object's poses challenging. Moreover, state-of-the-art dynamic models are
often limited by low visual fidelity, long reconstruction times, or specificity
to narrow application domains. In this paper, we present a novel method
utilizing a point-based representation and Linear Blend Skinning (LBS) to
jointly learn a Dynamic NeRF and an associated skeletal model from even sparse
multi-view video. Our forward-warping approach achieves state-of-the-art visual
fidelity when synthesizing novel views and poses while significantly reducing
the necessary learning time when compared to existing work. We demonstrate the
versatility of our representation on a variety of articulated objects from
common datasets and obtain reposable 3D reconstructions without the need for
object-specific skeletal templates. Code will be made available at
https://github.com/lukasuz/Articulated-Point-NeRF.
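As a concrete illustration of the forward-warping described above: below is a minimal Linear Blend Skinning (LBS) sketch in NumPy. The function name, toy bone count, and hard-coded skinning weights are hypothetical; the paper's method learns the skeleton, skinning weights, and radiance field jointly, which this sketch does not attempt.

    import numpy as np

    def forward_lbs(points, weights, rotations, translations):
        """Warp canonical points into a target pose with Linear Blend Skinning.
        points: (N, 3), weights: (N, J) with rows summing to 1,
        rotations: (J, 3, 3), translations: (J, 3)."""
        # Rigidly transform every point by every bone: (J, N, 3).
        per_bone = np.einsum('jab,nb->jna', rotations, points) + translations[:, None, :]
        # Blend the J candidate positions with the skinning weights: (N, 3).
        return np.einsum('nj,jna->na', weights, per_bone)

    # Toy example: three points skinned to two bones; the second bone lifts its points.
    pts = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
    w = np.array([[1., 0.], [0.5, 0.5], [0., 1.]])
    R = np.stack([np.eye(3), np.eye(3)])        # identity rotations
    t = np.array([[0., 0., 0.], [0., 1., 0.]])  # second bone translates up
    print(forward_lbs(pts, w, R, t))

Because points are warped forward from a canonical space into the target pose, reposing only requires supplying new bone transforms, in contrast to the backward deformation fields the abstract identifies as hard to reanimate.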
Related papers
- GAS: Generative Avatar Synthesis from a Single Image [54.95198111659466]
We introduce a generalizable and unified framework to synthesize view-consistent and temporally coherent avatars from a single image.
Our approach combines the reconstruction power of regression-based 3D human reconstruction with the generative capabilities of a diffusion model.
arXiv Detail & Related papers (2025-02-10T19:00:39Z)
- Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis [21.444265403717015]
We propose a novel approach to automatically discover the associated skeleton model for dynamic objects from videos without the need for object-specific templates.
Treating superpoints as rigid parts, we can discover the underlying skeleton model through intuitive cues and optimize it using the kinematic model.
Experiments demonstrate the effectiveness and efficiency of our method in obtaining re-posable 3D objects.
arXiv Detail & Related papers (2024-12-07T07:35:09Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way for learning a spatio-temporal (4D) semantic embedding, with gears to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our tracking approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- REACTO: Reconstructing Articulated Objects from a Single Video [64.89760223391573]
We propose a novel deformation model that enhances the rigidity of each part while maintaining flexible deformation of the joints.
Our method outperforms previous works in producing higher-fidelity 3D reconstructions of general articulated objects.
arXiv Detail & Related papers (2024-04-17T08:01:55Z)
- Knowledge NeRF: Few-shot Novel View Synthesis for Dynamic Articulated Objects [8.981452149411714]
We present Knowledge NeRF to synthesize novel views for dynamic scenes.
We pretrain a NeRF model for an articulated object. When the articulated object moves, Knowledge NeRF learns to generate novel views of it at the new state.
arXiv Detail & Related papers (2024-03-31T12:45:23Z)
- Animating NeRFs from Texture Space: A Framework for Pose-Dependent Rendering of Human Performances [11.604386285817302]
We introduce a novel NeRF-based framework for pose-dependent rendering of human performances.
Our approach results in high-quality renderings for novel-view and novel-pose synthesis.
arXiv Detail & Related papers (2023-11-06T14:34:36Z)
- Neural Radiance Fields (NeRFs): A Review and Some Recent Developments [0.0]
Neural Radiance Field (NeRF) is a framework that represents a 3D scene in the weights of a fully connected neural network (a minimal sketch of this idea appears after this list).
NeRFs have become a popular field of research as recent developments expand the performance and capabilities of the base framework.
arXiv Detail & Related papers (2023-04-30T03:23:58Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via neural feature rendering.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Neural Rendering of Humans in Novel View and Pose from Monocular Video [68.37767099240236]
We introduce a new method that generates photo-realistic humans under novel views and poses given a monocular video as input.
Our method significantly outperforms existing approaches under unseen poses and novel views given monocular videos as input.
arXiv Detail & Related papers (2022-04-04T03:09:20Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than state-of-the-art methods on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
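For the NeRF review entry above: a minimal sketch of a scene "stored in the weights" of a fully connected network, assuming only the original NeRF formulation. The TinyNeRF class, its layer sizes, and the frequency count are illustrative choices, not code from any listed paper; the weights here are random, whereas a real NeRF optimizes them against posed images.

    import numpy as np

    def positional_encoding(x, n_freqs=4):
        """Map 3D points to sin/cos features, as in the original NeRF paper."""
        feats = [x]
        for i in range(n_freqs):
            feats += [np.sin(2.0**i * np.pi * x), np.cos(2.0**i * np.pi * x)]
        return np.concatenate(feats, axis=-1)

    class TinyNeRF:
        """A 3D scene represented by the weights of a small fully connected network."""
        def __init__(self, n_freqs=4, hidden=64, seed=0):
            rng = np.random.default_rng(seed)
            d_in = 3 + 2 * n_freqs * 3                     # raw coords + sin/cos features
            self.w1 = rng.normal(0.0, 0.1, (d_in, hidden))
            self.w2 = rng.normal(0.0, 0.1, (hidden, 4))    # outputs: RGB + density
            self.n_freqs = n_freqs

        def __call__(self, pts):
            h = np.maximum(positional_encoding(pts, self.n_freqs) @ self.w1, 0.0)  # ReLU
            out = h @ self.w2
            rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))   # sigmoid keeps colors in [0, 1]
            sigma = np.maximum(out[:, 3], 0.0)         # non-negative volume density
            return rgb, sigma

    field = TinyNeRF()
    rgb, sigma = field(np.random.rand(8, 3))  # query 8 random 3D points
    print(rgb.shape, sigma.shape)             # (8, 3) (8,)

Rendering integrates these colors and densities along camera rays; training fits the weights so the integrals reproduce the input images.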