Template-free Articulated Neural Point Clouds for Reposable View
Synthesis
- URL: http://arxiv.org/abs/2305.19065v2
- Date: Tue, 31 Oct 2023 17:20:57 GMT
- Title: Template-free Articulated Neural Point Clouds for Reposable View
Synthesis
- Authors: Lukas Uzolas, Elmar Eisemann, Petr Kellnhofer
- Abstract summary: We present a novel method to jointly learn a Dynamic NeRF and an associated skeletal model from even sparse multi-view video.
Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses.
- Score: 11.535440791891217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Neural Radiance Fields (NeRFs) achieve remarkable visual quality when
synthesizing novel views of time-evolving 3D scenes. However, the common
reliance on backward deformation fields makes reanimation of the captured
object poses challenging. Moreover, state-of-the-art dynamic models are
often limited by low visual fidelity, long reconstruction times, or specificity
to narrow application domains. In this paper, we present a novel method
utilizing a point-based representation and Linear Blend Skinning (LBS) to
jointly learn a Dynamic NeRF and an associated skeletal model from even sparse
multi-view video. Our forward-warping approach achieves state-of-the-art visual
fidelity when synthesizing novel views and poses while significantly reducing
the necessary learning time when compared to existing work. We demonstrate the
versatility of our representation on a variety of articulated objects from
common datasets and obtain reposable 3D reconstructions without the need for
object-specific skeletal templates. Code will be made available at
https://github.com/lukasuz/Articulated-Point-NeRF.
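Since the abstract names Linear Blend Skinning (LBS) as the forward-warping
mechanism, a minimal sketch may help make the idea concrete. This is not the
authors' implementation: the function name, tensor shapes, and toy bone setup
below are assumptions. It shows only the standard LBS equation, blending
per-bone rigid transforms with per-point skinning weights to warp a canonical
point cloud into a target pose.

```python
import torch

def lbs_forward_warp(points, weights, bone_transforms):
    """Warp canonical points into a posed frame via Linear Blend Skinning.

    points:          (N, 3)    canonical point cloud
    weights:         (N, B)    per-point skinning weights (rows sum to 1)
    bone_transforms: (B, 4, 4) rigid transform of each bone for the target pose
    """
    # Homogeneous coordinates: (N, 4)
    homo = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)
    # Blend the per-bone transforms into one matrix per point: (N, 4, 4)
    blended = torch.einsum('nb,bij->nij', weights, bone_transforms)
    # Apply the blended transform and drop the homogeneous coordinate
    warped = torch.einsum('nij,nj->ni', blended, homo)
    return warped[:, :3]

# Toy usage: two bones, identity plus a translated bone. The weights here are
# stand-ins for what the method would learn jointly with the radiance field.
pts = torch.rand(1024, 3)
w = torch.softmax(torch.rand(1024, 2), dim=-1)
T = torch.eye(4).repeat(2, 1, 1)
T[1, :3, 3] = torch.tensor([0.0, 0.5, 0.0])  # move bone 1 up by 0.5
posed = lbs_forward_warp(pts, w, T)
```

Because points are pushed forward from the canonical space rather than queried
backward from each deformed frame, re-posing reduces to supplying new bone
transforms.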
Related papers
- SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields [9.606992888590757]
We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function.
We propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model.
Experiments show that the proposed approach achieves state-of-the-art rendering quality of continual learning NeRF on NeRF-Synthetic, LLFF, and TanksAndTemples datasets.
arXiv Detail & Related papers (2024-09-06T03:36:12Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of the dynamic regions of the scene.
At the same time, almost for free, our tracking approach enables free-viewpoint tracking of targets of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- REACTO: Reconstructing Articulated Objects from a Single Video [64.89760223391573]
We propose a novel deformation model that enhances the rigidity of each part while maintaining flexible deformation of the joints.
Our method outperforms previous works in producing higher-fidelity 3D reconstructions of general articulated objects.
arXiv Detail & Related papers (2024-04-17T08:01:55Z)
- Knowledge NeRF: Few-shot Novel View Synthesis for Dynamic Articulated Objects [8.981452149411714]
We present Knowledge NeRF to synthesize novel views for dynamic scenes.
We pretrain a NeRF model for an articulated object. When the articulated object moves, Knowledge NeRF learns to generate novel views at the new state.
arXiv Detail & Related papers (2024-03-31T12:45:23Z)
- Animating NeRFs from Texture Space: A Framework for Pose-Dependent Rendering of Human Performances [11.604386285817302]
We introduce a novel NeRF-based framework for pose-dependent rendering of human performances.
Our approach results in high-quality renderings for novel-view and novel-pose synthesis.
arXiv Detail & Related papers (2023-11-06T14:34:36Z)
- Neural Radiance Fields (NeRFs): A Review and Some Recent Developments [0.0]
Neural Radiance Field (NeRF) is a framework that represents a 3D scene in the weights of a fully connected neural network.
NeRFs have become a popular field of research as recent developments have expanded the performance and capabilities of the base framework (a minimal sketch of the base NeRF network follows after this list).
arXiv Detail & Related papers (2023-04-30T03:23:58Z)
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three-orders-of-magnitude compression rate and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural feature renderer.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Neural Rendering of Humans in Novel View and Pose from Monocular Video [68.37767099240236]
We introduce a new method that generates photo-realistic humans under novel views and poses given a monocular video as input.
Our method significantly outperforms existing approaches under unseen poses and novel views given monocular videos as input.
arXiv Detail & Related papers (2022-04-04T03:09:20Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
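To make the NeRF review entry's description concrete ("a 3D scene in the
weights of a fully connected neural network"), here is a toy sketch. It is an
assumption-laden simplification, not code from any of the listed papers: the
class name TinyNeRF, the layer sizes, and the frequency count are invented
here, and view dependence and volume rendering are omitted. A positionally
encoded 3D point is mapped by a small MLP to a density and a color.

```python
import math
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sin/cos features at geometrically spaced frequencies."""
    freqs = (2.0 ** torch.arange(num_freqs)) * math.pi
    angles = x[..., None] * freqs                 # (..., 3, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., 3 * 2 * num_freqs)

class TinyNeRF(nn.Module):
    """A fully connected network mapping a 3D point to (density, RGB)."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * num_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # sigma + RGB
        )

    def forward(self, xyz):
        h = self.mlp(positional_encoding(xyz, self.num_freqs))
        sigma = torch.relu(h[..., :1])            # non-negative density
        rgb = torch.sigmoid(h[..., 1:])           # colors in [0, 1]
        return sigma, rgb

sigma, rgb = TinyNeRF()(torch.rand(4096, 3))      # query 4096 sample points
```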