Point-Based Radiance Fields for Controllable Human Motion Synthesis
- URL: http://arxiv.org/abs/2310.03375v1
- Date: Thu, 5 Oct 2023 08:27:33 GMT
- Title: Point-Based Radiance Fields for Controllable Human Motion Synthesis
- Authors: Haitao Yu, Deheng Zhang, Peiyuan Xie, Tianyi Zhang
- Abstract summary: This paper proposes a controllable human motion synthesis method for fine-level deformation based on static point-based radiance fields.
Our method exploits an explicit point cloud to train the static 3D scene and applies the deformation by encoding point-cloud translations.
Our approach significantly outperforms the state of the art on fine-level complex deformation and generalizes to 3D characters beyond humans.
- Score: 7.322100850632633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a novel controllable human motion synthesis method for fine-level deformation based on static point-based radiance fields. Although previous editable neural radiance field methods can generate impressive results on novel-view synthesis and allow naive deformation, few algorithms can achieve complex 3D human editing such as forward kinematics. Our method exploits an explicit point cloud to train the static 3D scene and applies the deformation by encoding point-cloud translations with a deformation MLP. To ensure the rendering result is consistent with the canonical-space training, we estimate the local rotation using SVD and interpolate the per-point rotation to the query view direction of the pre-trained radiance field. Extensive experiments show that our approach significantly outperforms the state of the art on fine-level complex deformation and generalizes to 3D characters beyond humans.
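The abstract pins down two concrete ingredients: a deformation MLP that maps canonical points to translations, and an SVD-based (Kabsch-style) estimate of each point's local rotation, used to rotate query view directions back into the canonical frame. The sketch below is a minimal, hedged reading of that pipeline; the layer sizes, neighbour count, and function names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class DeformationMLP(nn.Module):
    """Tiny MLP mapping a canonical point (plus a deformation code) to a
    3D translation. Layer sizes are illustrative guesses."""
    def __init__(self, code_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts, code):
        # pts: (N, 3) canonical points; code: (N, code_dim) control signal
        return self.net(torch.cat([pts, code], dim=-1))  # (N, 3) translations

def estimate_local_rotations(canon_pts, deformed_pts, nbr_idx):
    """Kabsch-style per-point rotation via SVD.

    canon_pts, deformed_pts: (N, 3); nbr_idx: (N, K) indices of each
    point's K nearest neighbours in the canonical cloud.
    Returns (N, 3, 3) rotations mapping canonical to deformed frames.
    """
    P = canon_pts[nbr_idx] - canon_pts[:, None, :]        # (N, K, 3) canonical offsets
    Q = deformed_pts[nbr_idx] - deformed_pts[:, None, :]  # (N, K, 3) deformed offsets
    H = P.transpose(1, 2) @ Q                             # (N, 3, 3) covariance
    U, _, Vt = torch.linalg.svd(H)
    # Reflection fix: force det(R) = +1 so the result is a proper rotation.
    det = torch.det(Vt.transpose(1, 2) @ U.transpose(1, 2))
    S = torch.eye(3).repeat(len(P), 1, 1)
    S[:, 2, 2] = det
    return Vt.transpose(1, 2) @ S @ U.transpose(1, 2)     # (N, 3, 3)

def canonical_view_dirs(view_dirs, rotations):
    """Rotate per-point query view directions back into the canonical
    frame, keeping rendering consistent with canonical-space training."""
    return (rotations.transpose(1, 2) @ view_dirs[..., None]).squeeze(-1)
```

In use, the deformed cloud would be `canon + DeformationMLP(canon, code)`, the rotations would come from `estimate_local_rotations` over a fixed k-NN graph, and the pre-trained radiance field would then be queried at the canonical points with the rotated view directions.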
Related papers
- 3D Gaussian Editing with A Single Image [19.662680524312027]
We introduce a novel single-image-driven 3D scene editing approach based on 3D Gaussian Splatting.
Our method learns to optimize the 3D Gaussians to align with an edited version of the image rendered from a user-specified viewpoint.
Experiments show the effectiveness of our method in handling geometric details, long-range, and non-rigid deformation.
arXiv Detail & Related papers (2024-08-14T13:17:42Z)
- Generalizable Human Gaussians for Sparse View Synthesis [48.47812125126829]
This paper introduces a new method to learn generalizable human Gaussians that allows photorealistic and accurate view-rendering of a new human subject from a limited set of sparse views.
A pivotal innovation of our approach involves reformulating the learning of 3D Gaussian parameters into a regression process defined on the 2D UV space of a human template.
Our method outperforms recent methods in both within-dataset and cross-dataset generalization settings.
arXiv Detail & Related papers (2024-07-17T17:56:30Z)
- SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes [59.23385953161328]
Novel view synthesis for dynamic scenes is still a challenging problem in computer vision and graphics.
We propose a new representation that explicitly decomposes the motion and appearance of dynamic scenes into sparse control points and dense Gaussians (see the control-point interpolation sketch after this list).
Our method can enable user-controlled motion editing while retaining high-fidelity appearances.
arXiv Detail & Related papers (2023-12-04T11:57:14Z)
- Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field [55.431697263581626]
We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation.
CNeRF divides the image into semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image.
Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation, while maintaining high-quality 3D-consistent synthesis.
arXiv Detail & Related papers (2023-02-03T07:17:46Z)
- Learned Vertex Descent: A New Direction for 3D Human Model Fitting [64.04726230507258]
We propose a novel optimization-based paradigm for 3D human model fitting on images and scans.
Our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement over the state of the art.
LVD is also applicable to 3D model fitting of humans and hands, for which we show a significant improvement over the SOTA with a much simpler and faster method.
arXiv Detail & Related papers (2022-05-12T17:55:51Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm (a minimal LBS sketch appears after this list).
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Animatable Neural Radiance Fields for Human Body Modeling [54.41477114385557]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce neural blend weight fields to produce the deformation fields.
Experiments show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2021-05-06T17:58:13Z)
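The SC-GS entry above hinges on driving many dense Gaussians from a handful of control points. As referenced there, the sketch below shows one plausible reading of that decomposition: each dense Gaussian's motion is interpolated from the rigid transforms of its k nearest control points. The fixed RBF weights, shapes, and function name are illustrative assumptions; in the paper such quantities would be learned rather than fixed as here.

```python
import numpy as np

def interpolate_gaussian_motion(gauss_xyz, ctrl_xyz, ctrl_transforms, k=4, sigma=0.1):
    """Drive dense Gaussians from sparse control points (illustrative sketch).

    gauss_xyz:       (N, 3) dense Gaussian centers (canonical)
    ctrl_xyz:        (M, 3) sparse control point positions (canonical)
    ctrl_transforms: (M, 4, 4) per-control-point rigid transforms for one frame
    Returns posed Gaussian centers of shape (N, 3).
    """
    d = np.linalg.norm(gauss_xyz[:, None, :] - ctrl_xyz[None, :, :], axis=-1)  # (N, M)
    nbr = np.argsort(d, axis=1)[:, :k]                                         # (N, k) nearest controls
    # Fixed Gaussian RBF weights over the k nearest control points.
    w = np.exp(-np.take_along_axis(d, nbr, axis=1) ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)                                          # (N, k) normalized
    homo = np.concatenate([gauss_xyz, np.ones((len(gauss_xyz), 1))], axis=1)   # (N, 4) homogeneous
    blended = np.einsum("nk,nkij->nij", w, ctrl_transforms[nbr])               # (N, 4, 4) blended warp
    return np.einsum("nij,nj->ni", blended, homo)[:, :3]
```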
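The two animatable-avatar entries above rest on linear blend skinning. As referenced there, here is a minimal LBS sketch under assumed shapes: each canonical point is warped by a weight-blended combination of per-bone rigid transforms. The rig size and function name are illustrative.

```python
import numpy as np

def lbs_warp(pts, weights, R, t):
    """Linear blend skinning: warp canonical points by blending per-bone
    rigid transforms. Shapes are illustrative (an SMPL-like rig has B=24).

    pts:     (N, 3) canonical points
    weights: (N, B) blend weights, rows summing to 1
    R:       (B, 3, 3) per-bone rotations; t: (B, 3) per-bone translations
    """
    per_bone = np.einsum("bij,nj->nbi", R, pts) + t[None]  # (N, B, 3) each bone's warp
    return np.einsum("nb,nbi->ni", weights, per_bone)      # (N, 3) blended result
```

As the abstracts describe it, the pose-driven deformation fields run this map in reverse, pulling observation-space samples back to the canonical space, with neural blend weight fields standing in for fixed template skinning weights.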