Deforming Radiance Fields with Cages
- URL: http://arxiv.org/abs/2207.12298v1
- Date: Mon, 25 Jul 2022 16:08:55 GMT
- Title: Deforming Radiance Fields with Cages
- Authors: Tianhan Xu and Tatsuya Harada
- Abstract summary: We propose a new type of deformation of the radiance field: free-form radiance field deformation.
We use a triangular mesh that encloses the foreground object, called a cage, as an interface.
We propose a novel formulation that extends cage-based deformation to the radiance field, mapping the position and view direction of the sampling points from the deformed space to the canonical space.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in radiance fields enable photorealistic rendering of static
or dynamic 3D scenes, but they still do not support the explicit deformation needed
for scene manipulation or animation. In this paper, we propose a method that enables
a new type of deformation of the radiance field: free-form radiance field deformation.
We use a triangular mesh that encloses the foreground object, called a cage, as an
interface; by manipulating the cage vertices, our approach enables free-form
deformation of the radiance field. The core of our approach is cage-based
deformation, which is commonly used in mesh deformation. We propose a novel
formulation that extends it to the radiance field by mapping the position and view
direction of each sampling point from the deformed space to the canonical space,
thus enabling rendering of the deformed scene. Deformation results on synthetic and
real-world datasets demonstrate the effectiveness of our approach.
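To make the backward mapping concrete, here is a minimal sketch of cage-based inverse deformation. It assumes 2D mean value coordinates (a standard choice of cage coordinates) and a toy square cage; the paper itself works with 3D triangular cage meshes and additionally transforms view directions, which this sketch omits. Function and variable names are illustrative, not from the paper's code.

```python
# A minimal sketch of cage-based backward mapping, assuming 2D mean value
# coordinates as the cage coordinate system. The idea: compute coordinates
# of a sample point with respect to the DEFORMED cage, then evaluate those
# same coordinates on the CANONICAL cage to get the canonical-space point
# at which the static radiance field would be queried.
import numpy as np

def cross2(a, b):
    # z-component of the 2D cross product
    return a[0] * b[1] - a[1] * b[0]

def mean_value_coords(p, cage):
    """Mean value coordinates of a point p strictly inside a closed 2D
    polygon `cage` (vertices in counter-clockwise order, shape (n, 2))."""
    d = cage - p                          # vectors from p to the cage vertices
    r = np.linalg.norm(d, axis=1)         # distances from p to the vertices
    n = len(cage)
    w = np.zeros(n)
    for i in range(n):
        j, k = (i + 1) % n, (i - 1) % n   # next / previous vertex
        # signed angles at p subtended by the edges (k, i) and (i, j)
        a_next = np.arctan2(cross2(d[i], d[j]), np.dot(d[i], d[j]))
        a_prev = np.arctan2(cross2(d[k], d[i]), np.dot(d[k], d[i]))
        w[i] = (np.tan(a_prev / 2) + np.tan(a_next / 2)) / r[i]
    return w / w.sum()                    # normalize to a partition of unity

# Toy example: a unit-square cage and a deformed copy with a sheared top edge.
cage_canonical = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
cage_deformed = cage_canonical + np.array([[0, 0], [0, 0], [.4, 0], [.4, 0]])

x_deformed = np.array([0.7, 0.8])         # a sample point in deformed space
lam = mean_value_coords(x_deformed, cage_deformed)
x_canonical = lam @ cage_canonical        # pulled back to canonical space
print(x_canonical)
```

Because mean value coordinates reproduce linear functions, evaluating them on the deformed cage recovers the query point itself, while evaluating them on the canonical cage yields its pre-image; this is exactly the deformed-to-canonical mapping of sample positions that the abstract describes.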
Related papers
- DynoSurf: Neural Deformation-based Temporally Consistent Dynamic Surface Reconstruction
This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without correspondence.
We propose DynoSurf, an unsupervised learning framework integrating a template surface representation with a learnable deformation field.
Experimental results demonstrate that DynoSurf significantly outperforms current state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T08:58:48Z)
- Point-Based Radiance Fields for Controllable Human Motion Synthesis
This paper proposes a controllable human motion synthesis method for fine-level deformation based on static point-based radiance fields.
Our method exploits an explicit point cloud to train the static 3D scene and applies the deformation by encoding point cloud translations.
Our approach significantly outperforms the state of the art on fine-level complex deformations and generalizes to 3D characters beyond humans.
arXiv Detail & Related papers (2023-10-05T08:27:33Z)
- Neural Shape Deformation Priors
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z)
- Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects
3D-aware generative models have demonstrated superb performance in generating 3D neural radiance fields (NeRF) from collections of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
arXiv Detail & Related papers (2022-09-09T08:44:06Z)
- NeRF-Editing: Geometry Editing of Neural Radiance Fields
Implicit neural rendering has shown great potential in novel view synthesis of a scene.
We propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene.
Our framework can achieve ideal editing results not only on synthetic data, but also on real scenes captured by users.
arXiv Detail & Related papers (2022-05-10T15:35:52Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud (see the graph-Laplacian sketch after this list).
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm (see the LBS sketch after this list).
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding (see the root-finding sketch after this list).
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
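The Pop-Out Motion entry above predicts a shape Laplacian for a point cloud. As background on what such an operator looks like, here is a minimal sketch of a k-nearest-neighbor graph Laplacian L = D - W with Gaussian edge weights; this is a generic stand-in for illustration, not the specific Laplacian construction used in that paper, and all names and parameters are illustrative.

```python
# Minimal graph Laplacian L = D - W on a point cloud, with Gaussian weights
# over k-nearest-neighbor edges; a generic stand-in for a "shape Laplacian".
import numpy as np

def knn_graph_laplacian(points, k=6, sigma=0.5):
    """Symmetric kNN graph Laplacian for points of shape (n, 3)."""
    diff = points[:, None, :] - points[None, :, :]     # pairwise differences
    dist = np.linalg.norm(diff, axis=-1)               # (n, n) distances
    W = np.zeros_like(dist)
    for i in range(len(points)):
        nbrs = np.argsort(dist[i])[1:k + 1]            # skip self (index 0)
        W[i, nbrs] = np.exp(-dist[i, nbrs] ** 2 / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                             # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W                  # L = D - W

pts = np.random.default_rng(0).normal(size=(100, 3))
L = knn_graph_laplacian(pts)
print(L.shape, np.allclose(L, L.T))                    # (100, 100) True
```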
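The "Animatable Implicit Neural Representations" entry builds its pose-driven deformation field on linear blend skinning (LBS). Below is a minimal, self-contained LBS sketch; the bone transforms and skinning weights are toy values standing in for that paper's learned quantities.

```python
# Minimal linear blend skinning (LBS): each point is deformed by a weighted
# blend of per-bone rigid transforms.
import numpy as np

def lbs(x, weights, rotations, translations):
    """Pose canonical points x (n, 3) using skinning weights (n, b) over b
    bones, each bone given by a rotation (b, 3, 3) and translation (b, 3)."""
    posed = np.einsum('bij,nj->nbi', rotations, x) + translations  # (n, b, 3)
    return np.einsum('nb,nbi->ni', weights, posed)                 # blend

# Toy example: blend an identity bone with a 90-degree rotation about z.
R = np.stack([np.eye(3),
              np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])])
t = np.array([[0., 0., 0.], [0., 0., 0.5]])
x = np.array([[1., 0., 0.]])
w = np.array([[0.3, 0.7]])    # 30% bone 0, 70% bone 1
print(lbs(x, w, R, t))        # -> [[0.3, 0.7, 0.35]]
```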
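The SNARF entry relies on iterative root finding to invert forward skinning: given a deformed point, search for a canonical point whose forward-skinned image matches it. The sketch below illustrates the idea with plain Newton iteration, a finite-difference Jacobian, a single initialization, and a toy skinning-weight field; SNARF itself uses Broyden's method, multiple initializations to find all correspondences, and a learned weight network.

```python
# Sketch of correspondence search by root finding: solve
# forward_skin(x_c) = x_d for the canonical point x_c.
import numpy as np

def forward_skin(x_c, rotations, translations, weight_fn):
    """Forward LBS of a single canonical point x_c (3,)."""
    w = weight_fn(x_c)                        # (b,) skinning weights
    posed = rotations @ x_c + translations    # (b, 3) per-bone transforms
    return w @ posed

def find_canonical(x_d, rotations, translations, weight_fn, iters=20):
    """Newton iteration with a finite-difference Jacobian of the residual."""
    x_c = x_d.copy()                          # initialize at the deformed point
    for _ in range(iters):
        r = forward_skin(x_c, rotations, translations, weight_fn) - x_d
        J = np.stack([(forward_skin(x_c + 1e-5 * e, rotations, translations,
                                    weight_fn) - (r + x_d)) / 1e-5
                      for e in np.eye(3)], axis=1)
        x_c = x_c - np.linalg.solve(J, r)     # Newton step
    return x_c

# Toy weight field: soft assignment to two bone centers on the x-axis.
centers = np.array([[-1., 0., 0.], [1., 0., 0.]])
def weight_fn(x):
    w = np.exp(-np.linalg.norm(centers - x, axis=1))
    return w / w.sum()

R = np.stack([np.eye(3),
              np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])])
t = np.zeros((2, 3))
x_d = np.array([0.4, 0.6, 0.0])
x_c = find_canonical(x_d, R, t, weight_fn)
# residual should be ~0 if the iteration converged
print(np.linalg.norm(forward_skin(x_c, R, t, weight_fn) - x_d))
```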
This list is automatically generated from the titles and abstracts of the papers on this site.