Neural Parameterization for Dynamic Human Head Editing
- URL: http://arxiv.org/abs/2207.00210v1
- Date: Fri, 1 Jul 2022 05:25:52 GMT
- Title: Neural Parameterization for Dynamic Human Head Editing
- Authors: Li Ma, Xiaoyu Li, Jing Liao, Xuan Wang, Qi Zhang, Jue Wang, Pedro
Sander
- Abstract summary: We present Neural Parameterization (NeP), a hybrid representation that provides the advantages of both implicit and explicit methods.
NeP is capable of photo-realistic rendering while allowing fine-grained editing of the scene geometry and appearance.
The results show that NeP achieves almost the same level of rendering accuracy while maintaining high editability.
- Score: 26.071370285285465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit radiance functions emerged as a powerful scene representation for
reconstructing and rendering photo-realistic views of a 3D scene. These
representations, however, suffer from poor editability. On the other hand,
explicit representations such as polygonal meshes allow easy editing but are
not as suitable for reconstructing accurate details in dynamic human heads,
such as fine facial features, hair, teeth, and eyes. In this work, we present
Neural Parameterization (NeP), a hybrid representation that provides the
advantages of both implicit and explicit methods. NeP is capable of
photo-realistic rendering while allowing fine-grained editing of the scene
geometry and appearance. We first disentangle the geometry and appearance by
parameterizing the 3D geometry into 2D texture space. We enable geometric
editability by introducing an explicit linear deformation blending layer. The
deformation is controlled by a set of sparse key points, which can be
explicitly and intuitively displaced to edit the geometry. For appearance, we
develop a hybrid 2D texture consisting of an explicit texture map for easy
editing and implicit view and time-dependent residuals to model temporal and
view variations. We compare our method to several reconstruction and editing
baselines. The results show that NeP achieves almost the same level of
rendering accuracy as these baselines while maintaining high editability.
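The explicit linear deformation blending layer described above can be illustrated with a minimal numpy sketch: each surface point moves by a weighted sum of the sparse key-point displacements. The Gaussian (RBF) weighting, `sigma`, and the function names here are illustrative assumptions; NeP's actual blending weights are learned, but any normalized kernel exhibits the same linear blending structure.

```python
import numpy as np

def blend_weights(points, keypoints, sigma=0.1):
    """Normalized RBF weights of each surface point w.r.t. the sparse key
    points (hypothetical weighting; NeP learns its blending weights)."""
    # (N, K) squared distances between surface points and key points
    d2 = ((points[:, None, :] - keypoints[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)  # each row sums to 1

def deform(points, keypoints, displacements, sigma=0.1):
    """Linear deformation blending: each point is displaced by a weighted
    sum of the key-point displacements."""
    w = blend_weights(points, keypoints, sigma)  # (N, K)
    return points + w @ displacements            # (N, 3)

# Toy usage: dragging one key point pulls the nearby surface point along.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
kps = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
disp = np.array([[0.0, 0.1, 0.0], [0.0, 0.0, 0.0]])  # lift the first key point
out = deform(pts, kps, disp)  # out[0] is approximately [0, 0.1, 0]
```

Because the blending is linear in the displacements, editing stays intuitive: displacing a key point produces a proportional, locally supported change in the geometry.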
Related papers
- Image Sculpting: Precise Object Editing with 3D Geometry Control [33.9777412846583]
Image Sculpting is a new framework for editing 2D images by incorporating tools from 3D geometry and graphics.
It supports precise, quantifiable, and physically-plausible editing options such as pose editing, rotation, translation, 3D composition, carving, and serial addition.
arXiv Detail & Related papers (2024-01-02T18:59:35Z)
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called canonical image.
To avoid the distortion effect and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, well supports various ways of 3D editing (e.g., stylization, interactive drawing, and content extraction) with no need of re-optimization.
arXiv Detail & Related papers (2023-12-11T18:59:31Z)
- Text-Guided 3D Face Synthesis -- From Generation to Editing [53.86765812392627]
We propose a unified text-guided framework from face generation to editing.
We employ a fine-tuned texture diffusion model to enhance texture quality in both RGB and YUV space.
We propose a self-guided consistency weight strategy to improve editing efficacy while preserving consistency.
arXiv Detail & Related papers (2023-12-01T06:36:23Z)
- TADA! Text to Animatable Digital Avatars [57.52707683788961]
TADA takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures.
We derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map.
We render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process.
arXiv Detail & Related papers (2023-08-21T17:59:10Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a continuous 2D texture map.
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
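NeuTex's disentanglement, like NeP's hybrid 2D texture, reduces appearance to a lookup in a 2D texture map once the geometry has been parameterized to texture space. A minimal sketch of that lookup step is a bilinear sample of a texture at continuous UV coordinates; the function name and the normalized [0, 1] UV convention are assumptions for illustration, not taken from either paper.

```python
import numpy as np

def bilinear_sample(texture, uv):
    """Bilinearly sample an (H, W, C) texture at continuous uv in [0, 1]^2.

    Illustrates the texture-lookup step: given a 3D-to-2D parameterization,
    appearance at a surface point is read from the 2D texture map.
    """
    h, w, _ = texture.shape
    x = uv[:, 0] * (w - 1)          # continuous pixel coordinates
    y = uv[:, 1] * (h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    # Interpolate along x on the two rows, then along y between them.
    top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
    bot = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

In a hybrid scheme like NeP's, the value returned by such a lookup on the explicit map would then be combined with an implicit, view- and time-dependent residual to model appearance variation.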
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.