DiffBody: Diffusion-based Pose and Shape Editing of Human Images
- URL: http://arxiv.org/abs/2401.02804v2
- Date: Mon, 8 Jan 2024 04:41:30 GMT
- Title: DiffBody: Diffusion-based Pose and Shape Editing of Human Images
- Authors: Yuta Okuyama, Yuki Endo, Yoshihiro Kanamori
- Abstract summary: We propose a one-shot approach that enables large edits with identity preservation.
To enable large edits, we fit a 3D body model, project the input image onto the 3D model, and change the body's pose and shape.
We further enhance the realism by fine-tuning text embeddings via self-supervised learning.
- Score: 1.7188280334580193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pose and body shape editing in a human image has received increasing
attention. However, current methods often struggle with dataset biases and
degrade realism and the person's identity when users make large edits. We
propose a one-shot approach that enables large edits with identity
preservation. To enable large edits, we fit a 3D body model, project the input
image onto the 3D model, and change the body's pose and shape. Because this
initial textured body model has artifacts due to occlusion and the inaccurate
body shape, the rendered image undergoes a diffusion-based refinement, in which
strong noise destroys body structure and identity whereas insufficient noise
does not help. We thus propose an iterative refinement with weak noise, applied
first for the whole body and then for the face. We further enhance the realism
by fine-tuning text embeddings via self-supervised learning. Our quantitative
and qualitative evaluations demonstrate that our method outperforms other
existing methods across various datasets.
Related papers
- SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion [35.73448283467723]
SiTH is a novel pipeline that integrates an image-conditioned diffusion model into a 3D mesh reconstruction workflow.
We employ a powerful generative diffusion model to hallucinate unseen back-view appearance based on the input images.
We further leverage skinned body meshes as guidance to recover full-body textured meshes from the input and back-view images.
arXiv Detail & Related papers (2023-11-27T14:22:07Z)
- Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Single-view 3D Body and Cloth Reconstruction under Complex Poses [37.86174829271747]
We extend existing implicit function-based models to deal with images of humans with arbitrary poses and self-occluded limbs.
We learn an implicit function that maps the input image to a 3D body shape with a low level of detail.
We then learn a displacement map, conditioned on the smoothed surface, which encodes the high-frequency details of the clothes and body.
arXiv Detail & Related papers (2022-05-09T07:34:06Z)
- DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks [12.132886846993108]
High-resolution models enable photo-realistic avatars but at the cost of requiring studio settings not available to end users.
Our goal is to create avatars directly from raw images without relying on expensive studio setups and surface tracking.
We introduce a three-stage method that induces two inductive biases to better disentangle pose-dependent deformation.
arXiv Detail & Related papers (2022-05-03T17:56:46Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To address the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- Learning Realistic Human Reposing using Cyclic Self-Supervision with 3D Shape, Pose, and Appearance Consistency [55.94908688207493]
We propose a self-supervised framework named SPICE that closes the image quality gap with supervised methods.
The key insight enabling self-supervision is to exploit 3D information about the human body in several ways.
SPICE achieves state-of-the-art performance on the DeepFashion dataset.
arXiv Detail & Related papers (2021-10-11T17:48:50Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use the deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
- Multi-View Consistency Loss for Improved Single-Image 3D Reconstruction of Clothed People [36.30755368202957]
We present a novel method to improve the accuracy of the 3D reconstruction of clothed human shape from a single image.
The accuracy and completeness of reconstructions of clothed people are limited due to the large variation in shape resulting from clothing, hair, body size, pose, and camera viewpoint.
arXiv Detail & Related papers (2020-09-29T17:18:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.