Diffusion-HPC: Synthetic Data Generation for Human Mesh Recovery in
Challenging Domains
- URL: http://arxiv.org/abs/2303.09541v2
- Date: Sun, 31 Dec 2023 00:17:33 GMT
- Title: Diffusion-HPC: Synthetic Data Generation for Human Mesh Recovery in
Challenging Domains
- Authors: Zhenzhen Weng, Laura Bravo-Sánchez, Serena Yeung-Levy
- Abstract summary: We propose a text-conditioned method that generates photo-realistic images with plausible posed humans by injecting prior knowledge about human body structure.
Our generated images are accompanied by 3D meshes that serve as ground truths for improving Human Mesh Recovery tasks.
- Score: 2.7624021966289605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent text-to-image generative models have exhibited remarkable abilities in
generating high-fidelity and photo-realistic images. However, despite the
visually impressive results, these models often struggle to preserve plausible
human structure in the generations. For this reason, while generative models
have shown promising results in aiding downstream image recognition tasks by
generating large volumes of synthetic data, they are not suitable for improving
downstream human pose perception and understanding. In this work, we propose a
Diffusion model with Human Pose Correction (Diffusion-HPC), a text-conditioned
method that generates photo-realistic images with plausible posed humans by
injecting prior knowledge about human body structure. Our generated images are
accompanied by 3D meshes that serve as ground truths for improving Human Mesh
Recovery tasks, where a shortage of 3D training data has long been an issue.
Furthermore, we show that Diffusion-HPC effectively improves the realism of
human generations under varying conditioning strategies.
Related papers
- HumanGif: Single-View Human Diffusion with Generative Prior [25.516544735593087]
Motivated by the success of 2D character animation, we propose HumanGif, a single-view human diffusion model with generative priors.
We formulate the single-view-based 3D human novel view and pose synthesis as a single-view-conditioned human diffusion process.
arXiv Detail & Related papers (2025-02-17T17:55:27Z)
- GAS: Generative Avatar Synthesis from a Single Image [54.95198111659466]
We introduce a generalizable and unified framework to synthesize view-consistent and temporally coherent avatars from a single image.
Our approach bridges this gap by combining the reconstruction power of regression-based 3D human reconstruction with the generative capabilities of a diffusion model.
arXiv Detail & Related papers (2025-02-10T19:00:39Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- GeneMAN: Generalizable Single-Image 3D Human Reconstruction from Multi-Source Human Data [61.05815629606135]
Given a single in-the-wild human photo, it remains a challenging task to reconstruct a high-fidelity 3D human model.
GeneMAN builds upon a comprehensive collection of high-quality human data.
GeneMAN could generate high-quality 3D human models from a single image input, outperforming prior state-of-the-art methods.
arXiv Detail & Related papers (2024-11-27T18:59:54Z)
- Detecting Human Artifacts from Text-to-Image Models [16.261759535724778]
The accompanying dataset contains generated images of human bodies, including poorly generated examples with distorted or missing body parts.
arXiv Detail & Related papers (2024-11-21T05:02:13Z)
- PSHuman: Photorealistic Single-view Human Reconstruction using Cross-Scale Diffusion [43.850899288337025]
PSHuman is a novel framework that explicitly reconstructs human meshes utilizing priors from the multiview diffusion model.
It is found that directly applying multiview diffusion on single-view human images leads to severe geometric distortions.
To enhance cross-view body shape consistency of varied human poses, we condition the generative model on parametric models like SMPL-X.
arXiv Detail & Related papers (2024-09-16T10:13:06Z)
- Boost Your Own Human Image Generation Model via Direct Preference Optimization with AI Feedback [5.9726297901501475]
We introduce a novel approach tailored specifically for human image generation utilizing Direct Preference Optimization (DPO).
Specifically, we introduce an efficient method for constructing a specialized DPO dataset for training human image generation models without the need for costly human feedback.
Our method demonstrates its versatility and effectiveness in generating human images, including personalized text-to-image generation.
arXiv Detail & Related papers (2024-05-30T16:18:05Z)
- InceptionHuman: Controllable Prompt-to-NeRF for Photorealistic 3D Human Generation [61.62346472443454]
InceptionHuman is a prompt-to-NeRF framework that allows easy control via a combination of prompts in different modalities to generate photorealistic 3D humans.
InceptionHuman achieves consistent 3D human generation within a progressively refined NeRF space.
arXiv Detail & Related papers (2023-11-27T15:49:41Z)
- HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion [114.15397904945185]
We propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts.
Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network.
Our framework yields the state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.
arXiv Detail & Related papers (2023-10-12T17:59:34Z)
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face "Janus" problem.
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.