RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images
- URL: http://arxiv.org/abs/2409.03644v2
- Date: Wed, 13 Nov 2024 01:45:31 GMT
- Title: RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images
- Authors: Benzhi Wang, Jingkai Zhou, Jingqi Bai, Yang Yang, Weihua Chen, Fan Wang, Zhen Lei
- Abstract summary: We propose a novel post-processing solution named RealisHuman.
First, it generates realistic human parts, such as hands or faces, using the original malformed parts as references.
Second, it seamlessly integrates the rectified human parts back into their corresponding positions.
- Score: 24.042262870735087
- Abstract: In recent years, diffusion models have revolutionized visual generation, outperforming traditional frameworks like Generative Adversarial Networks (GANs). However, generating images of humans with realistic semantic parts, such as hands and faces, remains a significant challenge due to their intricate structural complexity. To address this issue, we propose a novel post-processing solution named RealisHuman. The RealisHuman framework operates in two stages. First, it generates realistic human parts, such as hands or faces, using the original malformed parts as references, ensuring consistent details with the original image. Second, it seamlessly integrates the rectified human parts back into their corresponding positions by repainting the surrounding areas to ensure smooth and realistic blending. The RealisHuman framework significantly enhances the realism of human generation, as demonstrated by notable improvements in both qualitative and quantitative metrics. Code is available at https://github.com/Wangbenzhi/RealisHuman.
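Read operationally, the abstract describes a detect-refine-blend loop. Below is a minimal sketch of that two-stage flow; `detect_part_boxes`, `refine_part`, and `repaint_boundary` are hypothetical stand-ins (not the released API) for the part localizer, the Stage 1 reference-guided part generator, and the Stage 2 repainting model.

```python
from typing import Callable, Iterable, Tuple
from PIL import Image

Box = Tuple[int, int, int, int]  # (left, upper, right, lower)

def realishuman_refine(
    image: Image.Image,
    detect_part_boxes: Callable[[Image.Image], Iterable[Box]],
    refine_part: Callable[[Image.Image], Image.Image],
    repaint_boundary: Callable[[Image.Image, Box], Image.Image],
) -> Image.Image:
    """Post-process a generated image by refining malformed parts."""
    for box in detect_part_boxes(image):
        crop = image.crop(box)
        # Stage 1: regenerate a realistic part, using the malformed crop as
        # a reference so details stay consistent with the original image.
        refined = refine_part(crop)
        image.paste(refined.resize(crop.size), box)
        # Stage 2: repaint the area around the pasted part so the rectified
        # region blends smoothly with its surroundings.
        image = repaint_boundary(image, box)
    return image
```

The key design point from the abstract is that both stages are conditioned on the original image: Stage 1 on the malformed crop itself, Stage 2 on the surrounding context.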
Related papers
- Single Image, Any Face: Generalisable 3D Face Generation [59.9369171926757]
We propose a novel model, Gen3D-Face, which generates 3D human faces with unconstrained single image input.
To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images.
arXiv Detail & Related papers (2024-09-25T14:56:37Z)
- PSHuman: Photorealistic Single-view Human Reconstruction using Cross-Scale Diffusion [43.850899288337025]
PSHuman is a novel framework that explicitly reconstructs human meshes utilizing priors from the multiview diffusion model.
It is found that directly applying multiview diffusion on single-view human images leads to severe geometric distortions.
To enhance cross-view body shape consistency of varied human poses, we condition the generative model on parametric models like SMPL-X.
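A common way to implement this kind of conditioning is to render the SMPL-X body (e.g., as normal or semantic maps) and concatenate those renderings with the noisy latents along the channel dimension at each denoising step. The sketch below shows that generic pattern under assumed shapes; it is not PSHuman's exact architecture.

```python
import torch

def denoise_step(unet, noisy_views, t, smplx_maps, text_emb):
    # noisy_views: (B*V, C, H, W) noisy latents for V views of one person
    # smplx_maps:  (B*V, C_cond, H, W) renderings of the SMPL-X body prior
    x = torch.cat([noisy_views, smplx_maps], dim=1)    # channel-wise conditioning
    return unet(x, t, encoder_hidden_states=text_emb)  # predicted noise
```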
arXiv Detail & Related papers (2024-09-16T10:13:06Z)
- HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion [114.15397904945185]
We propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts.
Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network.
Our framework yields state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.
arXiv Detail & Related papers (2023-10-12T17:59:34Z)
- Novel View Synthesis of Humans using Differentiable Rendering [50.57718384229912]
We present a new approach for synthesizing novel views of people in new poses.
Our synthesis makes use of diffuse Gaussian primitives that represent the underlying skeletal structure of a human.
Rendering these primitives yields a high-dimensional latent image, which is then transformed into an RGB image by a decoder network.
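A rough sketch of the render-then-decode idea: project each primitive into the image plane, splat its features into a multi-channel latent image, and decode that latent to RGB. The code below uses simplified isotropic splats and assumed shapes purely for illustration.

```python
import torch

def splat_gaussians(means2d, feats, H=64, W=64, sigma=4.0):
    # means2d: (N, 2) projected primitive centers; feats: (N, C) per-primitive features
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float()               # (H, W, 2) pixel coords
    d2 = ((grid[None] - means2d[:, None, None]) ** 2).sum(-1)  # (N, H, W) sq. distances
    weights = torch.exp(-d2 / (2 * sigma**2))                  # isotropic Gaussian splat
    return torch.einsum("nc,nhw->chw", feats, weights)         # (C, H, W) latent image

# decoder: any conv network mapping (1, C, H, W) latents to (1, 3, H, W) RGB, e.g.
# rgb = decoder(splat_gaussians(means2d, feats).unsqueeze(0))
```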
arXiv Detail & Related papers (2023-03-28T10:48:33Z)
- Pose Guided Human Image Synthesis with Partially Decoupled GAN [25.800174118151638]
Pose Guided Human Image Synthesis (PGHIS) is a challenging task of transforming a human image from the reference pose to a target pose.
We propose a method that decouples the human body into several parts to guide the synthesis of a realistic image of the person.
In addition, we design a multi-head attention-based module for PGHIS.
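The summary does not specify the attention design, but a typical choice is to let tokens of the decoupled body parts attend to one another with standard multi-head attention. A generic sketch using PyTorch's built-in module (dimensions are assumptions):

```python
import torch
import torch.nn as nn

class PartAttention(nn.Module):
    """Let feature tokens of decoupled body parts exchange information."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, part_tokens):          # (B, num_parts, dim)
        out, _ = self.attn(part_tokens, part_tokens, part_tokens)
        return self.norm(part_tokens + out)  # residual connection + norm
```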
arXiv Detail & Related papers (2022-10-07T15:31:37Z)
- DeepPortraitDrawing: Generating Human Body Images from Freehand Sketches [75.4318318890065]
We present DeepPortraitDrawing, a framework for converting roughly drawn sketches to realistic human body images.
To encode complicated body shapes under various poses, we take a local-to-global approach.
Our method produces more realistic images than the state-of-the-art sketch-to-image synthesis techniques.
arXiv Detail & Related papers (2022-05-04T14:02:45Z)
- Realistic Full-Body Anonymization with Surface-Guided GANs [7.37907896341367]
We propose a new anonymization method that generates realistic humans for in-the-wild images.
A key part of our design is to guide adversarial nets by dense pixel-to-surface correspondences between an image and a canonical 3D surface.
We demonstrate that surface guidance significantly improves image quality and diversity of samples, yielding a highly practical generator.
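In practice, such dense guidance is often supplied as DensePose-style correspondence maps concatenated with the masked input before the generator. The sketch below illustrates that generic conditioning pattern, not the paper's exact network:

```python
import torch

def anonymize(generator, image, body_mask, iuv_map):
    # image:     (B, 3, H, W) original photo
    # body_mask: (B, 1, H, W) region to replace
    # iuv_map:   (B, 3, H, W) dense pixel-to-surface correspondences
    masked = image * (1 - body_mask)                      # remove the real person
    cond = torch.cat([masked, body_mask, iuv_map], dim=1)
    synth = generator(cond)                               # surface-guided synthesis
    return masked + synth * body_mask                     # composite the result
```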
arXiv Detail & Related papers (2022-01-06T18:57:59Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- Learning Inverse Rendering of Faces from Real-world Videos [52.313931830408386]
Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption that albedo and normals remain consistent across frames.
Our network is trained on both real and synthetic data, benefiting from both.
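As a rough illustration, the consistency assumption can be written as a loss over two (already aligned) frames of the same video: predicted albedo and normals should agree while lighting is free to vary. This is a minimal sketch, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def consistency_loss(albedo_a, albedo_b, normal_a, normal_b):
    # Albedo and surface normals of the same face should agree across
    # frames; illumination is intentionally left unconstrained.
    l_albedo = F.l1_loss(albedo_a, albedo_b)
    l_normal = (1.0 - F.cosine_similarity(normal_a, normal_b, dim=1)).mean()
    return l_albedo + l_normal
```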
arXiv Detail & Related papers (2020-03-26T17:26:40Z)