Realistic Hair Synthesis with Generative Adversarial Networks
- URL: http://arxiv.org/abs/2209.12875v1
- Date: Tue, 13 Sep 2022 11:48:26 GMT
- Title: Realistic Hair Synthesis with Generative Adversarial Networks
- Authors: Muhammed Pektas, Aybars Ugur
- Abstract summary: In this thesis, a generative adversarial network method is proposed to solve the hair synthesis problem.
While developing this method, the aim was to achieve real-time hair synthesis with visual outputs that compete with the best methods in the literature.
- Score: 1.8275108630751844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent successes in generative modeling have accelerated studies on this
subject and attracted the attention of researchers. One of the most important
methods behind this success is the Generative Adversarial Network (GAN).
GANs have many application areas, such as virtual reality (VR), augmented
reality (AR), super-resolution, and image enhancement. Despite recent
advances in hair synthesis and style transfer using deep learning and
generative modeling, the field still contains unsolved challenges due to
the complex nature of hair. The methods
proposed in the literature to solve this problem generally focus on making
high-quality hair edits on images. In this thesis, a generative adversarial
network method is proposed to solve the hair synthesis problem. The method
is designed to achieve real-time hair synthesis while producing visual
outputs that compete with the best methods in the literature.
The proposed method was trained on the FFHQ dataset and then evaluated on
hairstyle transfer and hair reconstruction tasks. The results on these
tasks, as well as the method's runtime, were compared with MichiGAN, one of
the best methods in the literature, at a resolution of 128x128. The
comparison shows that the proposed method achieves results competitive with
MichiGAN in terms of realistic hair synthesis while performing better in
terms of runtime.
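The abstract does not detail the architecture, so as orientation only, here is a minimal sketch of the adversarial training step that any such GAN-based method builds on, written in PyTorch at the 128x128 resolution used for the comparison. The `Generator` and `Discriminator` bodies below are placeholder assumptions, not the thesis's networks.

```python
# Minimal sketch of one GAN training step at 128x128, the resolution used
# for the comparison with MichiGAN. The network bodies are placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=512):
        super().__init__()
        # Placeholder: map a latent vector to a 3x128x128 image.
        self.net = nn.Sequential(nn.Linear(z_dim, 3 * 128 * 128), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, 128, 128)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder: score how real a 3x128x128 image looks.
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128 * 128, 1))
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (B, 3, 128, 128) batch, e.g. from FFHQ
    b = real.size(0)
    z = torch.randn(b, 512)
    # Discriminator: push real images toward 1, generated images toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(b, 1)) + \
             bce(D(G(z).detach()), torch.zeros(b, 1))
    loss_d.backward()
    opt_d.step()
    # Generator: fool the discriminator into predicting 1.
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```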
Related papers
- Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors [75.24313405671433]
Diffusion-based image super-resolution (SR) methods have achieved remarkable success by leveraging large pre-trained text-to-image diffusion models as priors.
We introduce a novel one-step SR model, which significantly addresses the efficiency issue of diffusion-based SR methods.
Unlike existing fine-tuning strategies, we designed a degradation-guided Low-Rank Adaptation (LoRA) module specifically for SR.
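The entry names a degradation-guided LoRA module without spelling it out. As a hedged illustration of the underlying low-rank adaptation mechanism only, a generic LoRA wrapper might look like the sketch below; the degradation-guided conditioning that is the paper's actual contribution is not modeled here.

```python
# Generic LoRA sketch: a frozen base weight plus a trainable low-rank
# update, effectively W + (alpha/r) * B @ A. Mechanism illustration only;
# the paper's degradation-guided variant is not reproduced here.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # A is small random, B is zero, so the update starts at zero.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```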
arXiv Detail & Related papers (2024-09-25T16:15:21Z)
- HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach [3.737361598712633]
We present the HairFast model, which achieves high resolution, near real-time performance, and superior reconstruction.
Our solution includes a new architecture operating in the FS latent space of StyleGAN.
In the most difficult scenario of transferring both shape and color of a hairstyle from different images, our method performs in less than a second on the Nvidia V100.
arXiv Detail & Related papers (2024-04-01T12:59:49Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-training via Differentiable Rendering of Line Segments [23.71057752711745]
In the film and gaming industries, achieving a realistic hair appearance typically involves the use of strands originating from the scalp.
In this study, we propose an optimization-based approach that eliminates the need for pre-training.
Our method exhibits robust and accurate inverse rendering, surpassing the quality of existing methods and significantly improving processing speed.
arXiv Detail & Related papers (2024-03-26T08:53:25Z)
- SinSR: Diffusion-Based Image Super-Resolution in a Single Step [119.18813219518042]
Super-resolution (SR) methods based on diffusion models exhibit promising results.
But their practical application is hindered by the substantial number of required inference steps.
We propose a simple yet effective method for achieving single-step SR generation, named SinSR.
arXiv Detail & Related papers (2023-11-23T16:21:29Z)
- Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction [4.714310894654027]
This work proposes an approach capable of accurate hair geometry reconstruction at a strand level from a monocular video or multi-view images captured in uncontrolled conditions.
The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
arXiv Detail & Related papers (2023-06-09T13:08:34Z)
- IRGen: Generative Modeling for Image Retrieval [82.62022344988993]
In this paper, we present a novel methodology, reframing image retrieval as a variant of generative modeling.
We develop our model, dubbed IRGen, to address the technical challenge of converting an image into a concise sequence of semantic units.
Our model achieves state-of-the-art performance on three widely-used image retrieval benchmarks and two million-scale datasets.
arXiv Detail & Related papers (2023-03-17T17:07:36Z)
- Efficient Hair Style Transfer with Generative Adversarial Networks [7.312180925669325]
We propose a novel hairstyle transfer method, called EHGAN, which reduces computational costs to enable real-time processing.
To achieve this goal, we train an encoder and a low-resolution generator to transfer hairstyle and then, increase the resolution of results with a pre-trained super-resolution model.
EHGAN runs around 2.7 times faster than the state-of-the-art MichiGAN and over 10,000 times faster than LOHO.
arXiv Detail & Related papers (2022-10-22T18:56:16Z)
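EHGAN's stated two-stage design (a cheap low-resolution hairstyle transfer followed by a frozen, pre-trained super-resolution model) can be summarized as a short pipeline. The sketch below is an assumed interface, not the authors' code; all module names are hypothetical.

```python
# Assumed interface for the two-stage pipeline described in the EHGAN
# abstract: an encoder + low-resolution generator do the hairstyle
# transfer cheaply, then a pre-trained SR model upsamples the result.
import torch
import torch.nn as nn

def transfer_hairstyle(face, hair_ref, encoder: nn.Module,
                       generator: nn.Module, sr_model: nn.Module):
    """face/hair_ref: image tensors; all three modules are hypothetical."""
    # Stage 1: encode the source face and hairstyle reference, then
    # generate a low-resolution edit (cheap, hence near real-time).
    code = encoder(torch.cat([face, hair_ref], dim=1))
    low_res = generator(code)
    # Stage 2: upsample with a pre-trained super-resolution network
    # (frozen; used here at inference time).
    with torch.no_grad():
        return sr_model(low_res)
```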
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, the complex physical interaction and its non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
- HairCLIP: Design Your Hair by Text and Reference Image [100.85116679883724]
This paper proposes a new hair editing interaction mode, which enables manipulating hair attributes individually or jointly.
We encode the image and text conditions in a shared embedding space and propose a unified hair editing framework.
With the carefully designed network structures and loss functions, our framework can perform high-quality hair editing.
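As a rough sketch of the "shared embedding space" idea, the image condition and the text condition can each be projected into one common space so that a single editing network consumes either or both. Everything below (dimensions, module names) is an assumption, not HairCLIP's implementation.

```python
# Sketch of shared-space conditioning: image and text conditions are
# projected into one embedding space, so the same editing network can
# take either modality individually or both jointly. All names/dims
# here are hypothetical.
import torch
import torch.nn as nn

class SharedConditionEncoder(nn.Module):
    def __init__(self, img_dim=768, txt_dim=512, shared_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, shared_dim)  # reference-image path
        self.txt_proj = nn.Linear(txt_dim, shared_dim)  # text-prompt path

    def forward(self, img_feat=None, txt_feat=None):
        # Either modality (or both) yields a condition vector in the
        # same space, enabling individual or joint hair-attribute edits.
        conds = []
        if img_feat is not None:
            conds.append(self.img_proj(img_feat))
        if txt_feat is not None:
            conds.append(self.txt_proj(txt_feat))
        assert conds, "need at least one condition"
        return torch.stack(conds).mean(dim=0)
```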
arXiv Detail & Related papers (2021-12-09T18:59:58Z)
- A Simple Baseline for StyleGAN Inversion [133.5868210969111]
StyleGAN inversion plays an essential role in enabling the pretrained StyleGAN to be used for real facial image editing tasks.
Existing optimization-based methods can produce high quality results, but the optimization often takes a long time.
We present a new feed-forward network for StyleGAN inversion, with significant improvement in terms of efficiency and quality.
arXiv Detail & Related papers (2021-04-15T17:59:49Z)
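The baseline's core idea, replacing slow per-image latent optimization with one feed-forward encoder pass, reduces inversion to a single forward call. A hedged sketch of that training objective follows; the encoder, frozen generator, and loss weighting are assumptions.

```python
# Sketch of encoder-based StyleGAN inversion: instead of optimizing a
# latent per image, train an encoder so that generator(encoder(x))
# reconstructs x in one forward pass. The generator is a pre-trained
# StyleGAN with requires_grad=False on its parameters; gradients still
# flow through it to the encoder.
import torch.nn.functional as F

def inversion_loss(x, encoder, generator, perceptual, lam: float = 0.8):
    w = encoder(x)        # predict latent code(s) for image x
    x_hat = generator(w)  # frozen generator decodes the latents
    # Pixel reconstruction plus a perceptual term (e.g. an LPIPS-style
    # module passed in as `perceptual`); lam is an assumed weight.
    return F.mse_loss(x_hat, x) + lam * perceptual(x_hat, x)
```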