StyleRetoucher: Generalized Portrait Image Retouching with GAN Priors
- URL: http://arxiv.org/abs/2312.14389v1
- Date: Fri, 22 Dec 2023 02:32:19 GMT
- Title: StyleRetoucher: Generalized Portrait Image Retouching with GAN Priors
- Authors: Wanchao Su, Can Wang, Chen Liu, Hangzhou Han, Hongbo Fu, Jing Liao
- Abstract summary: StyleRetoucher is a novel automatic portrait image retouching framework.
Our method improves an input portrait image's skin condition while preserving its facial details.
We propose a novel blemish-aware feature selection mechanism to effectively identify and remove skin blemishes.
- Score: 30.000584682643183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating fine-retouched portrait images is tedious and time-consuming even
for professional artists. There exist automatic retouching methods, but they
either suffer from over-smoothing artifacts or lack generalization ability. To
address such issues, we present StyleRetoucher, a novel automatic portrait
image retouching framework, leveraging StyleGAN's generation and generalization
ability to improve an input portrait image's skin condition while preserving
its facial details. Harnessing the priors of a pretrained StyleGAN, our method
shows superior robustness: (a) it performs stably with fewer training samples,
and (b) it generalizes well to out-of-domain data. Moreover, by blending the
spatial features of the input image with intermediate features of the StyleGAN
layers, our method preserves the input characteristics to the largest extent.
We further propose a novel blemish-aware feature selection mechanism to
effectively identify and remove skin blemishes, improving the skin condition
of the image. Qualitative and quantitative evaluations validate the strong
generalization capability of our method. Further experiments show
StyleRetoucher's superior performance over alternative solutions on the image
retouching task. We also conduct a user perception study that confirms the
superior retouching performance of our method over existing
state-of-the-art alternatives.
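The blending step described in the abstract, mixing the input image's spatial features with intermediate StyleGAN features so that blemish regions take the GAN prior while facial details are preserved, can be sketched as a mask-weighted interpolation. This is a minimal illustration only; the function name, tensor shapes, and the soft blemish mask are assumptions, not the paper's implementation:

```python
import numpy as np

def blend_features(input_feat, gan_feat, blemish_mask):
    """Mask-weighted blend of input spatial features with StyleGAN-layer
    features (hypothetical interface; shapes are assumptions).

    input_feat, gan_feat: (C, H, W) feature maps at the same resolution.
    blemish_mask: (H, W) map in [0, 1]; 1 marks blemish regions where the
    GAN prior should dominate, 0 marks detail regions to preserve.
    """
    m = blemish_mask[None, :, :]  # broadcast the mask over the channel axis
    return (1.0 - m) * input_feat + m * gan_feat

# Toy usage on a 1x2x2 feature map: the masked pixel takes the GAN value,
# while unmasked pixels keep the input's features.
inp = np.zeros((1, 2, 2))
gan = np.ones((1, 2, 2))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
out = blend_features(inp, gan, mask)
```

In the paper this selection is learned (the "blemish-aware feature selection mechanism") rather than given as a fixed mask; the sketch only shows how such a per-pixel weighting combines the two feature sources.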
Related papers
- AuthFace: Towards Authentic Blind Face Restoration with Face-oriented Generative Diffusion Prior [13.27748226506837]
Blind face restoration (BFR) is a fundamental and challenging problem in computer vision.
Recent research endeavors rely on facial image priors from the powerful pretrained text-to-image (T2I) diffusion models.
We propose AuthFace, which achieves highly authentic face restoration results by exploring a face-oriented generative diffusion prior.
arXiv Detail & Related papers (2024-10-13T14:56:13Z)
- Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model [55.46927355649013]
We introduce a novel Multi-modal Guided Real-World Face Restoration technique.
MGFR can mitigate the generation of false facial attributes and identities.
We present the Reface-HQ dataset, comprising over 23,000 high-resolution facial images across 5,000 identities.
arXiv Detail & Related papers (2024-10-05T13:46:56Z)
- Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Because our approach is unified, it is resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- Portrait Diffusion: Training-free Face Stylization with Chain-of-Painting [64.43760427752532]
Face stylization refers to the transformation of a face into a specific portrait style.
Current methods require the use of example-based adaptation approaches to fine-tune pre-trained generative models.
This paper proposes a training-free face stylization framework, named Portrait Diffusion.
arXiv Detail & Related papers (2023-12-03T06:48:35Z)
- MagiCapture: High-Resolution Multi-Concept Portrait Customization [34.131515004434846]
MagiCapture is a personalization method for integrating subject and style concepts to generate high-resolution portrait images.
We present a novel Attention Refocusing loss coupled with auxiliary priors, both of which facilitate robust learning within this weakly supervised learning setting.
Our pipeline also includes additional post-processing steps to ensure the creation of highly realistic outputs.
arXiv Detail & Related papers (2023-09-13T11:37:04Z)
- ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z)
- Retrieve in Style: Unsupervised Facial Feature Transfer and Retrieval [17.833454714281757]
Retrieve in Style (RIS) is an unsupervised framework for fine-grained facial feature transfer and retrieval on real images.
RIS achieves both high-fidelity feature transfers and accurate fine-grained retrievals on real images.
arXiv Detail & Related papers (2021-07-13T17:31:34Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.