ToonAging: Face Re-Aging upon Artistic Portrait Style Transfer
- URL: http://arxiv.org/abs/2402.02733v4
- Date: Tue, 28 May 2024 05:00:41 GMT
- Title: ToonAging: Face Re-Aging upon Artistic Portrait Style Transfer
- Authors: Bumsoo Kim, Abdul Muqeet, Kyuchul Lee, Sanghyun Seo
- Abstract summary: We introduce a novel one-stage method for face re-aging combined with portrait style transfer.
We leverage existing face re-aging and style transfer networks, both trained within the same photorealistic (PR) domain.
Our method offers greater flexibility compared to domain-level fine-tuning approaches.
- Score: 6.305926064192544
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Face re-aging is a prominent field in computer vision and graphics, with significant applications in photorealistic domains such as movies, advertising, and live streaming. Recently, the need to apply face re-aging to non-photorealistic images, like comics, illustrations, and animations, has emerged as an extension in various entertainment sectors. However, the lack of a network that can seamlessly edit the apparent age in NPR images has limited these tasks to a naive, sequential approach. This often results in unpleasant artifacts and a loss of facial attributes due to domain discrepancies. In this paper, we introduce a novel one-stage method for face re-aging combined with portrait style transfer, executed in a single generative step. We leverage existing face re-aging and style transfer networks, both trained within the same PR domain. Our method uniquely fuses distinct latent vectors, each responsible for managing aging-related attributes and NPR appearance. By adopting an exemplar-based approach, our method offers greater flexibility compared to domain-level fine-tuning approaches, which typically require separate training or fine-tuning for each domain. This effectively addresses the limitation of requiring paired datasets for re-aging and domain-level, data-driven approaches for stylization. Our experiments show that our model can effortlessly generate re-aged images while simultaneously transferring the style of examples, maintaining both natural appearance and controllability.
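The abstract describes fusing two latent vectors, one controlling aging-related attributes and one controlling NPR appearance, in a single generative step. A minimal sketch of such per-layer latent fusion is shown below, in the style of StyleGAN-like W+ codes; the layer count, crossover index, and blending weight are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch: per-layer fusion of two W+-style latent codes.
# The crossover index and blend weight are illustrative assumptions.

def fuse_latents(aging_w, style_w, crossover=8, blend=1.0):
    """Combine two latent codes layer-wise.

    aging_w, style_w: lists of per-layer latent vectors (lists of floats).
    Layers below `crossover` (coarse: geometry, age-related shape) are taken
    from the re-aging code; layers at or after it (fine: texture, NPR
    appearance) are blended toward the style code with weight `blend`.
    """
    assert len(aging_w) == len(style_w), "codes must have the same depth"
    fused = []
    for i, (a, s) in enumerate(zip(aging_w, style_w)):
        if i < crossover:
            fused.append(list(a))  # keep aging-controlled coarse layers
        else:
            # interpolate fine layers toward the style exemplar's code
            fused.append([(1 - blend) * av + blend * sv
                          for av, sv in zip(a, s)])
    return fused

# Toy example: 10 layers of 4-dimensional vectors
aging = [[0.0] * 4 for _ in range(10)]
style = [[1.0] * 4 for _ in range(10)]
fused = fuse_latents(aging, style, crossover=8, blend=1.0)
```

Because the fusion operates on an exemplar's latent code rather than on the generator's weights, a new style requires only a new code, not domain-level fine-tuning, which is the flexibility the abstract emphasizes.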
Related papers
- Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Because our approach is relatively unified, it is resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z) - PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization [92.90392834835751]
PortraitBooth is designed for high efficiency, robust identity preservation, and expression-editable text-to-image generation.
PortraitBooth eliminates computational overhead and mitigates identity distortion.
It incorporates emotion-aware cross-attention control for diverse facial expressions in generated images.
arXiv Detail & Related papers (2023-12-11T13:03:29Z) - Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, whether training models directly on degraded images or on counterparts enhanced with face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z) - Face Aging via Diffusion-based Editing [5.318584973533008]
We propose FADING, a novel approach to address Face Aging via DIffusion-based editiNG.
We go beyond existing methods by leveraging the rich prior of large-scale language-image diffusion models.
Our method outperforms existing approaches with respect to aging accuracy, attribute preservation, and aging quality.
arXiv Detail & Related papers (2023-09-20T13:47:10Z) - Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaption aims to transfer a pre-trained generator on one domain to a new domain using one reference image only.
We present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z) - SelFSR: Self-Conditioned Face Super-Resolution in the Wild via Flow Field Degradation Network [12.976199676093442]
We propose a novel domain-adaptive degradation network for face super-resolution in the wild.
Our model achieves state-of-the-art performance on both CelebA and real-world face dataset.
arXiv Detail & Related papers (2021-12-20T17:04:00Z) - Few-shot Image Generation via Cross-domain Correspondence [98.2263458153041]
Training generative models, such as GANs, on a target domain containing limited examples can easily result in overfitting.
In this work, we seek to utilize a large source domain for pretraining and transfer the diversity information from source to target.
To further reduce overfitting, we present an anchor-based strategy to encourage different levels of realism over different regions in the latent space.
arXiv Detail & Related papers (2021-04-13T17:59:35Z) - Only a Matter of Style: Age Transformation Using a Style-Based Regression Model [46.48263482909809]
We present an image-to-image translation method that learns to encode real facial images into the latent space of a pre-trained unconditional GAN.
We employ a pre-trained age regression network used to explicitly guide the encoder in generating the latent codes corresponding to the desired age.
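This entry describes steering a GAN-inversion encoder with a frozen, pre-trained age regressor so that the produced latent codes match a target age. A minimal sketch of such an age-guided objective is shown below; the specific loss terms, weights, and age scale are illustrative placeholders, not the paper's actual formulation.

```python
# Hedged sketch of an age-guided encoder objective: a frozen age regressor's
# prediction on the generated image penalizes deviation from the target age.
# Term names and weights here are illustrative assumptions.

def age_guided_loss(recon_err, id_err, predicted_age, target_age,
                    w_id=0.1, w_age=0.01):
    """Combine reconstruction, identity, and age-guidance terms.

    recon_err:     pixel/perceptual reconstruction error of the inversion.
    id_err:        identity-preservation error (e.g. face-embedding distance).
    predicted_age: frozen age regressor's estimate on the generated image.
    target_age:    the desired age used as the editing target.
    """
    age_err = (predicted_age - target_age) ** 2
    return recon_err + w_id * id_err + w_age * age_err

# Toy numbers: generated face is judged 30, target age is 60
loss = age_guided_loss(recon_err=0.4, id_err=0.2,
                       predicted_age=30, target_age=60)
```

The key design choice this mirrors is that the age regressor stays frozen: it acts purely as a differentiable critic, so only the encoder learns where in the latent space each target age lives.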
arXiv Detail & Related papers (2021-02-04T17:33:28Z) - Lifespan Age Transformation Synthesis [40.963816368819415]
We propose a novel image-to-image generative adversarial network architecture.
Our framework can predict a full head portrait for ages 0-70 from a single photo.
arXiv Detail & Related papers (2020-03-21T22:48:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.