Personalized Restoration via Dual-Pivot Tuning
- URL: http://arxiv.org/abs/2312.17234v1
- Date: Thu, 28 Dec 2023 18:57:49 GMT
- Title: Personalized Restoration via Dual-Pivot Tuning
- Authors: Pradyumna Chari, Sizhuo Ma, Daniil Ostashev, Achuta Kadambi,
Gurunandan Krishnan, Jian Wang, Kfir Aberman
- Abstract summary: We propose a simple, yet effective, method for personalized restoration, called Dual-Pivot Tuning.
Our key observation is that for optimal personalization, the generative model should be tuned around a fixed text pivot.
This approach ensures that personalization does not interfere with the restoration process, resulting in a natural appearance with high fidelity to the person's identity and the attributes of the degraded image.
- Score: 18.912158172904654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative diffusion models can serve as a prior which ensures that solutions
of image restoration systems adhere to the manifold of natural images. However,
for restoring facial images, a personalized prior is necessary to accurately
represent and reconstruct unique facial features of a given individual. In this
paper, we propose a simple, yet effective, method for personalized restoration,
called Dual-Pivot Tuning - a two-stage approach that personalizes a blind
restoration system while maintaining the integrity of the general prior and the
distinct role of each component. Our key observation is that for optimal
personalization, the generative model should be tuned around a fixed text
pivot, while the guiding network should be tuned in a generic
(non-personalized) manner, using the personalized generative model as a fixed
``pivot". This approach ensures that personalization does not interfere with
the restoration process, resulting in a natural appearance with high fidelity
to the person's identity and the attributes of the degraded image. We evaluated
our approach both qualitatively and quantitatively through extensive
experiments with images of widely recognized individuals, comparing it against
relevant baselines. Surprisingly, we found that our personalized prior not only
achieves higher fidelity to the person's identity, but
also outperforms state-of-the-art generic priors in terms of general image
quality. Project webpage: https://personalized-restoration.github.io
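The abstract describes a two-stage scheme: first the generative prior is personalized around a fixed text pivot, then the guiding (restoration) network is tuned on generic data while the personalized prior is held fixed as the second pivot. The following is a minimal, self-contained PyTorch sketch of that two-stage structure only; the module definitions, dimensions, simplified denoising loss, and random stand-in data are hypothetical and are not the paper's implementation.

```python
# Toy sketch of the two stages of dual-pivot tuning, assuming a text-conditioned
# generative prior plus a guiding network that injects degraded-image features.
# All modules, shapes, and data below are illustrative stand-ins.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Stand-in for the text-conditioned generative (diffusion) prior."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim * 3, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x_noisy, text_emb, guidance):
        return self.net(torch.cat([x_noisy, text_emb, guidance], dim=-1))

class GuidingNetwork(nn.Module):
    """Stand-in for the network that conditions the prior on the degraded input."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, degraded):
        return self.net(degraded)

dim = 64
denoiser, guide = Denoiser(dim), GuidingNetwork(dim)
text_pivot = torch.randn(1, dim)  # fixed embedding of the text pivot, e.g. "a photo of a [v] person"

# Stage 1: personalize the generative prior around the fixed text pivot by
# fine-tuning on a few photos of the person. The guiding network plays no role
# in this toy stage, so a zero conditioning signal stands in for it.
opt1 = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
for step in range(100):
    clean = torch.randn(4, dim)                      # stand-in for reference photos of the person
    noisy = clean + 0.3 * torch.randn_like(clean)
    pred = denoiser(noisy, text_pivot.expand(4, -1), torch.zeros(4, dim))
    loss = ((pred - clean) ** 2).mean()              # simplified denoising objective
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: tune the guiding network on generic (non-personalized) degraded/clean
# pairs, treating the personalized generative model as a fixed "pivot".
for p in denoiser.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.Adam(guide.parameters(), lr=1e-4)
for step in range(100):
    clean = torch.randn(4, dim)                      # stand-in for generic faces, not person-specific
    degraded = clean + 0.8 * torch.randn_like(clean)
    noisy = clean + 0.3 * torch.randn_like(clean)
    pred = denoiser(noisy, text_pivot.expand(4, -1), guide(degraded))
    loss = ((pred - clean) ** 2).mean()
    opt2.zero_grad(); loss.backward(); opt2.step()
```

The intended property, per the abstract, is that identity-specific information enters only in stage 1, while stage 2 sees only generic degraded/clean pairs, so personalization does not interfere with the restoration behavior itself.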
Related papers
- Imagine yourself: Tuning-Free Personalized Image Generation [39.63411174712078] (2024-09-20)
We introduce Imagine yourself, a state-of-the-art model designed for personalized image generation.
It operates as a tuning-free model, enabling all users to leverage a shared framework without individualized adjustments.
Our study demonstrates that Imagine yourself surpasses the state-of-the-art personalization model, exhibiting superior capabilities in identity preservation, visual quality, and text alignment.
arXiv Detail & Related papers (2024-09-20T09:21:49Z) - JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation [49.997839600988875]
Existing personalization methods rely on finetuning a text-to-image foundation model on a user's custom dataset.
We propose Joint-Image Diffusion (jedi), an effective technique for learning a finetuning-free personalization model.
Our model achieves state-of-the-art generation quality, both quantitatively and qualitatively, significantly outperforming both the prior finetuning-based and finetuning-free personalization baselines.
arXiv Detail & Related papers (2024-07-08T17:59:02Z) - PFStorer: Personalized Face Restoration and Super-Resolution [19.479263766534345]
Recent developments in face restoration have achieved remarkable results in producing high-quality and lifelike outputs.
However, these results often fail to be faithful to the identity of the person, because the models lack the necessary context.
In our approach, a restoration model is personalized using a few images of the identity, leading to restoration tailored to the identity while retaining fine-grained details.
arXiv Detail & Related papers (2024-03-13T11:39:30Z) - Restoration by Generation with Constrained Priors [25.906981634736795]
We propose a method that adapts a pretrained diffusion model for image restoration by simply adding noise to the input image to be restored and then denoising it (see the sketch after this list).
We show superior performance on multiple real-world restoration datasets in preserving identity and image quality.
This approach allows us to produce results that accurately preserve high-frequency details, which previous works are unable to do.
arXiv Detail & Related papers (2023-12-28T17:50:54Z) - PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved
Personalization [92.90392834835751]
PortraitBooth is designed for high efficiency, robust identity preservation, and expression-editable text-to-image generation.
PortraitBooth eliminates computational overhead and mitigates identity distortion.
It incorporates emotion-aware cross-attention control for diverse facial expressions in generated images.
arXiv Detail & Related papers (2023-12-11T13:03:29Z) - FaceStudio: Put Your Face Everywhere in Seconds [23.381791316305332]
Identity-preserving image synthesis seeks to maintain a subject's identity while adding a personalized, stylistic touch.
Traditional methods, such as Textual Inversion and DreamBooth, have made strides in custom image creation.
Our research introduces a novel approach to identity-preserving synthesis, with a particular focus on human images.
arXiv Detail & Related papers (2023-12-05T11:02:45Z) - Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, whether training models directly on degraded images or on counterparts enhanced by face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z) - Identity Encoder for Personalized Diffusion [57.1198884486401]
We propose an encoder-based approach for personalization.
We learn an identity encoder which can extract an identity representation from a set of reference images of a subject.
We show that our approach consistently outperforms existing fine-tuning-based approaches in both image generation and reconstruction.
arXiv Detail & Related papers (2023-04-14T23:32:24Z) - MetaPortrait: Identity-Preserving Talking Head Generation with Fast
Personalized Adaptation [57.060828009199646]
We propose an ID-preserving talking head generation framework.
We claim that dense landmarks are crucial to achieving accurate geometry-aware flow fields.
We adaptively fuse the source identity during synthesis, so that the network better preserves the key characteristics of the image portrait.
arXiv Detail & Related papers (2022-12-15T18:59:33Z) - Learning Dual Memory Dictionaries for Blind Face Restoration [75.66195723349512]
Recent works mainly treat the two aspects, i.e., generic and specific restoration, separately.
This paper proposes DMDNet, which explicitly memorizes the generic and specific features through dual dictionaries.
A new high-quality dataset, termed CelebRef-HQ, is constructed to promote the exploration of specific face restoration in the high-resolution space.
arXiv Detail & Related papers (2022-10-15T01:55:41Z) - MyStyle: A Personalized Generative Prior [38.3436972491162]
We introduce MyStyle, a personalized deep generative prior trained with a few shots of an individual.
MyStyle makes it possible to reconstruct, enhance, and edit images of a specific person.
arXiv Detail & Related papers (2022-03-31T17:59:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.