Self-Distilled StyleGAN: Towards Generation from Internet Photos
- URL: http://arxiv.org/abs/2202.12211v1
- Date: Thu, 24 Feb 2022 17:16:47 GMT
- Title: Self-Distilled StyleGAN: Towards Generation from Internet Photos
- Authors: Ron Mokady, Michal Yarom, Omer Tov, Oran Lang, Daniel Cohen-Or, Tali
Dekel, Michal Irani, Inbar Mosseri
- Abstract summary: We show how StyleGAN can be adapted to work on raw uncurated images collected from the Internet.
We propose a StyleGAN-based self-distillation approach, which consists of two main components.
The presented technique enables the generation of high-quality images, while minimizing the loss in diversity of the data.
- Score: 47.28014076401117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: StyleGAN is known to produce high-fidelity images, while also offering
unprecedented semantic editing. However, these fascinating abilities have been
demonstrated only on a limited set of datasets, which are usually structurally
aligned and well curated. In this paper, we show how StyleGAN can be adapted to
work on raw uncurated images collected from the Internet. Such image
collections impose two main challenges to StyleGAN: they contain many outlier
images, and are characterized by a multi-modal distribution. Training StyleGAN
on such raw image collections results in degraded image synthesis quality. To
meet these challenges, we propose a StyleGAN-based self-distillation approach,
which consists of two main components: (i) A generative-based self-filtering of
the dataset to eliminate outlier images, in order to generate an adequate
training set, and (ii) Perceptual clustering of the generated images to detect
the inherent data modalities, which are then employed to improve StyleGAN's
"truncation trick" in the image synthesis process. The presented technique
enables the generation of high-quality images, while minimizing the loss in
diversity of the data. Through qualitative and quantitative evaluation, we
demonstrate the power of our approach to new challenging and diverse domains
collected from the Internet. New datasets and pre-trained models are available
at https://self-distilled-stylegan.github.io/ .
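The cluster-aware variant of the truncation trick in component (ii) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes `centroids` are cluster centers obtained by clustering latent codes of generated images (e.g. with k-means over perceptual features), and it truncates each latent toward its nearest centroid instead of the single global mean latent.

```python
import numpy as np

def multimodal_truncation(w, centroids, psi=0.7):
    """Truncate latent code `w` toward its nearest cluster centroid.

    Standard truncation interpolates toward one global average latent;
    anchoring at per-cluster centers preserves multi-modal diversity.
    `psi` in [0, 1]: lower values mean stronger truncation.
    """
    w = np.asarray(w, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    # pick the centroid closest to w in latent space
    nearest = centroids[np.argmin(np.linalg.norm(centroids - w, axis=1))]
    # usual truncation formula, but anchored at the cluster center
    return nearest + psi * (w - nearest)

# example: a latent near the second mode is pulled toward that mode,
# not toward the global mean between the two modes
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
truncated = multimodal_truncation([9.0, 9.0], centroids, psi=0.5)
```

With a global-mean truncation, latents from minority modes collapse toward the dominant mode; anchoring at the nearest centroid keeps each sample within its own modality, which is the diversity-preserving behavior the abstract describes.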
Related papers
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation [49.997839600988875]
Existing personalization methods rely on finetuning a text-to-image foundation model on a user's custom dataset.
We propose Joint-Image Diffusion (JeDi), an effective technique for learning a finetuning-free personalization model.
Our model achieves state-of-the-art generation quality, both quantitatively and qualitatively, significantly outperforming both the prior finetuning-based and finetuning-free personalization baselines.
arXiv Detail & Related papers (2024-07-08T17:59:02Z)
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Style-Extracting Diffusion Models for Semi-Supervised Histopathology Segmentation [6.479933058008389]
Style-Extracting Diffusion Models generate images with unseen characteristics beneficial for downstream tasks.
In this work, we show the capability of our method on a natural image dataset as a proof-of-concept.
We verify the added value of the generated images by showing improved segmentation results and lower performance variability between patients.
arXiv Detail & Related papers (2024-03-21T14:36:59Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets [35.11248114153497]
StyleGAN sets new standards for generative modeling regarding image quality and controllability.
Our final model, StyleGAN-XL, sets a new state-of-the-art on large-scale image synthesis and is the first to generate images at a resolution of $1024^2$ at such a dataset scale.
arXiv Detail & Related papers (2022-02-01T08:22:34Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Unlabeled Data Guided Semi-supervised Histopathology Image Segmentation [34.45302976822067]
Semi-supervised learning (SSL) based on generative methods has been proven to be effective in utilizing diverse image characteristics.
We propose a new data guided generative method for histopathology image segmentation by leveraging the unlabeled data distributions.
Our method is evaluated on glands and nuclei datasets.
arXiv Detail & Related papers (2020-12-17T02:54:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.