TransferI2I: Transfer Learning for Image-to-Image Translation from Small
Datasets
- URL: http://arxiv.org/abs/2105.06219v2
- Date: Fri, 14 May 2021 07:14:12 GMT
- Authors: Yaxing Wang, Hector Laria Mantecon, Joost van de Weijer, Laura
Lopez-Fuentes, Bogdan Raducanu
- Abstract summary: Image-to-image (I2I) translation has matured in recent years and is able to generate high-quality realistic images.
Existing methods use transfer learning for I2I translation, but they still require learning millions of parameters from scratch.
We propose a new transfer learning method for I2I translation (TransferI2I).
- Score: 35.84311497205075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-to-image (I2I) translation has matured in recent years and is able to
generate high-quality realistic images. However, despite current success, it
still faces important challenges when applied to small domains. Existing
methods use transfer learning for I2I translation, but they still require the
learning of millions of parameters from scratch. This drawback severely limits
their application to small domains. In this paper, we propose a new transfer
learning method for I2I translation (TransferI2I). We decouple the learning process
into an image generation step and an I2I translation step. In the first step
we propose two novel techniques: source-target initialization and
self-initialization of the adaptor layer. The former finetunes the pretrained
generative model (e.g., StyleGAN) on the source and target data. The latter allows
us to initialize all non-pretrained network parameters without the need for any
data. These techniques provide a better initialization for the I2I translation
step. In addition, we introduce an auxiliary GAN that further facilitates the
training of deep I2I systems, even from small datasets. In extensive experiments
on three datasets (Animal Faces, Birds, and Foods), we show that we outperform
existing methods and that mFID improves by over 25 points on several datasets.
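The abstract's "self-initialization of the adaptor layer" means setting up non-pretrained layers so they do not require any training data before the I2I translation step. As a minimal sketch (not the paper's exact procedure), one natural way to initialize a data-free adaptor is to make it an identity map, so it initially passes pretrained features through unchanged; the function names here are hypothetical:

```python
import numpy as np

def identity_init_adaptor(channels: int) -> np.ndarray:
    """Initialize a 1x1 convolution adaptor as the identity map, so the
    untrained layer initially leaves pretrained features untouched
    (a hypothetical reading of data-free 'self-initialization')."""
    # A 1x1 conv over C channels is just a C x C matrix applied per pixel.
    return np.eye(channels, dtype=np.float32)

def apply_adaptor(weight: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Apply a 1x1 adaptor to a (channels, height, width) feature map."""
    c, h, w = features.shape
    # Flatten spatial dims, mix channels with the weight, restore shape.
    return (weight @ features.reshape(c, h * w)).reshape(c, h, w)

# Before any finetuning, the identity-initialized adaptor is a no-op,
# so the pretrained generator's behavior is preserved at step one.
feats = np.random.default_rng(0).standard_normal((8, 4, 4)).astype(np.float32)
out = apply_adaptor(identity_init_adaptor(8), feats)
assert np.allclose(out, feats)
```

The design intuition is that a randomly initialized adaptor would corrupt the pretrained representations and force relearning from scratch, which is exactly the drawback the abstract describes; a pass-through initialization lets finetuning start from the pretrained model's outputs.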
Related papers
- Lost in Translation: Modern Neural Networks Still Struggle With Small Realistic Image Transformations [8.248839892711478]
  Deep neural networks that achieve remarkable performance in image classification can be easily fooled by tiny transformations.
  We show that these approaches still fall short in robustly handling 'natural' image translations that simulate a subtle change in camera orientation.
  We present Robust Inference by Crop Selection: a simple method that can be proven to achieve any desired level of consistency.
  arXiv Detail & Related papers (2024-04-10T16:39:50Z)
- CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation [57.836686457542385]
  Diffusion models (DMs) have enabled breakthroughs in image synthesis tasks but lack an intuitive interface for consistent image-to-image (I2I) translation.
  This paper introduces CycleNet, a novel but simple method that incorporates cycle consistency into DMs to regularize image manipulation.
  arXiv Detail & Related papers (2023-10-19T21:32:21Z)
- Fine-Tuning StyleGAN2 For Cartoon Face Generation [0.0]
  We propose a novel image-to-image translation method that generates images of the target domain by finetuning a pretrained StyleGAN2 model.
  The StyleGAN2 model is suitable for unsupervised I2I translation on unbalanced datasets.
  arXiv Detail & Related papers (2021-06-22T14:00:10Z)
- DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs [43.33066765114446]
  Image-to-image translation suffers from inferior performance when translations between classes require large shape changes.
  We propose a novel deep hierarchical image-to-image translation method, called DeepI2I.
  We demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets.
  arXiv Detail & Related papers (2020-11-11T16:03:03Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
  We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
  By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and the target domain.
  arXiv Detail & Related papers (2020-10-12T13:51:40Z)
- TuiGAN: Learning Versatile Image-to-Image Translation with Two Unpaired Images [102.4003329297039]
  An unsupervised image-to-image translation (UI2I) task deals with learning a mapping between two domains without paired images.
  We propose TuiGAN, a generative model that is trained on only two unpaired images and amounts to one-shot unsupervised learning.
  arXiv Detail & Related papers (2020-04-09T16:23:59Z)
- Semi-supervised Learning for Few-shot Image-to-Image Translation [89.48165936436183]
  We propose a semi-supervised method for few-shot image translation, called SEMIT.
  Our method achieves excellent results on four different datasets using as little as 10% of the source labels.
  arXiv Detail & Related papers (2020-03-30T22:46:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.