Unsupervised Image Transformation Learning via Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2103.07751v1
- Date: Sat, 13 Mar 2021 17:08:19 GMT
- Title: Unsupervised Image Transformation Learning via Generative Adversarial
Networks
- Authors: Kaiwen Zha, Yujun Shen, Bolei Zhou
- Abstract summary: We study the image transformation problem by learning the underlying transformations from a collection of images using Generative Adversarial Networks (GANs).
We propose an unsupervised learning framework, termed TrGAN, to project images onto a transformation space that is shared by the generator and the discriminator.
- Score: 40.84518581293321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we study the image transformation problem by learning the
underlying transformations from a collection of images using Generative
Adversarial Networks (GANs). Specifically, we propose an unsupervised learning
framework, termed TrGAN, to project images onto a transformation space that
is shared by the generator and the discriminator. Any two points in this
projected space define a transformation that can guide the image generation
process, leading to continuous semantic change. By projecting a pair of images
onto the transformation space, we are able to adequately extract the semantic
variation between them and further apply the extracted semantics to facilitate
image editing, including not only transferring image styles (e.g., changing day
to night) but also manipulating image contents (e.g., adding clouds in the
sky). Code and models are available at https://genforce.github.io/trgan.
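To make the core idea concrete, here is a minimal numerical sketch. The projector f, its dimensions, and the latent code below are illustrative stand-ins, not TrGAN's actual architecture: any two images projected into the shared transformation space define a direction, and moving a code along that direction corresponds to a continuous edit.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # fixed toy projector onto an 8-d transformation space

def f(image):
    """Toy stand-in for the shared encoder that projects an image
    onto the transformation space."""
    return W @ image.ravel()

day = rng.random((4, 4))    # placeholder "day" image
night = rng.random((4, 4))  # placeholder "night" image

# Any two points in the projected space define a transformation direction.
t = f(night) - f(day)

# Moving a latent code along t with increasing strength alpha would drive
# a continuous semantic change in the generated image (the generator G
# itself is omitted here).
z = rng.standard_normal(8)
edited_codes = [z + alpha * t for alpha in (0.0, 0.5, 1.0)]
```

At alpha = 0 the code is unchanged, and at alpha = 1 the full extracted variation has been applied; intermediate values interpolate the edit.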
Related papers
- Making Images from Images: Interleaving Denoising and Transformation [5.776000002820102]
We learn not only the content of the images, but also the parameterized transformations required to transform the desired images into each other.
By learning the image transforms, we allow any source image to be pre-specified.
Unlike previous methods, increasing the number of regions actually makes the problem easier and improves results.
arXiv Detail & Related papers (2024-11-24T17:13:11Z)
- Gradient Adjusting Networks for Domain Inversion [82.72289618025084]
StyleGAN2 was demonstrated to be a powerful image generation engine that supports semantic editing.
We present a per-image optimization method that tunes a StyleGAN2 generator such that it achieves a local edit to the generator's weights.
Our experiments show a sizable gap in performance over the current state of the art in this very active domain.
arXiv Detail & Related papers (2023-02-22T14:47:57Z)
- Review Neural Networks about Image Transformation Based on IGC Learning Framework with Annotated Information [13.317099281011515]
In Computer Vision (CV), many problems can be regarded as the image transformation task, e.g., semantic segmentation and style transfer.
Some surveys review only style transfer or image-to-image translation, each of which is just one branch of image transformation.
This paper proposes a novel learning framework including Independent learning, Guided learning, and Cooperative learning.
arXiv Detail & Related papers (2022-06-21T07:27:47Z)
- FlexIT: Towards Flexible Semantic Image Translation [59.09398209706869]
We propose FlexIT, a novel method which can take any input image and a user-defined text instruction for editing.
First, FlexIT combines the input image and text into a single target point in the CLIP multimodal embedding space.
We iteratively transform the input image toward the target point, ensuring coherence and quality with a variety of novel regularization terms.
arXiv Detail & Related papers (2022-03-09T13:34:38Z)
- Linear Semantics in Generative Adversarial Networks [26.123252503846942]
We aim to better understand the semantic representation of GANs and to enable semantic control in the GAN generation process.
We find that a well-trained GAN encodes image semantics in its internal feature maps in a surprisingly simple way.
We propose two few-shot image editing approaches, namely Semantic-Conditional Sampling and Semantic Image Editing.
arXiv Detail & Related papers (2021-04-01T14:18:48Z)
- Image-to-image Mapping with Many Domains by Sparse Attribute Transfer [71.28847881318013]
Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
The current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
arXiv Detail & Related papers (2020-06-23T19:52:23Z)
- Semantic Image Manipulation Using Scene Graphs [105.03614132953285]
We introduce a semantic scene graph network that does not require direct supervision for constellation changes or image edits.
This makes it possible to train the system on existing real-world datasets with no additional annotation effort.
arXiv Detail & Related papers (2020-04-07T20:02:49Z)
- In-Domain GAN Inversion for Real Image Editing [56.924323432048304]
A common practice of feeding a real image to a trained GAN generator is to invert it back to a latent code.
Existing inversion methods typically focus on reconstructing the target image pixel-wise, yet they fail to land the inverted code in the semantic domain of the original latent space.
We propose an in-domain GAN inversion approach, which faithfully reconstructs the input image and ensures the inverted code to be semantically meaningful for editing.
arXiv Detail & Related papers (2020-03-31T18:20:18Z)
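The inversion step described in the last entry can be sketched as an optimization over the latent code. The toy linear "generator", its sizes, and the learning rate below are assumptions for illustration only; the paper's actual in-domain regularization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
G_weights = rng.standard_normal((16, 4))  # toy generator: 4-d latent -> 16-d "image"

def G(z):
    """Toy linear generator standing in for a trained GAN generator."""
    return G_weights @ z

# A target image known to lie in the generator's range, so inversion can succeed.
target = G(rng.standard_normal(4))

# Invert by gradient descent on the reconstruction loss 0.5 * ||G(z) - target||^2.
z = np.zeros(4)
lr = 0.02
for _ in range(1000):
    residual = G(z) - target
    grad = G_weights.T @ residual  # gradient of the reconstruction loss
    z -= lr * grad

# After optimization, G(z) closely reconstructs the target image.
```

An in-domain method would add a term keeping z inside the semantically meaningful region of the latent space, so the recovered code supports editing and not just reconstruction.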
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.