Adverse Weather Image Translation with Asymmetric and Uncertainty-aware
GAN
- URL: http://arxiv.org/abs/2112.04283v1
- Date: Wed, 8 Dec 2021 13:41:24 GMT
- Title: Adverse Weather Image Translation with Asymmetric and Uncertainty-aware
GAN
- Authors: Jeong-gi Kwak, Youngsaeng Jin, Yuanming Li, Dongsik Yoon, Donghyeon
Kim, Hanseok Ko
- Abstract summary: Adverse weather image translation belongs to the unsupervised image-to-image (I2I) translation task.
Generative Adversarial Networks (GANs) have achieved notable success in I2I translation.
We propose a novel GAN model, AU-GAN, which has an asymmetric architecture for adverse domain translation.
- Score: 16.80284837186338
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adverse weather image translation belongs to the unsupervised image-to-image
(I2I) translation task, which aims to transfer an adverse-condition domain (e.g.,
rainy night) to a standard domain (e.g., day). It is a challenging task because
images from adverse domains contain artifacts and insufficient information.
Recently, many studies employing Generative Adversarial Networks (GANs) have
achieved notable success in I2I translation, but limitations remain in applying
them to adverse weather enhancement. A symmetric architecture based on a
bidirectional cycle-consistency loss is adopted as the standard framework for
unsupervised domain transfer methods; however, it can lead to inferior
translation results if the two domains have imbalanced information. To address
this issue, we propose a novel GAN model, AU-GAN, which has an asymmetric
architecture for adverse domain translation. We insert a proposed feature
transfer network (${T}$-net) only in the normal-domain generator (i.e., rainy
night → day) to enhance the encoded features of the adverse-domain image. In
addition, we introduce asymmetric feature matching to disentangle the encoded
features. Finally, we propose an uncertainty-aware cycle-consistency loss to
address the regional uncertainty of a cyclically reconstructed image. We
demonstrate the effectiveness of our method through qualitative and quantitative
comparisons with state-of-the-art models. Code is available at
https://github.com/jgkwak95/AU-GAN.
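The uncertainty-aware cycle-consistency loss lends itself to a short illustration. The sketch below is a hypothetical PyTorch rendering, not the authors' implementation: it assumes the generator predicts a per-pixel log-uncertainty map alongside the cyclic reconstruction and uses it to down-weight the L1 penalty in uncertain regions, with a log term that keeps the network from inflating uncertainty everywhere. The function name and tensor shapes are illustrative assumptions.

```python
import torch

def uncertainty_aware_cycle_loss(x: torch.Tensor,
                                 x_cyc: torch.Tensor,
                                 log_sigma: torch.Tensor) -> torch.Tensor:
    """Hypothetical uncertainty-weighted cycle-consistency loss.

    x         : original image, shape (B, C, H, W)
    x_cyc     : cyclic reconstruction, shape (B, C, H, W)
    log_sigma : predicted per-pixel log-uncertainty, shape (B, 1, H, W)
    """
    # Down-weight the L1 reconstruction error where uncertainty is high.
    weighted_l1 = torch.abs(x - x_cyc) * torch.exp(-log_sigma)
    # Penalize large uncertainty so the trivial "always uncertain" solution
    # does not minimize the loss.
    return (weighted_l1 + log_sigma).mean()
```

In such a scheme the plain L1 cycle loss is recovered when the predicted uncertainty is uniform, so the extra prediction only changes behavior in ambiguous regions such as rain streaks or glare.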
Related papers
- SyntStereo2Real: Edge-Aware GAN for Remote Sensing Image-to-Image Translation while Maintaining Stereo Constraint [1.8749305679160366]
Current methods combine two networks: an unpaired image-to-image translation network and a stereo-matching network.
We propose an edge-aware GAN-based network that effectively tackles both tasks simultaneously.
We demonstrate that our model produces qualitatively and quantitatively superior results to existing models, and that its applicability extends to diverse domains.
arXiv Detail & Related papers (2024-04-14T14:58:52Z)
- Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z)
- Leveraging in-domain supervision for unsupervised image-to-image translation tasks via multi-stream generators [4.726777092009554]
We introduce two techniques to incorporate in-domain prior knowledge for the benefit of translation quality.
We propose splitting the input data according to semantic masks, explicitly guiding the network to behave differently for different regions of the image.
In addition, we propose training a semantic segmentation network alongside the translation task and leveraging its output as a loss term that improves robustness.
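A minimal sketch of the mask-based splitting idea, under the assumption that each semantic class gets its own generator stream and that binary masks are supplied with the input; the class and argument names are hypothetical, not the paper's code:

```python
import torch
import torch.nn as nn

class MultiStreamGenerator(nn.Module):
    """Hypothetical multi-stream generator: one stream per semantic region."""

    def __init__(self, num_streams: int, make_stream):
        super().__init__()
        # make_stream is any factory returning an image-to-image sub-network.
        self.streams = nn.ModuleList(make_stream() for _ in range(num_streams))

    def forward(self, x: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); masks: (B, K, H, W), one binary mask per stream.
        out = torch.zeros_like(x)
        for k, stream in enumerate(self.streams):
            m = masks[:, k:k + 1]          # (B, 1, H, W), broadcast over C
            out = out + stream(x * m) * m  # translate the region, re-composite
        return out
```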
arXiv Detail & Related papers (2021-12-30T15:29:36Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
- BalaGAN: Image Translation Between Imbalanced Domains via Cross-Modal Transfer [53.79505340315916]
We introduce BalaGAN, specifically designed to tackle the domain imbalance problem.
We leverage the latent modalities of the richer domain to turn the image-to-image translation problem into a balanced, multi-class, and conditional translation problem.
We show that BalaGAN outperforms strong baselines of both unconditioned and style-transfer-based image-to-image translation methods.
arXiv Detail & Related papers (2020-10-05T14:16:41Z)
- MI^2GAN: Generative Adversarial Network for Medical Image Domain Adaptation using Mutual Information Constraint [47.07869311690419]
We propose a novel GAN to maintain image content during cross-domain I2I translation.
In particular, we disentangle the content features from domain information for both the source and translated images.
The proposed MI$^2$GAN is evaluated on two tasks: polyp segmentation using colonoscopic images and segmentation of the optic disc and cup in fundus images.
arXiv Detail & Related papers (2020-07-22T03:19:54Z)
- Structured Domain Adaptation with Online Relation Regularization for Unsupervised Person Re-ID [62.90727103061876]
Unsupervised domain adaptation (UDA) aims at adapting the model trained on a labeled source-domain dataset to an unlabeled target-domain dataset.
We propose an end-to-end structured domain adaptation framework with an online relation-consistency regularization term.
Our proposed framework is shown to achieve state-of-the-art performance on multiple UDA tasks of person re-ID.
arXiv Detail & Related papers (2020-03-14T14:45:18Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, we design a novel self-guided regression loss in addition to the frequently used VGG feature-matching loss.
We also employ a discriminator with local and global branches to ensure local and global content consistency.
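The dense dilated-convolution idea can be sketched briefly; the block below is an illustrative assumption (layer counts, dilation rates, and the residual connection are guesses), not the paper's exact architecture:

```python
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Hypothetical block of stacked dilated convolutions.

    Growing dilation rates enlarge the receptive field without
    downsampling, illustrating the dense-dilation idea above.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(*(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=rate, dilation=rate),
                nn.ReLU(inplace=True),
            )
            for rate in (1, 2, 4, 8)  # receptive field grows with each layer
        ))

    def forward(self, x):
        return x + self.body(x)  # residual connection preserves fine detail
```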
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
- Asymmetric GANs for Image-to-Image Translation [62.49892218126542]
Existing GAN-based models learn the mapping from the source domain to the target domain using a cycle-consistency loss.
We propose an AsymmetricGAN model with translation and reconstruction generators of unequal sizes and a different parameter-sharing strategy.
Experiments on both supervised and unsupervised generative tasks with eight datasets show that AsymmetricGAN achieves superior model capacity and better generation performance.
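The unequal-capacity idea is easy to picture with a toy sketch; the widths and depths below are arbitrary assumptions for illustration, not the paper's configuration:

```python
import torch.nn as nn

def make_generator(width: int, depth: int) -> nn.Module:
    """Toy convolutional generator whose capacity scales with width/depth."""
    layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth):
        layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(width, 3, 3, padding=1), nn.Tanh()]
    return nn.Sequential(*layers)

# The translation generator carries the hard source-to-target mapping,
# while a lighter generator only closes the reconstruction cycle.
translation_g = make_generator(width=64, depth=9)
reconstruction_g = make_generator(width=32, depth=3)
```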
arXiv Detail & Related papers (2019-12-14T21:24:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.