Enhance Images as You Like with Unpaired Learning
- URL: http://arxiv.org/abs/2110.01161v1
- Date: Mon, 4 Oct 2021 03:00:44 GMT
- Title: Enhance Images as You Like with Unpaired Learning
- Authors: Xiaopeng Sun, Muxingzi Li, Tianyu He, Lubin Fan
- Abstract summary: We propose a lightweight one-path conditional generative adversarial network (cGAN) to learn a one-to-many relation from low-light to normal-light image space.
Our network learns to generate a collection of enhanced images from a given input conditioned on various reference images.
Our model achieves competitive visual and quantitative results on par with fully supervised methods on both noisy and clean datasets.
- Score: 8.104571453311442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement exhibits an ill-posed nature, as a given image
may have many enhanced versions, yet recent studies focus on building a
deterministic mapping from input to an enhanced version. In contrast, we
propose a lightweight one-path conditional generative adversarial network
(cGAN) to learn a one-to-many relation from low-light to normal-light image
space, given only sets of low- and normal-light training images without any
correspondence. By formulating this ill-posed problem as a modulation code
learning task, our network learns to generate a collection of enhanced images
from a given input conditioned on various reference images. As a result, our
inference model adapts easily to individual user preferences, given only a few
favorite photos from each user. Our model achieves competitive visual and
quantitative results on par with fully supervised methods on both noisy and
clean datasets, while being 6 to 10 times lighter than state-of-the-art
generative adversarial network (GAN) approaches.
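The abstract does not describe the architecture in detail, but the reference-conditioned "modulation code" idea can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual network: the "encoder" is a random projection of reference-image statistics, and the modulation is a FiLM-style per-channel scale and shift.

```python
import numpy as np

def encode_reference(ref, code_dim=8, seed=0):
    """Toy stand-in for a learned encoder: project global statistics
    of a reference image to a modulation code (hypothetical design)."""
    rng = np.random.default_rng(seed)
    stats = np.array([ref.mean(), ref.std()])   # crude "style" summary
    w = rng.standard_normal((code_dim, 2))
    return np.tanh(w @ stats)                   # code in (-1, 1)

def modulate(features, code):
    """FiLM-style modulation: per-channel scale and shift derived
    from the reference code, applied to the input's feature map."""
    c = features.shape[0]
    gamma = 1.0 + code[:c, None, None]
    beta = 0.1 * code[:c, None, None]
    return gamma * features + beta

# One low-light input, two references -> two distinct enhanced outputs,
# sketching the one-to-many relation described in the abstract.
x = np.random.default_rng(1).random((8, 4, 4))  # toy feature map (C, H, W)
ref_bright = np.full((4, 4), 0.8)               # bright reference image
ref_dim = np.full((4, 4), 0.3)                  # dimmer reference image
out_bright = modulate(x, encode_reference(ref_bright))
out_dim = modulate(x, encode_reference(ref_dim))
```

Conditioning the same input on different references yields different outputs, which is the mechanism that lets one inference model cover many user preferences.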
Related papers
- A Simple Approach to Unifying Diffusion-based Conditional Generation [63.389616350290595]
We introduce a simple, unified framework to handle diverse conditional generation tasks.
Our approach enables versatile capabilities via different inference-time sampling schemes.
Our model supports additional capabilities like non-spatially aligned and coarse conditioning.
arXiv Detail & Related papers (2024-10-15T09:41:43Z)
- CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition [73.51329037954866]
We propose a robust global representation method with cross-image correlation awareness for visual place recognition.
Our method uses the attention mechanism to correlate multiple images within a batch.
Our method outperforms state-of-the-art methods by a large margin with significantly less training time.
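The summary mentions using attention to correlate multiple images within a batch. A minimal self-attention sketch over a batch of image descriptors (not CricaVPR's actual architecture; the shared projection and dimensions are assumptions) looks like:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_image_attention(descs):
    """Each image descriptor attends to every descriptor in the batch,
    so the output representation encodes batch-level correlations.
    Queries, keys, and values share one identity projection for brevity."""
    scores = descs @ descs.T / np.sqrt(descs.shape[1])
    return softmax(scores, axis=1) @ descs

batch = np.random.default_rng(0).random((4, 16))  # 4 images, 16-D descriptors
out = cross_image_attention(batch)
```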
arXiv Detail & Related papers (2024-02-29T15:05:11Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images from a single input image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Shed Various Lights on a Low-Light Image: Multi-Level Enhancement Guided by Arbitrary References [17.59529931863947]
This paper proposes a neural network for multi-level low-light image enhancement.
Inspired by style transfer, our method decomposes an image into two low-coupling feature components in the latent space.
In such a way, the network learns to extract scene-invariant and brightness-specific information from a set of image pairs.
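The decomposition into scene-invariant and brightness-specific components can be illustrated with a toy sketch. The choices below are assumptions for illustration only: the image mean stands in for the brightness-specific component, and the zero-mean residual for the scene-invariant one, so swapping brightness codes between images mimics reference-guided enhancement.

```python
import numpy as np

def decompose(img):
    """Toy decomposition: brightness = global mean,
    scene = zero-mean residual (a stand-in for a learned latent split)."""
    brightness = img.mean()
    scene = img - brightness
    return scene, brightness

def recompose(scene, brightness):
    """Recombine a scene component with an arbitrary brightness code."""
    return scene + brightness

rng = np.random.default_rng(0)
dark = rng.random((4, 4)) * 0.3              # low-light input
bright_ref = rng.random((4, 4)) * 0.5 + 0.5  # brighter reference
scene, _ = decompose(dark)
_, ref_brightness = decompose(bright_ref)
enhanced = recompose(scene, ref_brightness)  # dark scene, reference brightness
```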
arXiv Detail & Related papers (2021-01-04T07:38:51Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns an image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Training End-to-end Single Image Generators without GANs [27.393821783237186]
AugurOne is a novel approach for training single image generative models.
Our approach trains an upscaling neural network using non-affine augmentations of the (single) input image.
A compact latent space is jointly learned allowing for controlled image synthesis.
arXiv Detail & Related papers (2020-04-07T17:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.