Residual Aligned: Gradient Optimization for Non-Negative Image Synthesis
- URL: http://arxiv.org/abs/2202.04036v1
- Date: Tue, 8 Feb 2022 18:04:32 GMT
- Title: Residual Aligned: Gradient Optimization for Non-Negative Image Synthesis
- Authors: Flora Yu Shen, Katie Luo, Guandao Yang, Harald Haraldsson, Serge
Belongie
- Abstract summary: We propose a method that preserves lightness constancy at a local level, thus capturing high-frequency details.
Compared with existing work, our method shows strong performance in image-to-image translation tasks.
- Score: 11.026337830218067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we address an important problem of optical see-through (OST)
augmented reality: non-negative image synthesis. Most image generation methods
fail under this condition, since they assume full control over each pixel and
cannot create darker pixels by adding light. To solve the non-negative image
generation problem in AR image synthesis, prior works have attempted to exploit
optical illusions to simulate human vision, but they fail to preserve lightness
constancy under conditions such as high dynamic range. In this paper, we instead
propose a method that preserves lightness constancy at a local level, thus
capturing high-frequency details. Compared with existing work, our method shows
strong performance in image-to-image translation tasks, particularly in
scenarios such as large-scale images, high-resolution images, and high dynamic
range image transfer.
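The non-negativity constraint can be made concrete with a small sketch: an OST display can only add light on top of the real background, so the synthesized residual must be element-wise non-negative. Below is a minimal, generic illustration of gradient optimization under that constraint, using projected gradient descent in NumPy. The arrays, step size, and simple L2 objective are illustrative assumptions only, not the authors' local lightness-constancy formulation.

```python
import numpy as np

def synthesize_non_negative(background, target, steps=500, lr=0.1):
    """Projected gradient descent for a non-negative additive residual.

    Finds residual >= 0 minimizing ||(background + residual) - target||^2,
    i.e. the light an OST display could add on top of the real scene.
    (Illustrative L2 objective only; the paper instead optimizes a
    perceptual objective that preserves local lightness constancy.)
    """
    residual = np.zeros_like(background)
    for _ in range(steps):
        # Gradient of the squared-error objective w.r.t. the residual.
        grad = 2.0 * ((background + residual) - target)
        residual -= lr * grad
        # Projection step: the display cannot remove light.
        residual = np.clip(residual, 0.0, None)
    return residual

# Toy usage: a bright background pixel cannot be darkened, only matched or exceeded.
bg = np.array([[0.2, 0.8], [0.5, 0.1]])
tgt = np.array([[0.6, 0.3], [0.9, 0.4]])
r = synthesize_non_negative(bg, tgt)
print(np.round(bg + r, 3))  # pixels where tgt < bg stay at the background value
```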
Related papers
- Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z)
- Stay Positive: Non-Negative Image Synthesis for Augmented Reality [39.930627591187104]
In applications such as optical see-through and projector augmented reality, producing images amounts to solving non-negative image generation.
We know, however, that the human visual system can be fooled by optical illusions involving certain spatial configurations of brightness and contrast.
We propose a novel optimization procedure to produce images that satisfy both semantic and non-negativity constraints.
arXiv Detail & Related papers (2022-02-01T18:55:11Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- USIS: Unsupervised Semantic Image Synthesis [9.613134538472801]
We propose a new Unsupervised paradigm for Semantic Image Synthesis (USIS).
USIS learns to output images with visually separable semantic classes using a self-supervised segmentation loss.
In order to match the color and texture distribution of real images without losing high-frequency information, we propose to use whole image wavelet-based discrimination.
arXiv Detail & Related papers (2021-09-29T20:48:41Z)
- R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network [7.755223662467257]
We propose a novel Real-low to Real-normal Network for low-light image enhancement, dubbed R2RNet.
Unlike most previous methods trained on synthetic images, we collect the first large-scale real-world paired low/normal-light image dataset.
Our method can properly improve the contrast and suppress noise simultaneously.
arXiv Detail & Related papers (2021-06-28T09:33:13Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Contextual colorization and denoising for low-light ultra high resolution sequences [0.0]
Low-light image sequences generally suffer from incoherent noise, flicker, and blurring of moving objects.
We tackle these problems with an unpaired-learning method that offers simultaneous colorization and denoising.
We show that our method outperforms existing approaches in terms of subjective quality and that it is robust to variations in brightness levels and noise.
arXiv Detail & Related papers (2021-01-05T15:35:29Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
- Multimodal Image Synthesis with Conditional Implicit Maximum Likelihood Estimation [54.17177006826262]
We develop a new generic conditional image synthesis method based on Implicit Maximum Likelihood Estimation (IMLE).
We demonstrate improved multimodal image synthesis performance on two tasks, single image super-resolution and image synthesis from scene layouts.
arXiv Detail & Related papers (2020-04-07T03:06:55Z)
- Reconstructing the Noise Manifold for Image Denoising [56.562855317536396]
We introduce the idea of a cGAN which explicitly leverages structure in the image noise space.
By learning directly a low dimensional manifold of the image noise, the generator promotes the removal from the noisy image only that information which spans this manifold.
Based on our experiments, our model substantially outperforms existing state-of-the-art architectures.
arXiv Detail & Related papers (2020-02-11T00:31:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.