Stay Positive: Non-Negative Image Synthesis for Augmented Reality
- URL: http://arxiv.org/abs/2202.00659v1
- Date: Tue, 1 Feb 2022 18:55:11 GMT
- Title: Stay Positive: Non-Negative Image Synthesis for Augmented Reality
- Authors: Katie Luo, Guandao Yang, Wenqi Xian, Harald Haraldsson, Bharath
Hariharan, Serge Belongie
- Abstract summary: In applications such as optical see-through and projector augmented reality, producing images amounts to solving non-negative image generation.
We know, however, that the human visual system can be fooled by optical illusions involving certain spatial configurations of brightness and contrast.
We propose a novel optimization procedure to produce images that satisfy both semantic and non-negativity constraints.
- Score: 39.930627591187104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In applications such as optical see-through and projector augmented reality,
producing images amounts to solving non-negative image generation, where one
can only add light to an existing image. Most image generation methods,
however, are ill-suited to this problem setting, as they make the assumption
that one can assign arbitrary color to each pixel. In fact, naive application
of existing methods fails even in simple domains such as MNIST digits, since
one cannot create darker pixels by adding light. We know, however, that the
human visual system can be fooled by optical illusions involving certain
spatial configurations of brightness and contrast. Our key insight is that one
can leverage this behavior to produce high quality images with negligible
artifacts. For example, we can create the illusion of darker patches by
brightening surrounding pixels. We propose a novel optimization procedure to
produce images that satisfy both semantic and non-negativity constraints. Our
approach can incorporate existing state-of-the-art methods, and exhibits strong
performance in a variety of tasks including image-to-image translation and
style transfer.
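To make the constraint concrete, here is a minimal optimization sketch, not the authors' procedure: it assumes a background image `base` and a target `target` as tensors in [0, 1], parameterizes the added light through a softplus so it can never be negative, and uses a placeholder pixel-wise loss where the paper instead optimizes semantic constraints.

```python
import torch
import torch.nn.functional as F

def fit_additive_light(base, target, steps=500, lr=0.05):
    """Optimize a non-negative light layer so that base + light approximates
    target. Illustrative sketch only; the paper optimizes semantic constraints
    rather than this pixel-wise objective."""
    # Unconstrained parameter; softplus(-4) ~ 0.02, so we start near zero added light.
    raw = torch.full_like(base, -4.0, requires_grad=True)
    opt = torch.optim.Adam([raw], lr=lr)
    for _ in range(steps):
        light = F.softplus(raw)   # light >= 0: we can only add light
        composite = base + light  # pixels darker than base are unreachable
        loss = F.mse_loss(composite, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.softplus(raw).detach()

# Hypothetical usage with [1, 3, H, W] tensors in [0, 1]:
# overlay = fit_additive_light(base_img, target_img)
# displayed = (base_img + overlay).clamp(0.0, 1.0)
```

A pixel-wise loss like the one above simply saturates wherever the target is darker than the background; the abstract's point is that semantic objectives can instead brighten the surroundings of such regions so they merely appear darker.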
Related papers
- Making Images from Images: Interleaving Denoising and Transformation [5.776000002820102]
We learn not only the content of the images, but also the parameterized transformations required to transform the desired images into each other.
By learning the image transforms, we allow any source image to be pre-specified.
Unlike previous methods, increasing the number of regions actually makes the problem easier and improves results.
arXiv Detail & Related papers (2024-11-24T17:13:11Z)
- Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes [16.176896461798993]
We propose a novel multi-illuminant color constancy method, by learning pixel-wise illumination maps caused by multiple light sources.
The proposed method enforces smoothness within neighboring pixels by regularizing the training with the total variation loss (a minimal sketch of such a loss follows this list).
A bilateral filter is further applied to enhance the natural appearance of the estimated images, while preserving the edges.
arXiv Detail & Related papers (2024-02-05T11:42:19Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Residual Aligned: Gradient Optimization for Non-Negative Image Synthesis [11.026337830218067]
We propose a method that is able to preserve lightness constancy at a local level, thus capturing high frequency details.
Compared with existing work, our method shows strong performance in image-to-image translation tasks.
arXiv Detail & Related papers (2022-02-08T18:04:32Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Intrinsic Image Transfer for Illumination Manipulation [1.2387676601792899]
This paper presents a novel intrinsic image transfer (IIT) algorithm for illumination manipulation.
It creates a local image translation between two illumination surfaces.
We illustrate that all losses can be reduced without requiring an intrinsic image decomposition.
arXiv Detail & Related papers (2021-07-01T19:12:24Z)
- In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
arXiv Detail & Related papers (2021-04-01T17:59:10Z)
- Shape, Illumination, and Reflectance from Shading [86.71603503678216]
A fundamental problem in computer vision is that of inferring the intrinsic, 3D structure of the world from flat, 2D images.
We find that certain explanations are more likely than others: surfaces tend to be smooth, paint tends to be uniform, and illumination tends to be natural.
Our technique can be viewed as a superset of several classic computer vision problems.
arXiv Detail & Related papers (2020-10-07T18:14:41Z)
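The Pixel-Wise Color Constancy entry above regularizes its illumination maps with a total variation loss; the sketch below shows a generic anisotropic variant of that loss (the function name and the [B, C, H, W] layout are assumptions, not code from the cited paper).

```python
import torch

def total_variation_loss(illum_map: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation over a [B, C, H, W] illumination map:
    mean absolute difference between each pixel and its right/bottom
    neighbor. Generic smoothness regularizer, not the cited paper's code."""
    dh = (illum_map[:, :, 1:, :] - illum_map[:, :, :-1, :]).abs().mean()
    dw = (illum_map[:, :, :, 1:] - illum_map[:, :, :, :-1]).abs().mean()
    return dh + dw

# Hypothetical use during training:
# loss = data_loss + tv_weight * total_variation_loss(predicted_illumination)
```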
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.