Intrinsic Image Transfer for Illumination Manipulation
- URL: http://arxiv.org/abs/2107.00704v1
- Date: Thu, 1 Jul 2021 19:12:24 GMT
- Title: Intrinsic Image Transfer for Illumination Manipulation
- Authors: Junqing Huang, Michael Ruzhansky, Qianying Zhang, Haihui Wang
- Abstract summary: This paper presents a novel intrinsic image transfer (IIT) algorithm for illumination manipulation.
It creates a local image translation between two illumination surfaces.
We show that all losses can be reduced without explicitly performing an intrinsic image decomposition.
- Score: 1.2387676601792899
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a novel intrinsic image transfer (IIT) algorithm for
illumination manipulation, which creates a local image translation between two
illumination surfaces. This model is built on an optimization-based framework
consisting of three photo-realistic losses defined on the sub-layers factorized
by an intrinsic image decomposition. We illustrate that all losses can be
reduced without the necessity of taking an intrinsic image decomposition under
the well-known spatial-varying illumination illumination-invariant reflectance
prior knowledge. Moreover, with a series of relaxations, all of them can be
directly defined on images, giving a closed-form solution for image
illumination manipulation. This new paradigm differs from the prevailing
Retinex-based algorithms, as it provides an implicit way to deal with the
per-pixel image illumination. We finally demonstrate its versatility and
benefits to the illumination-related tasks such as illumination compensation,
image enhancement, and high dynamic range (HDR) image compression, and show the
high-quality results on natural image datasets.
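As a rough illustration of the priors the abstract invokes (spatially smooth illumination, illumination-invariant reflectance), the sketch below manipulates illumination by rescaling a blurred log-domain layer. This is not the paper's closed-form IIT solution; the `box_blur` smoother, the `gamma` parameter, and the blur size are all illustrative assumptions.

```python
import numpy as np

def box_blur(x, k=7):
    """Crude separable box blur (a stand-in for an edge-aware smoother)."""
    kernel = np.ones(k) / k
    x = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, x)
    x = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, x)
    return x

def adjust_illumination(img, gamma=0.6, k=7, eps=1e-6):
    """Rescale a crude illumination layer in the log domain.

    Prior: log I = log R + log S, with illumination S spatially smooth and
    reflectance R illumination-invariant. A heavily blurred log image stands
    in for log S; gamma < 1 brightens dark regions. This is an illustrative
    Retinex-style baseline, not the paper's optimization-based method.
    """
    log_img = np.log(img + eps)
    log_s = box_blur(log_img, k)   # smooth illumination estimate
    log_r = log_img - log_s        # illumination-invariant residual
    out = np.exp(log_r + gamma * log_s) - eps
    return np.clip(out, 0.0, 1.0)
```

With `gamma=0.6`, a dark image (values near 0.04) is lifted toward roughly 0.04**0.6 while the residual detail layer is left untouched.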
Related papers
- Photometric Inverse Rendering: Shading Cues Modeling and Surface Reflectance Regularization [46.146783750386994]
We propose a new method for neural inverse rendering.
Our method jointly optimizes the light source position to account for self-shadows in images.
To enhance surface reflectance decomposition, we introduce a new regularization.
arXiv Detail & Related papers (2024-08-13T11:39:14Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes [16.176896461798993]
We propose a novel multi-illuminant color constancy method that learns pixel-wise illumination maps caused by multiple light sources.
The proposed method enforces smoothness within neighboring pixels, by regularizing the training with the total variation loss.
A bilateral filter is further applied to enhance the natural appearance of the estimated images while preserving edges.
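The total-variation regularization mentioned above can be sketched as a simple NumPy penalty. This is a generic anisotropic TV term on a per-pixel illumination map, not the authors' exact training loss.

```python
import numpy as np

def total_variation(illum_map):
    """Anisotropic total-variation penalty on an (H, W, C) illumination map.

    Sums absolute differences between neighbouring pixels along both spatial
    axes, pushing the estimated illumination toward spatial smoothness.
    """
    dh = np.abs(np.diff(illum_map, axis=0)).sum()  # vertical neighbours
    dw = np.abs(np.diff(illum_map, axis=1)).sum()  # horizontal neighbours
    return dh + dw
```

A perfectly uniform map incurs zero penalty; any local bump raises the loss, which is the smoothness behaviour the summary describes.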
arXiv Detail & Related papers (2024-02-05T11:42:19Z)
- Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer.
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction.
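The Taylor-series idea can be sketched as follows: since x**gamma = exp(gamma * ln x), a truncated series for exp(t) replaces the per-pixel exponentiation with multiplications and additions. The truncation order and the eps stabilizer are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gamma_taylor(x, gamma, order=4, eps=1e-6):
    """Approximate x**gamma via a truncated Taylor series of exp(t).

    x**gamma = exp(gamma * ln x) = sum_{k>=0} t**k / k!, with
    t = gamma * ln(x + eps). Hypothetical sketch of the idea only.
    """
    t = gamma * np.log(x + eps)
    out = np.zeros_like(t)
    term = np.ones_like(t)          # t**0 / 0!
    for k in range(order + 1):
        if k > 0:
            term = term * t / k     # incrementally build t**k / k!
        out = out + term
    return out
```

Note that |t| grows as pixels approach zero, so a low-order truncation is only accurate for moderately bright inputs; handling dark regions well is presumably where the paper's illumination-aware design comes in.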
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Deep Quantigraphic Image Enhancement via Comparametric Equations [15.782217616496055]
We propose a novel trainable module that diversifies the conversion from the low-light image and illumination map to the enhanced image.
Our method improves the flexibility of deep image enhancement, limits the computational burden to illumination estimation, and allows for fully unsupervised learning adaptable to the diverse demands of different tasks.
arXiv Detail & Related papers (2023-04-05T08:14:41Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- WDRN : A Wavelet Decomposed RelightNet for Image Relighting [6.731863717520707]
We propose a wavelet decomposed RelightNet called WDRN which is a novel encoder-decoder network employing wavelet based decomposition.
We also propose a novel loss function, called gray loss, that ensures efficient learning of the illumination gradient along different directions of the ground-truth image.
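The summary does not define the gray loss precisely; one plausible reading, sketched below, matches the spatial gradients of a grayscale prediction to those of the ground truth along both axes. Treat the definition, including the crude channel-mean grayscale conversion, as hypothetical.

```python
import numpy as np

def gray_loss(pred, target):
    """Hypothetical gradient-matching 'gray loss' for (H, W, C) images.

    Penalises L1 differences between the spatial gradients of the
    grayscale prediction and ground truth, so illumination gradients
    along both directions are learned. The paper's exact definition
    may differ.
    """
    def grads(img):
        gray = img.mean(axis=-1)        # crude grayscale conversion
        return np.diff(gray, axis=0), np.diff(gray, axis=1)

    (px, py), (tx, ty) = grads(pred), grads(target)
    return np.abs(px - tx).mean() + np.abs(py - ty).mean()
```

The loss is zero when prediction and target share identical gradient fields, regardless of any global brightness offset between them.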
arXiv Detail & Related papers (2020-09-14T18:23:10Z)
- Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision [73.18554605744842]
Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy in the feature level.
arXiv Detail & Related papers (2020-08-16T06:07:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.