DSRN: an Efficient Deep Network for Image Relighting
- URL: http://arxiv.org/abs/2102.09242v1
- Date: Thu, 18 Feb 2021 09:54:15 GMT
- Title: DSRN: an Efficient Deep Network for Image Relighting
- Authors: Sourya Dipta Das, Nisarg A. Shah, Saikat Dutta, Himanshu Kumar
- Abstract summary: Deep image relighting allows automatic photo enhancement by illumination-specific retouching.
In this paper, we propose an efficient, real-time framework, the Deep Stacked Relighting Network (DSRN), for image relighting.
Our model is very lightweight, with a total size of about 42 MB and an average inference time of about 0.0116 s for a $1024 \times 1024$ image.
- Score: 8.346635942881722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Custom and natural lighting conditions can be emulated in images of the scene
during post-editing. The capabilities of deep learning frameworks can be
leveraged for this purpose. Deep image relighting allows automatic photo
enhancement by illumination-specific retouching. Most of the state-of-the-art
methods for relighting are run-time intensive and memory inefficient. In this
paper, we propose an efficient, real-time framework, the Deep Stacked Relighting
Network (DSRN), for image relighting that utilizes features aggregated from the
input image at different scales. Our model is very lightweight, with a total
size of about 42 MB and an average inference time of about 0.0116 s for a
$1024 \times 1024$ image, which is faster than other multi-scale models. Our
solution is quite robust at translating the color temperature of the input
image to that of the target image, and performs moderately well at generating
light gradients with respect to the target image. Additionally, we show
that if images illuminated from opposite directions are used as input, the
qualitative results improve over using a single input image.
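The multi-scale feature aggregation described above can be illustrated with a minimal sketch (the helper names below are hypothetical, and the real DSRN uses learned convolutional encoders rather than this toy averaging): features are computed at several resolutions and merged back at the original scale. Note that at the reported 0.0116 s per image, the model processes roughly 1/0.0116 ≈ 86 frames per second.

```python
import numpy as np

def downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Naive average-pool downsampling by an integer factor."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def upsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling back to the original grid."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def aggregate_multiscale(img: np.ndarray, scales=(1, 2, 4)) -> np.ndarray:
    """Average 'features' computed at several scales back at full resolution."""
    feats = []
    for s in scales:
        f = downsample(img, s) if s > 1 else img
        feats.append(upsample(f, s) if s > 1 else f)
    return np.mean(feats, axis=0)

img = np.random.rand(64, 64)
out = aggregate_multiscale(img)
print(out.shape)  # (64, 64)
```

The coarser scales contribute smoothed, context-level information while the finest scale preserves detail, which is the intuition behind multi-scale aggregation.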
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore details and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
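A heavily simplified, hypothetical illustration of decoupling brightness from color (the paper's actual HVI transform is trainable and considerably more involved than this max-based split):

```python
import numpy as np

def split_intensity_color(rgb: np.ndarray, eps: float = 1e-6):
    """Toy brightness/color decoupling: intensity = per-pixel channel maximum,
    color = RGB normalised by that intensity. Illustrative only; not the
    trainable HVI transform from the paper."""
    intensity = rgb.max(axis=-1, keepdims=True)
    color = rgb / (intensity + eps)
    return intensity, color

def recombine(intensity: np.ndarray, color: np.ndarray) -> np.ndarray:
    """Invert the split by re-scaling the color component."""
    return intensity * color

rgb = np.random.rand(8, 8, 3)
i, c = split_intensity_color(rgb)
rec = recombine(i, c)
print(i.shape, c.shape)  # (8, 8, 1) (8, 8, 3)
```

Operating on the intensity component alone lets an enhancement network brighten an image without disturbing its chromatic content, which is the instability the HVI paper targets.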
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Prior work mainly focuses on low-light images captured in the visible spectrum, using pixel-wise losses.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- NoiSER: Noise is All You Need for Enhancing Low-Light Images Without Task-Related Data [103.04999391668753]
We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, magical, effective and efficient method, termed Noise SElf-Regression (NoiSER).
Our NoiSER is highly competitive with current LLIE models trained on task-related data, in terms of both quantitative and visual results.
arXiv Detail & Related papers (2022-11-09T06:18:18Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Controllable Image Enhancement [66.18525728881711]
We present a semiautomatic image enhancement algorithm that can generate high-quality images with multiple styles by controlling a few parameters.
An encoder-decoder framework encodes the retouching skills into latent codes and decodes them into the parameters of image signal processing functions.
arXiv Detail & Related papers (2022-06-16T23:54:53Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- MSR-Net: Multi-Scale Relighting Network for One-to-One Relighting [6.544716087553996]
Deep image relighting allows photo enhancement by illumination-specific retouching without human effort.
Most of the existing popular methods available for relighting are run-time intensive and memory inefficient.
We propose the use of Stacked Deep Multi-Scale Hierarchical Network, which aggregates features from each image at different scales.
arXiv Detail & Related papers (2021-07-13T14:25:05Z)
- Shed Various Lights on a Low-Light Image: Multi-Level Enhancement Guided by Arbitrary References [17.59529931863947]
This paper proposes a neural network for multi-level low-light image enhancement.
Inspired by style transfer, our method decomposes an image into two low-coupling feature components in the latent space.
In such a way, the network learns to extract scene-invariant and brightness-specific information from a set of image pairs.
arXiv Detail & Related papers (2021-01-04T07:38:51Z)
- WDRN : A Wavelet Decomposed RelightNet for Image Relighting [6.731863717520707]
We propose a wavelet decomposed RelightNet called WDRN which is a novel encoder-decoder network employing wavelet based decomposition.
We also propose a novel loss function called gray loss that ensures efficient learning of gradient in illumination along different directions of the ground truth image.
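A gradient-matching "gray loss" of the kind described could look roughly like the sketch below (an assumed formulation for illustration; the paper's exact definition may differ): grayscale finite differences of prediction and target are compared along the two image axes.

```python
import numpy as np

def gray_gradient_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Illustrative 'gray loss': L1 distance between the illumination
    gradients of grayscale versions of prediction and target, taken along
    the horizontal and vertical directions."""
    def to_gray(img: np.ndarray) -> np.ndarray:
        # Standard luma weights for RGB-to-grayscale conversion
        return img @ np.array([0.299, 0.587, 0.114])

    gp, gt = to_gray(pred), to_gray(target)
    # horizontal and vertical finite differences
    dx = np.abs(np.diff(gp, axis=1) - np.diff(gt, axis=1)).mean()
    dy = np.abs(np.diff(gp, axis=0) - np.diff(gt, axis=0)).mean()
    return float(dx + dy)

a = np.random.rand(16, 16, 3)
print(gray_gradient_loss(a, a))  # 0.0 for identical images
```

Penalizing gradient mismatch rather than raw pixel differences pushes the network to reproduce the direction and falloff of the target illumination, not just its average brightness.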
arXiv Detail & Related papers (2020-09-14T18:23:10Z)
- Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.