Physically Inspired Dense Fusion Networks for Relighting
- URL: http://arxiv.org/abs/2105.02209v1
- Date: Wed, 5 May 2021 17:33:45 GMT
- Title: Physically Inspired Dense Fusion Networks for Relighting
- Authors: Amirsaeed Yazdani, Tiantong Guo, Vishal Monga
- Abstract summary: We propose a model which enriches neural networks with physical insight.
Our method generates the relighted image with new illumination settings via two different strategies.
We show that our proposal can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
- Score: 45.66699760138863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image relighting has emerged as a problem of significant research interest
inspired by augmented reality applications. Physics-based traditional methods,
as well as black box deep learning models, have been developed. The existing
deep networks have exploited training to achieve a new state of the art;
however, they may perform poorly when training is limited or does not represent
problem phenomenology, such as the addition or removal of dense shadows. We
propose a model which enriches neural networks with physical insight. More
precisely, our method generates the relighted image with new illumination
settings via two different strategies and subsequently fuses them using a
weight map (w). In the first strategy, our model predicts the material
reflectance parameters (albedo) and illumination/geometry parameters of the
scene (shading) for the relit image (we refer to this strategy as intrinsic
image decomposition (IID)). The second strategy is solely based on the black
box approach, where the model optimizes its weights based on the ground-truth
images and the loss terms in the training stage and generates the relit output
directly (we refer to this strategy as direct). While our proposed method
applies to both one-to-one and any-to-any relighting problems, for each case we
introduce problem-specific components that enrich the model performance: 1) For
one-to-one relighting we incorporate normal vectors of the surfaces in the
scene to adjust gloss and shadows accordingly in the image. 2) For any-to-any
relighting, we propose an additional multiscale block to the architecture to
enhance feature extraction. Experimental results on the VIDIT 2020 and VIDIT
2021 datasets (used in the NTIRE 2021 relighting challenge) reveal that our
proposal outperforms many state-of-the-art methods in terms of well-known
fidelity metrics and perceptual loss.
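The fusion idea described in the abstract (an IID branch predicting albedo and shading, a direct branch regressing the relit image, and a learned per-pixel weight map w blending the two) can be illustrated with a minimal sketch. The branches below are placeholder convolutions, not the authors' dense networks; names such as DenseFusionRelighter and weight_head are illustrative assumptions, and the target-illumination conditioning, surface normals, and multiscale block are omitted for brevity.

```python
# Minimal sketch of the dual-branch fusion idea; placeholder layers only,
# not the authors' architecture.
import torch
import torch.nn as nn

class DenseFusionRelighter(nn.Module):
    def __init__(self, channels=3, features=32):
        super().__init__()
        # IID branch: predicts albedo and shading of the relit scene.
        self.albedo_head = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, channels, 3, padding=1), nn.Sigmoid())
        self.shading_head = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, channels, 3, padding=1), nn.Sigmoid())
        # Direct branch: regresses the relit image end to end.
        self.direct_head = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, channels, 3, padding=1), nn.Sigmoid())
        # Weight-map head: per-pixel fusion weights w in [0, 1].
        self.weight_head = nn.Sequential(
            nn.Conv2d(2 * channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        relit_iid = self.albedo_head(x) * self.shading_head(x)  # I = A * S
        relit_direct = self.direct_head(x)
        w = self.weight_head(torch.cat([relit_iid, relit_direct], dim=1))
        # Fuse the two estimates with the learned weight map.
        return w * relit_iid + (1.0 - w) * relit_direct

# Example: relight a dummy 256x256 image.
model = DenseFusionRelighter()
out = model(torch.rand(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Per the abstract, the one-to-one variant additionally feeds estimated surface normals to adjust gloss and shadows, and the any-to-any variant adds a multiscale block for feature extraction; both refinements are left out of this sketch.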
Related papers
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Geometry-aware Single-image Full-body Human Relighting [37.381122678376805]
Single-image human relighting aims to relight a target human under new lighting conditions by decomposing the input image into albedo, shape and lighting.
Previous methods suffer from both the entanglement between albedo and lighting and the lack of hard shadows.
Our framework is able to generate photo-realistic high-frequency shadows such as cast shadows under challenging lighting conditions.
arXiv Detail & Related papers (2022-07-11T10:21:02Z)
- PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition [17.008724191799313]
Intrinsic image decomposition is the process of recovering the image formation components (reflectance and shading) from an image.
In this paper, an end-to-end edge-driven hybrid CNN approach is proposed for intrinsic image decomposition.
arXiv Detail & Related papers (2022-03-30T20:46:15Z)
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
- An Optical physics inspired CNN approach for intrinsic image decomposition [0.0]
Intrinsic image decomposition is the open problem of recovering the constituent components of an image.
We propose a neural network architecture capable of this decomposition using physics-based parameters derived from the image.
arXiv Detail & Related papers (2021-05-21T00:54:01Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a projected version of the Generalized Stein Unbiased Risk Estimator (GSURE) and a CNN parameterization of the latent image.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.