PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition
- URL: http://arxiv.org/abs/2203.16670v1
- Date: Wed, 30 Mar 2022 20:46:15 GMT
- Title: PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition
- Authors: Partha Das, Sezer Karaoglu, Theo Gevers
- Abstract summary: Intrinsic image decomposition is the process of recovering the image formation components (reflectance and shading) from an image.
In this paper, an end-to-end edge-driven hybrid CNN approach is proposed for intrinsic image decomposition.
- Score: 17.008724191799313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intrinsic image decomposition is the process of recovering the image
formation components (reflectance and shading) from an image. Previous methods
employ either explicit priors to constrain the problem, or implicit constraints
formulated through their losses (deep learning). Both kinds of methods can be
negatively influenced by strong illumination conditions, causing
shading-reflectance leakage.
Therefore, in this paper, an end-to-end edge-driven hybrid CNN approach is
proposed for intrinsic image decomposition. Edges correspond to
illumination-invariant gradients. To handle hard negative illumination
transitions, a hierarchical approach is taken, including global and local
refinement layers. We make use of attention layers to further strengthen the
learning process.
An extensive ablation study and large-scale experiments are conducted, showing
that it is beneficial for edge-driven hybrid IID networks to make use of
illumination-invariant descriptors, and that separating global and local cues
helps improve the performance of the network. Finally, it is shown that the
proposed method obtains state-of-the-art performance and generalises well to
real-world images. The project page with pretrained models, fine-tuned models,
and network code can be found at https://ivi.fnwi.uva.nl/cv/pienet/.
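
To make the formation model concrete: IID assumes an image is the pixel-wise product of reflectance (albedo) and shading, and chromaticity is invariant to achromatic shading, which is why photometric-invariant edges isolate reflectance transitions. Below is a minimal, self-contained sketch of that principle; the synthetic two-albedo image and the simple finite-difference edges are assumptions for illustration, not the paper's network.

```python
import numpy as np

# Image formation: I = R * S, with piecewise-constant reflectance R
# and a smooth, achromatic shading field S.
h, w = 64, 64
R = np.ones((h, w, 3)) * [0.8, 0.2, 0.2]     # left albedo
R[:, w // 2:] = [0.2, 0.6, 0.9]              # right albedo
S = np.linspace(0.2, 1.0, w)[None, :, None]  # horizontal shading ramp
I = R * S

# Chromaticity I / sum(I) cancels the achromatic shading exactly,
# so its gradients fire only on reflectance (albedo) edges ...
chroma = I / (I.sum(axis=2, keepdims=True) + 1e-8)
reflectance_edges = np.abs(np.diff(chroma, axis=1)).sum(axis=2)

# ... whereas plain intensity gradients mix shading and reflectance.
intensity = I.mean(axis=2)
mixed_edges = np.abs(np.diff(intensity, axis=1))

print("chromaticity edges fire only at column:", reflectance_edges[0].argmax())
print("fraction of columns with intensity edge response:",
      (mixed_edges[0] > 1e-4).mean())
```

PIE-Net feeds such invariant edge cues into its hybrid CNN; this snippet only demonstrates why those cues are robust to illumination.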
Related papers
- ClassLIE: Structure- and Illumination-Adaptive Classification for Low-Light Image Enhancement [17.51201873607536]
This paper proposes a novel framework, called ClassLIE, that combines the strengths of CNNs and transformers.
It classifies and adaptively learns structural and illumination information from low-light images in both a holistic and a regional manner.
Experiments on five benchmark datasets consistently show our ClassLIE achieves new state-of-the-art performance.
arXiv Detail & Related papers (2023-12-20T18:43:20Z)
- Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z)
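
Both this entry and the algorithm-unrolling paper further down build on the Retinex model, which factors an image into reflectance and illumination and enhances by correcting the illumination map. A minimal, learning-free sketch, where a max-RGB illumination estimate and a fixed gamma curve stand in for the learned stages of ORF/Retinexformer:

```python
import numpy as np

def retinex_enhance(img, gamma=0.45, eps=1e-6):
    """Sketch of Retinex-based enhancement: I = R * L.
    Estimate illumination L, brighten it, and recompose.
    img: float RGB array in [0, 1]."""
    # Max-RGB illumination estimate (a common classical choice;
    # ORF/Retinexformer learn this stage instead).
    L = img.max(axis=2, keepdims=True)
    R = img / np.maximum(L, eps)        # reflectance estimate
    L_up = np.power(L, gamma)           # "light up" the illumination map
    return np.clip(R * L_up, 0.0, 1.0)  # enhanced image

# Example: a dark random image gets brightened.
dark = np.random.rand(32, 32, 3) * 0.2
print(dark.mean(), "->", retinex_enhance(dark).mean())
```

The one-stage idea in ORF is to couple this light-up step with a learned restoration of the remaining corruption, rather than relying on a hand-tuned pipeline like the one above.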
- Parallax-Tolerant Unsupervised Deep Image Stitching [57.76737888499145]
We propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique.
First, we propose a robust and flexible warp to model the image registration from global homography to local thin-plate spline motion.
To further eliminate parallax artifacts, we propose to composite the stitched image seamlessly by learning seam-driven composition masks in an unsupervised manner.
arXiv Detail & Related papers (2023-02-16T10:40:55Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model the data and regularization terms of the restoration objective.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
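
The "shrinkage functions" in this entry generalize the thresholding step that appears when a deconvolution objective is split into data and prior sub-problems. A rough half-quadratic-splitting sketch, with hand-crafted soft-thresholding (an l1 prior on the image) standing in for the paper's learned discriminative shrinkage:

```python
import numpy as np

def soft_shrink(v, tau):
    """Soft-thresholding: the classical shrinkage function; the paper
    learns discriminative replacements for steps like this."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def hqs_deconvolve(y, k, lam=0.005, rho=0.1, iters=50):
    """Non-blind deconvolution of y = k (*) x by half-quadratic splitting
    with an illustrative l1 prior on x (the paper's priors are learned)."""
    kpad = np.zeros_like(y)
    kh, kw = k.shape
    kpad[:kh, :kw] = k
    # Centre the kernel at the origin for FFT-based (circular) convolution.
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K, Y = np.fft.fft2(kpad), np.fft.fft2(y)
    x = y.copy()
    for _ in range(iters):
        z = soft_shrink(x, lam / rho)  # prior (shrinkage) sub-problem
        # Data sub-problem: closed-form quadratic solve in the Fourier domain.
        X = (np.conj(K) * Y + rho * np.fft.fft2(z)) / (np.abs(K) ** 2 + rho)
        x = np.real(np.fft.ifft2(X))
    return x
```

Learning the shrinkage step (and its hyper-parameters) per iteration is what turns this classical loop into the paper's deep network.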
- Physically Inspired Dense Fusion Networks for Relighting [45.66699760138863]
We propose a model which enriches neural networks with physical insight.
Our method generates the relighted image with new illumination settings via two different strategies.
We show that our proposal can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
arXiv Detail & Related papers (2021-05-05T17:33:45Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the state of the art by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Deep Gradient Projection Networks for Pan-sharpening [20.929492740317915]
This paper develops a model-based deep pan-sharpening approach.
By stacking the two blocks, a novel network, called gradient projection based pan-sharpening neural network, is constructed.
The experimental results on different kinds of satellite datasets demonstrate that the new network outperforms state-of-the-art methods both visually and quantitatively.
arXiv Detail & Related papers (2021-03-08T07:51:58Z)
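
Model-based pan-sharpening like the entry above typically unrolls iterations that minimize a data term (agreement with the low-resolution multispectral band) plus a spatial term (agreement with the panchromatic detail). The sketch below runs plain gradient descent on such an energy, with a hand-crafted box-blur high-pass standing in for the paper's learned gradient-projection blocks; the operators and weights are assumptions for illustration:

```python
import numpy as np

def box_down(a, s=4):
    """s-by-s box-average downsampling (shapes assumed divisible by s)."""
    h, w = a.shape
    return a.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def nn_up(a, s=4):
    """Nearest-neighbour upsampling."""
    return np.repeat(np.repeat(a, s, axis=0), s, axis=1)

def highpass(a, s=4):
    """Detail layer: image minus its box-blurred self (self-adjoint)."""
    return a - nn_up(box_down(a, s), s)

def pansharpen_band(ms_band, pan, s=4, lam=1.0, eta=0.2, iters=300):
    """Gradient descent on ||D(x) - ms||^2 + lam * ||H(x) - H(pan)||^2,
    where D is downsampling and H a high-pass; unrolled pan-sharpening
    networks learn blocks that play the role of these fixed steps."""
    hp_pan = highpass(pan, s)
    x = nn_up(ms_band, s)  # initialise from the low-resolution band
    for _ in range(iters):
        g_data = nn_up(box_down(x, s) - ms_band, s) / (s * s)  # D^T(Dx - ms)
        g_detail = highpass(highpass(x, s) - hp_pan, s)        # H^T(Hx - Hp)
        x -= eta * (g_data + lam * g_detail)
    return x
```

Unrolled networks replace the fixed step size and hand-crafted operators with learned, per-iteration blocks.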
- LEUGAN: Low-Light Image Enhancement by Unsupervised Generative Attentional Networks [4.584570928928926]
We propose an unsupervised generative network with attention guidance to handle the low-light image enhancement task.
Specifically, our network contains two parts: an edge auxiliary module that restores sharper edges and an attention guidance module that recovers more realistic colors.
Experiments validate that our proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-12-24T16:49:19Z)
- Physics-based Shading Reconstruction for Intrinsic Image Decomposition [20.44458250060927]
We propose albedo and shading gradient descriptors which are derived from physics-based models.
An initial sparse shading map is calculated directly from the corresponding RGB image gradients in a learning-free unsupervised manner.
An optimization method is proposed to reconstruct the full dense shading map.
We are the first to directly address the texture and intensity ambiguity problems of shading estimation.
arXiv Detail & Related papers (2020-09-03T09:30:17Z)
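
The gradient split that this last entry starts from can be sketched directly: intensity transitions where chromaticity stays constant are attributed to shading, and the rest to albedo. A minimal version, where the threshold value and horizontal-only differences are simplifications for illustration:

```python
import numpy as np

def split_gradients(img, chroma_thresh=0.02, eps=1e-8):
    """Split horizontal image gradients into shading vs. albedo cues:
    where chromaticity barely changes, an intensity change is attributed
    to shading; elsewhere, to albedo. img: float RGB array in [0, 1]."""
    intensity = img.sum(axis=2)
    chroma = img / (img.sum(axis=2, keepdims=True) + eps)

    d_int = np.diff(intensity, axis=1)                   # intensity change
    d_chr = np.abs(np.diff(chroma, axis=1)).sum(axis=2)  # chromaticity change

    is_shading = d_chr < chroma_thresh   # achromatic transition => shading
    sparse_shading = np.where(is_shading, d_int, 0.0)
    sparse_albedo = np.where(~is_shading, d_int, 0.0)
    return sparse_shading, sparse_albedo
```

Recovering the dense shading map from the sparse shading gradients, e.g. via a Poisson-style integration, is the optimization step the paper addresses.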