Multi-illuminant Color Constancy via Multi-scale Illuminant Estimation and Fusion
- URL: http://arxiv.org/abs/2502.02021v1
- Date: Tue, 04 Feb 2025 05:19:30 GMT
- Title: Multi-illuminant Color Constancy via Multi-scale Illuminant Estimation and Fusion
- Authors: Hang Luo, Rongwei Li, Jinxing Liang
- Abstract summary: Existing methods mainly employ deep learning to establish a direct mapping between an image and its illumination map.
We represent an illuminant map as the linear combination of components estimated from multi-scale images.
We propose a tri-branch convolutional network to estimate multi-grained illuminant distribution maps from multi-scale images.
- Abstract: Multi-illuminant color constancy methods aim to eliminate local color casts within an image through pixel-wise illuminant estimation. Existing methods mainly employ deep learning to establish a direct mapping between an image and its illumination map, which neglects the impact of image scales. To alleviate this problem, we represent an illuminant map as the linear combination of components estimated from multi-scale images. Furthermore, we propose a tri-branch convolutional network to estimate multi-grained illuminant distribution maps from multi-scale images. These multi-grained illuminant maps are merged adaptively by an attentional illuminant fusion module. Comprehensive experimental analysis and evaluation demonstrate the effectiveness of our method, which achieves state-of-the-art performance.
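The linear-combination formulation in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: in the actual method a tri-branch network predicts both the per-scale illuminant maps and the attention weights, whereas here they are supplied as inputs and the fusion is a pixel-wise softmax-weighted sum.

```python
import numpy as np

def fuse_multiscale_illuminants(illum_maps, attention_logits):
    """Fuse per-scale illuminant maps with pixel-wise attention weights.

    illum_maps:       list of (H, W, 3) illuminant estimates, one per scale
                      (coarser scales assumed already upsampled to full size).
    attention_logits: (S, H, W) unnormalized fusion weights, one per scale.
    Returns the fused (H, W, 3) map as a convex combination per pixel.
    """
    maps = np.stack(illum_maps, axis=0)                     # (S, H, W, 3)
    # Softmax over the scale axis -> weights sum to 1 at every pixel.
    logits = attention_logits - attention_logits.max(axis=0, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)           # (S, H, W)
    return (weights[..., None] * maps).sum(axis=0)          # (H, W, 3)

# Toy example: three scales, each predicting a constant illuminant.
H, W, S = 4, 4, 3
maps = [np.full((H, W, 3), c) for c in (0.2, 0.5, 0.8)]
logits = np.zeros((S, H, W))        # equal logits -> a simple average
fused = fuse_multiscale_illuminants(maps, logits)
```

With equal attention logits the fusion reduces to a plain average of the per-scale estimates; spatially varying logits would let different image regions draw on different scales.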
Related papers
- IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics.
Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs.
We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations.
arXiv Detail & Related papers (2024-12-16T18:52:56Z) - Adaptive Stereo Depth Estimation with Multi-Spectral Images Across All Lighting Conditions [58.88917836512819]
We propose a novel framework incorporating stereo depth estimation to enforce accurate geometric constraints.
To mitigate the effects of poor lighting on stereo matching, we introduce Degradation Masking.
Our method achieves state-of-the-art (SOTA) performance on the Multi-Spectral Stereo (MS2) dataset.
arXiv Detail & Related papers (2024-11-06T03:30:46Z) - Self-Supervised Multi-Scale Network for Blind Image Deblurring via Alternating Optimization [12.082424048578753]
We present a self-supervised multi-scale blind image deblurring method to jointly estimate the latent image and the blur kernel.
Thanks to the collaborative estimation across multiple scales, our method avoids the computationally intensive coarse-to-fine propagation and additional image deblurring processes.
arXiv Detail & Related papers (2024-09-02T07:08:17Z) - Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes [16.176896461798993]
We propose a novel multi-illuminant color constancy method, by learning pixel-wise illumination maps caused by multiple light sources.
The proposed method enforces smoothness within neighboring pixels, by regularizing the training with the total variation loss.
A bilateral filter is provisioned further to enhance the natural appearance of the estimated images, while preserving the edges.
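The total variation regularizer mentioned above can be sketched in a few lines. This is a generic anisotropic TV penalty under the stated assumption that the illumination map is a dense (H, W, 3) array; the paper's exact loss weighting and training setup are not reproduced here.

```python
import numpy as np

def total_variation_loss(illum_map):
    """Anisotropic total variation of an (H, W, 3) illumination map.

    Penalizes absolute differences between neighboring pixels, which
    encourages the smooth illumination fields described above.
    """
    dh = np.abs(np.diff(illum_map, axis=0)).sum()   # vertical neighbors
    dw = np.abs(np.diff(illum_map, axis=1)).sum()   # horizontal neighbors
    return (dh + dw) / illum_map.size

# A constant illumination map has zero variation; a noisy one does not.
flat = np.ones((8, 8, 3))
noisy = np.random.default_rng(0).random((8, 8, 3))
```

Minimizing this term alongside the data loss pushes the predicted illumination toward piecewise-smooth maps, while the bilateral filter applied afterwards smooths the estimate without blurring edges.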
arXiv Detail & Related papers (2024-02-05T11:42:19Z) - Dif-Fusion: Towards High Color Fidelity in Infrared and Visible Image Fusion with Diffusion Models [54.952979335638204]
We propose a novel method with diffusion models, termed as Dif-Fusion, to generate the distribution of the multi-channel input data.
Our method is more effective than other state-of-the-art image fusion methods, especially in color fidelity.
arXiv Detail & Related papers (2023-01-19T13:37:19Z) - Deep Uncalibrated Photometric Stereo via Inter-Intra Image Feature Fusion [17.686973510425172]
This paper presents a new method for deep uncalibrated photometric stereo.
It efficiently utilizes the inter-image representation to guide the normal estimation.
Our method produces significantly better results than the state-of-the-art methods on both synthetic and real data.
arXiv Detail & Related papers (2022-08-06T03:59:54Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z) - Learning to See Through Obstructions with Layered Decomposition [117.77024641706451]
We present a learning-based approach for removing unwanted obstructions from image sequences.
Our method leverages motion differences between the background and obstructing elements to recover both layers.
We show that the proposed approach, learned from synthetically generated data, generalizes well to real images.
arXiv Detail & Related papers (2020-08-11T17:59:31Z) - Fast and Accurate Optical Flow based Depth Map Estimation from Light Fields [22.116100469958436]
We propose a depth estimation method from light fields based on existing optical flow estimation methods.
The different disparity map estimates that we obtain are very consistent, which allows a fast and simple aggregation step to create a single disparity map.
Since the disparity map estimates are consistent, we can also create a depth map from each disparity estimate, and then aggregate the different depth maps in the 3D space to create a single dense depth map.
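The conversion and aggregation steps described above can be sketched as follows. The light-field-specific geometry is omitted and the function names are illustrative; this assumes the standard pinhole relation depth = focal_length * baseline / disparity and uses a pixel-wise median as one simple aggregation rule.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length, baseline, eps=1e-6):
    """Pinhole relation: depth = f * b / disparity (disparity clamped > 0)."""
    return focal_length * baseline / np.maximum(disparity, eps)

def aggregate_depths(depth_maps):
    """Merge consistent per-estimate depth maps with a pixel-wise median."""
    return np.median(np.stack(depth_maps, axis=0), axis=0)

# Hypothetical focal length (pixels) and baseline (meters), with three
# nearly consistent disparity estimates for a fronto-parallel surface.
f, b = 500.0, 0.1
disps = [np.full((4, 4), d) for d in (9.9, 10.0, 10.1)]
depth = aggregate_depths([disparity_to_depth(d, f, b) for d in disps])
```

Because the estimates agree closely, the median suppresses small per-estimate deviations; a real pipeline would aggregate in 3D space as the abstract describes rather than per pixel.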
arXiv Detail & Related papers (2020-08-11T12:53:31Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.