Retinex-MEF: Retinex-based Glare Effects Aware Unsupervised Multi-Exposure Image Fusion
- URL: http://arxiv.org/abs/2503.07235v1
- Date: Mon, 10 Mar 2025 12:19:03 GMT
- Title: Retinex-MEF: Retinex-based Glare Effects Aware Unsupervised Multi-Exposure Image Fusion
- Authors: Haowen Bai, Jiangshe Zhang, Zixiang Zhao, Lilun Deng, Yukun Cui, Shuang Xu,
- Abstract summary: Multi-exposure image fusion consolidates multiple low dynamic range images of the same scene into a singular high dynamic range image. We introduce an unsupervised and controllable method termed Retinex-MEF to better adapt Retinex theory for multi-exposure image fusion.
- Score: 15.733055563028039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-exposure image fusion consolidates multiple low dynamic range images of the same scene into a singular high dynamic range image. Retinex theory, which separates image illumination from scene reflectance, is naturally adopted to ensure consistent scene representation and effective information fusion across varied exposure levels. However, the conventional pixel-wise multiplication of illumination and reflectance inadequately models the glare effect induced by overexposure. To better adapt this theory for multi-exposure image fusion, we introduce an unsupervised and controllable method termed Retinex-MEF. Specifically, our method decomposes multi-exposure images into separate illumination components and a shared reflectance component, and effectively models the glare induced by overexposure. Employing a bidirectional loss constraint to learn the common reflectance component, our approach effectively mitigates the glare effect. Furthermore, we establish a controllable exposure fusion criterion, enabling global exposure adjustments while preserving contrast, thus overcoming the constraints of fixed-level fusion. A series of experiments across multiple datasets, including underexposure-overexposure fusion, exposure control fusion, and homogeneous extreme exposure fusion, demonstrate the effective decomposition and flexible fusion capability of our model.
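The decomposition the abstract describes can be sketched in a few lines of NumPy. This is a toy illustration only: the box blur, glare threshold, and gain below are hypothetical stand-ins for the paper's learned illumination, shared-reflectance, and glare components, not its actual model.

```python
import numpy as np

def box_blur(img, k=7):
    """Box blur as a crude stand-in for a learned illumination estimator."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(image, eps=1e-6):
    """Split one exposure into a smooth illumination map and a reflectance map.

    In Retinex-MEF the reflectance is learned to be shared across all
    exposures via a bidirectional loss; here each image is decomposed
    independently for illustration.
    """
    illumination = np.clip(box_blur(image), eps, None)
    reflectance = image / illumination
    return illumination, reflectance

def reconstruct_with_glare(illumination, reflectance,
                           glare_thresh=0.9, glare_gain=0.1):
    """Pixel-wise product plus an additive glare term in overexposed regions.

    The additive term is a hypothetical placeholder for the learned glare
    model; plain Retinex would use illumination * reflectance alone.
    """
    glare = glare_gain * np.clip(illumination - glare_thresh, 0.0, None)
    return illumination * reflectance + glare
```

For a well-exposed input (illumination below the glare threshold) the glare term vanishes and the reconstruction reduces to the standard Retinex product, recovering the input exactly.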
Related papers
- Unsupervised Learning Based Multi-Scale Exposure Fusion [9.152843503286796]
Unsupervised learning based multi-scale exposure fusion (ULMEF) is efficient for fusing differently exposed low dynamic range (LDR) images into a higher-quality LDR image of a high dynamic range scene.
In this paper, novel loss functions are proposed for the ULMEF and they are defined by using all the images to be fused and other differently exposed images from the same HDR scene.
arXiv Detail & Related papers (2024-09-26T13:29:40Z)
- Retinex-Diffusion: On Controlling Illumination Conditions in Diffusion Models via Retinex Theory [19.205929427075965]
We conceptualize the diffusion model as a black-box image render and strategically decompose its energy function in alignment with the image formation model.
It generates images with realistic illumination effects, including cast shadow, soft shadow, and inter-reflections.
arXiv Detail & Related papers (2024-07-29T03:15:07Z)
- Region-Aware Exposure Consistency Network for Mixed Exposure Correction [26.30138794484646]
We introduce an effective Region-aware Exposure Correction Network (RECNet) that can handle mixed exposure.
We develop a region-aware de-exposure module that effectively translates regional features of mixed exposure scenarios into an exposure-invariant feature space.
We propose an exposure contrastive regularization strategy under the constraints of intra-regional exposure consistency and inter-regional exposure continuity.
arXiv Detail & Related papers (2024-02-28T10:24:36Z)
- Decomposition-based and Interference Perception for Infrared and Visible Image Fusion in Complex Scenes [4.919706769234434]
We propose a decomposition-based and interference perception image fusion method.
We classify the pixels of visible image from the degree of scattering of light transmission, based on which we then separate the detail and energy information of the image.
This refined decomposition facilitates the proposed model in identifying more interfering pixels that are in complex scenes.
arXiv Detail & Related papers (2024-02-03T09:27:33Z)
- A Dual Domain Multi-exposure Image Fusion Network based on the Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via the Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
arXiv Detail & Related papers (2023-12-17T04:45:15Z)
- Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer for Exposure Correction [65.5397271106534]
It is difficult for a single neural network to handle all exposure problems.
In particular, convolutions hinder the ability to restore faithful color or details in extremely over-/under-exposed regions.
We propose a Macro-Micro-Hierarchical transformer, which consists of a macro attention to capture long-range dependencies, a micro attention to extract local features, and a hierarchical structure for coarse-to-fine correction.
arXiv Detail & Related papers (2023-09-02T09:07:36Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder the development of robust multi-exposure image fusion: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a 3.19% PSNR improvement in general scenarios and a 23.5% improvement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- Efficient joint noise removal and multi exposure fusion [0.0]
Multi-exposure fusion (MEF) is a technique for combining different images of the same scene acquired with different exposure settings into a single image.
We propose a novel multi-exposure image fusion chain that takes noise removal into account.
arXiv Detail & Related papers (2021-12-04T09:30:10Z)
- Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision [73.18554605744842]
Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy in the feature level.
arXiv Detail & Related papers (2020-08-16T06:07:00Z)
- Recurrent Exposure Generation for Low-Light Face Detection [113.25331155337759]
We propose a novel Recurrent Exposure Generation (REG) module and a Multi-Exposure Detection (MED) module.
REG progressively and efficiently produces intermediate images corresponding to various exposure settings.
Such pseudo-exposures are then fused by MED to detect faces across different lighting conditions.
arXiv Detail & Related papers (2020-07-21T17:30:51Z)
- Learning Multi-Scale Photo Exposure Correction [51.57836446833474]
Capturing photographs with wrong exposures remains a major source of errors in camera-based imaging.
We propose a coarse-to-fine deep neural network (DNN) model, trainable in an end-to-end manner, that addresses each sub-problem separately.
Our method achieves results on par with existing state-of-the-art methods on underexposed images.
arXiv Detail & Related papers (2020-03-25T19:33:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.