Perceptual Multi-Exposure Fusion
- URL: http://arxiv.org/abs/2210.09604v2
- Date: Wed, 19 Oct 2022 06:58:48 GMT
- Title: Perceptual Multi-Exposure Fusion
- Authors: Xiaoning Liu
- Abstract summary: This paper presents a perceptual multi-exposure fusion method that ensures fine shadow/highlight details while incurring lower complexity than detail-enhanced methods.
We build a large-scale multi-exposure benchmark dataset suitable for static scenes, which contains 167 image sequences.
Experiments on the constructed dataset demonstrate that the proposed method outperforms eight existing state-of-the-art approaches both visually and in terms of MEF-SSIM.
- Score: 0.5076419064097732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the ever-increasing demand for high dynamic range (HDR) scene
shooting, multi-exposure image fusion (MEF) technology has proliferated. In
recent years, multi-scale exposure fusion approaches based on detail
enhancement have led the way in improving highlight and shadow details. Most
such methods, however, are too computationally expensive to be deployed on
mobile devices. This paper presents a perceptual multi-exposure fusion method
that ensures fine shadow/highlight details while incurring lower complexity
than detail-enhanced methods. Instead of using a detail-enhancement component,
we analyze the potential defects of three classical exposure measures and
improve two of them, namely adaptive well-exposedness (AWE) and the gradient
of color images (3-D gradient). AWE, designed in the YCbCr color space,
considers the differences between images of varying exposure. The 3-D gradient
is employed to extract fine details. We build a large-scale multi-exposure
benchmark dataset suitable for static scenes, containing 167 image sequences
in total. Experiments on the constructed dataset demonstrate that the proposed
method outperforms eight existing state-of-the-art approaches both visually
and in terms of MEF-SSIM. Moreover, our approach can further improve the
results of current image enhancement techniques, ensuring fine detail in
bright regions.
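The per-pixel weighted-fusion idea underlying such methods can be illustrated with a minimal NumPy sketch. The weight functions below (a Gaussian well-exposedness term centered at mid-gray and a per-channel gradient magnitude) are generic stand-ins for illustration only, not the paper's AWE or 3-D gradient formulations:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Gaussian weight favoring pixels near mid-gray (0.5).

    img: float array in [0, 1], shape (H, W) or (H, W, 3).
    Returns a per-pixel weight of shape (H, W).
    """
    w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
    return w.prod(axis=-1) if img.ndim == 3 else w

def gradient_magnitude(img):
    """Finite-difference gradient magnitude, summed over color channels."""
    gy, gx = np.gradient(img, axis=(0, 1))
    g = np.sqrt(gx ** 2 + gy ** 2)
    return g.sum(axis=-1) if img.ndim == 3 else g

def fuse(exposures, eps=1e-12):
    """Per-pixel convex combination of an exposure stack.

    exposures: list of float images in [0, 1], each (H, W, 3).
    """
    weights = np.stack([well_exposedness(e) * (gradient_magnitude(e) + eps)
                        for e in exposures])          # (N, H, W)
    weights /= weights.sum(axis=0, keepdims=True)     # normalize across stack
    stack = np.stack(exposures)                       # (N, H, W, 3)
    return (weights[..., None] * stack).sum(axis=0)   # (H, W, 3)
```

Because the normalized weights form a convex combination at every pixel, the fused value always lies between the darkest and brightest input at that pixel; the paper's method adds multi-scale blending and improved measures on top of this basic scheme.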
Related papers
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract the differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Latent Feature-Guided Diffusion Models for Shadow Removal [50.02857194218859]
We propose the use of diffusion models as they offer a promising approach to gradually refine the details of shadow regions during the diffusion process.
Our method improves this process by conditioning on a learned latent feature space that inherits the characteristics of shadow-free images.
We demonstrate the effectiveness of our approach, which outperforms the previous best method by 13% in terms of RMSE on the AISTD dataset.
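The RMSE figure quoted above is the standard root-mean-square error between the restored image and the shadow-free ground truth. A minimal sketch follows; note the AISTD evaluation protocol may compute it in a different color space or over shadow regions only, so this is only the generic form:

```python
import numpy as np

def rmse(reference, estimate):
    """Root-mean-square error between two images of the same shape."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```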
arXiv Detail & Related papers (2023-12-04T18:59:55Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach under diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
arXiv Detail & Related papers (2023-09-03T08:07:26Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
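PSNR, the metric behind the percentages above, is derived from the mean squared error and expressed in decibels. A minimal sketch for float images in [0, 1] follows; the peak value and averaging scheme used by that paper are assumptions here:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in decibels."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Since PSNR is already a logarithmic quantity, a percentage improvement like the 3.19% above is relative to the baseline's dB value, not to raw pixel error.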
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid through the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z)
- Variational Approach for Intensity Domain Multi-exposure Image Fusion [11.678822620192435]
We present a method to produce a well-exposed fused image that can be displayed directly on conventional display devices.
The ambition is to preserve details in poorly illuminated and brightly illuminated regions.
arXiv Detail & Related papers (2022-07-09T06:31:34Z)
- Bridge the Vision Gap from Field to Command: A Deep Learning Network Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: a Light Enhancement (LE), a Detail Refinement (DR), and a Feature Fusing (FF) module.
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
- Learning Multi-Scale Photo Exposure Correction [51.57836446833474]
Capturing photographs with wrong exposures remains a major source of errors in camera-based imaging.
We propose a coarse-to-fine deep neural network (DNN) model, trainable in an end-to-end manner, that addresses each sub-problem separately.
Our method achieves results on par with existing state-of-the-art methods on underexposed images.
arXiv Detail & Related papers (2020-03-25T19:33:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.