PAS-MEF: Multi-exposure image fusion based on principal component
analysis, adaptive well-exposedness and saliency map
- URL: http://arxiv.org/abs/2105.11809v1
- Date: Tue, 25 May 2021 10:22:43 GMT
- Title: PAS-MEF: Multi-exposure image fusion based on principal component
analysis, adaptive well-exposedness and saliency map
- Authors: Diclehan Karakaya, Oguzhan Ulucan, Mehmet Turkan
- Abstract summary: With regular low dynamic range (LDR) capture/display devices, significant details may not be preserved in images due to the huge dynamic range of natural scenes.
This study proposes an efficient multi-exposure fusion (MEF) approach with a simple yet effective weight extraction method.
Experimental comparisons with existing techniques demonstrate that the proposed method produces very strong statistical and visual results.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High dynamic range (HDR) imaging makes it possible to capture
natural scenes much as they are perceived by human observers. With regular low
dynamic range (LDR) capture/display devices, significant details may not be
preserved in images due to the huge dynamic range of natural scenes. To
minimize the information loss and produce high quality HDR-like images for LDR
screens, this study proposes an efficient multi-exposure fusion (MEF) approach
with a simple yet effective weight extraction method relying on principal
component analysis, adaptive well-exposedness and saliency maps. These weight
maps are later refined through a guided filter and the fusion is carried out by
employing a pyramidal decomposition. Experimental comparisons with existing
techniques demonstrate that the proposed method produces very strong
statistical and visual results.
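The pipeline above can be sketched end to end. The following is a minimal, hedged reading rather than the authors' code: it assumes a Mertens-style well-exposedness term, OpenCV's spectral-residual saliency and guided filter (both from opencv-contrib-python), a per-image first-principal-component magnitude as a crude stand-in for the paper's PCA weight, and a standard Laplacian-pyramid blend.

```python
import cv2  # requires opencv-contrib-python for saliency and ximgproc
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Mertens-style weight: Gaussian around mid-gray, multiplied over channels.
    return np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)), axis=2)

def saliency(img):
    # Spectral-residual saliency map in [0, 1].
    det = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, sal = det.computeSaliency((img * 255).astype(np.uint8))
    return sal.astype(np.float32)

def pca_magnitude(img):
    # Magnitude of each pixel's projection onto the image's first RGB
    # principal component -- a crude structure proxy, assumed here.
    flat = img.reshape(-1, 3)
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return np.abs(flat @ vt[0]).reshape(img.shape[:2]).astype(np.float32)

def gauss_pyr(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def lap_pyr(img, levels):
    g = gauss_pyr(img, levels)
    return [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
            for i in range(levels - 1)] + [g[-1]]

def fuse(stack, levels=5, radius=8, eps=1e-3):
    # stack: list of aligned H x W x 3 float32 exposures in [0, 1].
    weights = []
    for img in stack:
        w = pca_magnitude(img) * well_exposedness(img) * saliency(img) + 1e-12
        # Edge-aware refinement of the weight map via the guided filter.
        w = cv2.ximgproc.guidedFilter(img, w.astype(np.float32), radius, eps)
        weights.append(np.clip(w, 1e-12, None))
    norm = np.sum(weights, axis=0)
    fused = None
    for img, w in zip(stack, weights):
        lp = lap_pyr(img, levels)            # image detail pyramid
        gp = gauss_pyr(w / norm, levels)     # smoothed, normalized weights
        terms = [l * g[..., None] for l, g in zip(lp, gp)]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]
    out = fused[-1]
    for lvl in reversed(fused[:-1]):         # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return np.clip(out, 0.0, 1.0)
```

Given a list of aligned exposures loaded as float32 images in [0, 1], fuse(stack) returns an HDR-like LDR result; the exact weight terms in PAS-MEF differ in detail from these stand-ins.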
Related papers
- Unsupervised Learning Based Multi-Scale Exposure Fusion [9.152843503286796]
Unsupervised learning based multi-scale exposure fusion (ULMEF) efficiently fuses differently exposed low dynamic range (LDR) images into a higher-quality LDR image of a high dynamic range scene.
In this paper, novel loss functions are proposed for ULMEF; they are defined using all the images to be fused together with other differently exposed images from the same HDR scene.
(arXiv, 2024-09-26)
- Semantic Aware Diffusion Inverse Tone Mapping [5.65968650127342]
Inverse tone mapping attempts to boost captured Standard Dynamic Range (SDR) images back to High Dynamic Range (HDR).
We present a novel inverse tone mapping approach for mapping SDR images to HDR that generates lost details in clipped regions through a semantic-aware diffusion based inpainting approach; a toy clipped-region mask is sketched after this entry.
(arXiv, 2024-05-24)
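The clipped regions that such an inpainting model must fill can be located with a simple saturation mask. A minimal sketch, assuming a hand-picked threshold (the paper's actual region selection is semantic-aware and more involved):

```python
import numpy as np

def clipped_mask(sdr, threshold=0.98):
    # Boolean H x W mask of pixels where any channel is (nearly)
    # saturated; `threshold` is an illustrative assumption.
    return (sdr >= threshold).any(axis=2)

# A diffusion inpainter would fill this mask with plausible highlight
# detail before merging the result back into the HDR estimate.
```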
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced into the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
(arXiv, 2024-04-01)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results over state-of-the-art self-supervised methods, and comparable performance to supervised ones.
(arXiv, 2023-10-03)
- Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid through the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
(arXiv, 2023-03-15)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images; a toy projection operator is sketched after this entry.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
(arXiv, 2022-11-22)
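A minimal sketch of such an HDR-to-LDR projection, assuming a simple exposure-scale, gamma tone curve, and clipping model (GlowGAN's actual camera model may differ):

```python
import numpy as np

def project_to_ldr(hdr, exposure, gamma=2.2):
    # `hdr` holds nonnegative linear radiance; scale by the sampled
    # exposure, clip, apply an assumed gamma response, and quantize.
    scaled = np.clip(hdr * exposure, 0.0, 1.0)
    tonemapped = scaled ** (1.0 / gamma)
    return np.round(tonemapped * 255).astype(np.uint8)

# The discriminator only ever sees such LDR projections under randomly
# sampled exposures, never ground-truth HDR.
```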
- Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present the first HS-D dataset, acquired with a benchtop HS-D imager built for this purpose.
(arXiv, 2020-09-01)
- Recurrent Exposure Generation for Low-Light Face Detection [113.25331155337759]
We propose a novel Recurrent Exposure Generation (REG) module and a Multi-Exposure Detection (MED) module.
REG progressively and efficiently produces intermediate images corresponding to various exposure settings; a toy pseudo-exposure generator is sketched after this entry.
Such pseudo-exposures are then fused by MED to detect faces across different lighting conditions.
(arXiv, 2020-07-21)
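Pseudo-exposures of this kind can be approximated with simple gain and gamma curves; the factors below are illustrative assumptions, whereas REG learns its enhancement recurrently:

```python
import numpy as np

def pseudo_exposures(low_light, gains=(1.0, 2.0, 4.0, 8.0), gamma=0.7):
    # `low_light` is H x W x 3 in [0, 1]; each gain mimics a longer
    # exposure, and the gamma lift is an assumed stand-in for the
    # learned recurrent enhancement.
    return [np.clip((low_light * g) ** gamma, 0.0, 1.0) for g in gains]

# MED's role: run the face detector on every variant and fuse the
# per-exposure detections, e.g. with non-maximum suppression.
```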
- Extreme Low-Light Imaging with Multi-granulation Cooperative Networks [18.438827277749525]
Low-light imaging is challenging since images may appear dark and noisy due to the low signal-to-noise ratio, complex image content, and the variety of shooting scenes under extreme low-light conditions.
Many methods have been proposed to enhance the imaging quality under extreme low-light conditions, but it remains difficult to obtain satisfactory results.
(arXiv, 2020-05-16)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization; a minimal sketch of this forward model follows this entry.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
(arXiv, 2020-04-02)
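A minimal sketch of that three-stage forward model, with an assumed gamma-style curve standing in for a measured camera response function:

```python
import numpy as np

def hdr_to_ldr(hdr, bits=8, gamma=2.2):
    # Forward HDR-to-LDR formation model described above.
    clipped = np.clip(hdr, 0.0, 1.0)          # (1) dynamic range clipping
    crf = clipped ** (1.0 / gamma)            # (2) assumed gamma-style CRF
    levels = 2 ** bits - 1
    return np.round(crf * levels) / levels    # (3) quantization

# Reversing the pipeline inverts these stages in order: dequantize,
# linearize through the CRF, then hallucinate content lost to clipping.
```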
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.