Variational Approach for Intensity Domain Multi-exposure Image Fusion
- URL: http://arxiv.org/abs/2207.04204v1
- Date: Sat, 9 Jul 2022 06:31:34 GMT
- Title: Variational Approach for Intensity Domain Multi-exposure Image Fusion
- Authors: Harbinder Singh, Dinesh Arora, Vinay Kumar
- Abstract summary: We present a method to produce a well-exposed fused image that can be displayed directly on conventional display devices.
The aim is to preserve details in both poorly illuminated and brightly illuminated regions.
- Score: 11.678822620192435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent innovations show that blending details captured by a single Low
Dynamic Range (LDR) sensor overcomes the limitations of standard digital
cameras in capturing details from high dynamic range scenes. We present a
method to produce a well-exposed fused image that can be displayed directly on
conventional display devices. The aim is to preserve details in both poorly
illuminated and brightly illuminated regions. The proposed approach does not
require true radiance reconstruction or tone-manipulation steps. This
objective is achieved by a local information measure that selects well-exposed
regions across the input exposures. In addition, Contrast Limited Adaptive
Histogram Equalization (CLAHE) is introduced to improve the uniformity of the
input multi-exposure images prior to fusion.
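
The abstract names two concrete ingredients: CLAHE pre-processing of each exposure and a local information measure that weights well-exposed regions during fusion. The sketch below illustrates that general recipe with OpenCV and NumPy; the luminance-variance weight, window size, and CLAHE settings are illustrative assumptions, not the authors' exact formulation.

```python
import cv2
import numpy as np

def fuse_exposures(images, clip_limit=2.0, tile=(8, 8), win=7):
    """Toy intensity-domain multi-exposure fusion: CLAHE on the luminance
    channel, then a local-variance 'information' weight per exposure.
    The weight definition, window size and CLAHE settings are illustrative
    assumptions, not the paper's exact formulation."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    weights, enhanced = [], []
    for img in images:                                   # aligned uint8 BGR exposures
        y, cr, cb = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb))
        y = clahe.apply(y)                               # improve uniformity before fusion
        img_eq = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
        enhanced.append(img_eq.astype(np.float32) / 255.0)

        # Local information proxy: luminance variance in a win x win neighbourhood.
        yf = y.astype(np.float32) / 255.0
        mean = cv2.blur(yf, (win, win))
        var = np.maximum(cv2.blur(yf * yf, (win, win)) - mean ** 2, 0.0)
        weights.append(var + 1e-6)                       # keep weights strictly positive

    w = np.stack(weights)                                # shape (N, H, W)
    w /= w.sum(axis=0, keepdims=True)                    # normalise across exposures
    fused = sum(wi[..., None] * im for wi, im in zip(w, enhanced))
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# fused = fuse_exposures([cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")])
```

Swapping the variance for another local information measure (e.g., local entropy) only changes the weight-map step; the per-pixel normalisation and weighted blend stay the same.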
Related papers
- Scene-Segmentation-Based Exposure Compensation for Tone Mapping of High Dynamic Range Scenes [8.179779837795754]
We propose a novel scene-segmentation-based exposure compensation method for multi-exposure image fusion (MEF) based tone mapping.
Our approach generates a stack of differently exposed images from an input HDR image and fuses them into a single image (a minimal sketch of this stack-then-fuse idea appears after this list).
arXiv Detail & Related papers (2024-10-21T04:50:02Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve that by introducing a convolutional mixture density network that generates distorted colors of the scene based on the illumination differences.
arXiv Detail & Related papers (2023-10-14T17:59:46Z)
- High Dynamic Range Imaging of Dynamic Scenes with Saturation Compensation but without Explicit Motion Compensation [20.911738532410766]
High dynamic range (HDR) imaging is a highly challenging task since a large amount of information is lost due to the limitations of camera sensors.
For HDR imaging, some methods capture multiple low dynamic range (LDR) images with varying exposures to aggregate more information.
Most existing methods focus on motion compensation to reduce the ghosting artifacts, but they still produce unsatisfying results.
arXiv Detail & Related papers (2023-08-22T02:44:03Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction (see the sketch after this list).
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid via the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z)
- Perceptual Multi-Exposure Fusion [0.5076419064097732]
This paper presents a perceptual multi-exposure fusion method that ensures fine shadow/highlight details but with lower complexity than detail-enhanced methods.
We build a large-scale multi-exposure benchmark dataset suitable for static scenes, which contains 167 image sequences.
Experiments on the constructed dataset demonstrate that the proposed method exceeds eight existing state-of-the-art approaches in terms of visual quality and MEF-SSIM values.
arXiv Detail & Related papers (2022-10-18T05:34:58Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- CuDi: Curve Distillation for Efficient and Controllable Exposure Adjustment [86.97592472794724]
We present Curve Distillation, CuDi, for efficient and controllable exposure adjustment without the requirement of paired or unpaired data.
Our method inherits the zero-reference learning and curve-based framework from an effective low-light image enhancement method, Zero-DCE.
We show that our method is appealing for its fast, robust, and flexible performance, outperforming state-of-the-art methods in real scenes.
arXiv Detail & Related papers (2022-07-28T17:53:46Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network which takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Attention-Guided Progressive Neural Texture Fusion for High Dynamic Range Image Restoration [48.02238732099032]
In this work, we propose an Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) HDR restoration model.
An efficient two-stream structure is proposed which separately focuses on texture feature transfer over saturated regions and multi-exposure tonal and texture feature fusion.
A progressive texture blending module is designed to blend the encoded two-stream features in a multi-scale and progressive manner.
arXiv Detail & Related papers (2021-07-13T16:07:00Z)
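
Two of the entries above describe steps concrete enough to sketch. For the scene-segmentation-based exposure compensation paper, the basic stack-then-fuse idea is to synthesise differently exposed LDR images from a linear HDR input and fuse them with a standard MEF operator; the gain-plus-gamma camera model, the chosen stops, and OpenCV's Mertens fusion below are stand-ins for illustration, not that paper's segmentation-based compensation.

```python
import cv2
import numpy as np

def hdr_to_exposure_stack(hdr, stops=(-2, 0, 2), gamma=2.2):
    """Simulate LDR exposures from a linear HDR image (float32, HxWx3, roughly [0, 1]).
    The simple gain + gamma camera model and the stop values are assumptions."""
    stack = []
    for s in stops:
        ldr = np.clip(hdr * (2.0 ** s), 0.0, 1.0) ** (1.0 / gamma)
        stack.append((ldr * 255.0).astype(np.uint8))
    return stack

# Hypothetical input file; any linear HDR image works.
hdr = cv2.imread("scene.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR).astype(np.float32)
stack = hdr_to_exposure_stack(hdr / (hdr.max() + 1e-6))          # crude normalisation
fused = cv2.createMergeMertens().process(stack)                  # float32, roughly [0, 1]
cv2.imwrite("tonemapped.png", np.clip(fused * 255.0, 0, 255).astype(np.uint8))
```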
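
For the illumination-aware gamma correction paper, the Taylor-series idea can be illustrated by expanding x^gamma around x = 1, which avoids a per-pixel exponential or logarithm entirely; the expansion point, truncation order, and interface below are assumptions for illustration, not that paper's implementation.

```python
import numpy as np

def gamma_taylor(x, gamma, order=6):
    """Approximate x**gamma for x in [0, 1] with a truncated Taylor (binomial)
    expansion around x = 1:

        x**g ~= sum_k  [g*(g-1)*...*(g-k+1) / k!] * (x - 1)**k

    No per-pixel exp/log is needed; accuracy degrades as x approaches 0."""
    d = x - 1.0
    approx = np.ones_like(x)                 # k = 0 term
    coeff, power = 1.0, np.ones_like(x)
    for k in range(1, order + 1):
        coeff *= (gamma - (k - 1)) / k       # generalised binomial coefficient
        power = power * d                    # (x - 1)**k
        approx = approx + coeff * power
    return approx

x = np.linspace(0.2, 1.0, 5)
print(gamma_taylor(x, 0.6))                  # truncated-series approximation
print(x ** 0.6)                              # exact gamma correction, for comparison
```

If gamma is instead predicted per pixel, the same truncated polynomial applies element-wise via broadcasting.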
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.