Efficient joint noise removal and multi exposure fusion
- URL: http://arxiv.org/abs/2112.03701v1
- Date: Sat, 4 Dec 2021 09:30:10 GMT
- Title: Efficient joint noise removal and multi exposure fusion
- Authors: A. Buades, J.L. Lisani, O. Martorell
- Abstract summary: Multi-exposure fusion (MEF) is a technique for combining different images of the same scene acquired with different exposure settings into a single image.
We propose a novel multi-exposure image fusion chain that takes noise removal into account.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-exposure fusion (MEF) is a technique for combining different images of
the same scene acquired with different exposure settings into a single image.
All proposed MEF algorithms combine the set of images, selecting from each one
the best-exposed parts.
We propose a novel multi-exposure image fusion chain that takes noise removal
into account. The novel method takes advantage of DCT processing and the
multi-image nature of the MEF problem. We propose a joint fusion and denoising
strategy taking advantage of spatio-temporal patch selection and collaborative
3D thresholding. The overall strategy permits denoising and fusing the set of
images without the need to recover each denoised exposure image, leading to a
very efficient procedure.
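To make the core idea concrete, below is a minimal Python sketch of collaborative 3D DCT thresholding on a stack of exposures: co-located patches from every exposure form a spatio-temporal group, the group is denoised by hard thresholding in the 3D DCT domain, and the denoised group is collapsed into a single fused patch. This is an illustration under simplifying assumptions, not the authors' implementation: the paper selects similar patches spatio-temporally rather than only co-located ones, and the patch size, the 2.7*sigma threshold, and the plain averaging fusion here are generic stand-ins.

```python
import numpy as np
from scipy.fft import dctn, idctn

def joint_fuse_denoise(stack, patch=8, step=4, sigma=10.0):
    """Joint MEF + denoising sketch via collaborative 3D DCT thresholding.

    stack: (N, H, W) array of exposure images, assumed already aligned and
    normalised to a common radiometric scale (a simplification; the paper
    handles the differing exposures explicitly).
    """
    n, h, w = stack.shape
    acc = np.zeros((h, w))
    weight = np.zeros((h, w))
    thr = 2.7 * sigma  # BM3D-style hard-threshold heuristic (assumed value)

    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            # Spatio-temporal group: co-located patches from every exposure.
            group = stack[:, i:i + patch, j:j + patch]      # (N, p, p)
            coeffs = dctn(group, norm='ortho')              # 3D DCT
            mask = np.abs(coeffs) > thr                     # hard thresholding
            mask[0, 0, 0] = True                            # always keep DC
            denoised = idctn(coeffs * mask, norm='ortho')
            # Fuse: collapse the denoised group along the exposure axis.
            acc[i:i + patch, j:j + patch] += denoised.mean(axis=0)
            weight[i:i + patch, j:j + patch] += 1.0

    return acc / np.maximum(weight, 1e-8)
```

Note that each group is denoised and fused in a single transform pass, so no intermediate denoised exposure image is ever reconstructed; this is what makes the joint strategy efficient compared with denoising each exposure before fusing.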
Related papers
- Retinex-MEF: Retinex-based Glare Effects Aware Unsupervised Multi-Exposure Image Fusion [15.733055563028039]
Multi-exposure image fusion consolidates multiple low dynamic range images of the same scene into a singular high dynamic range image.
We introduce an unsupervised and controllable method termed Retinex-MEF to better adapt Retinex theory for multi-exposure image fusion.
arXiv Detail & Related papers (2025-03-10T12:19:03Z)
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- Unsupervised Learning Based Multi-Scale Exposure Fusion [9.152843503286796]
Unsupervised learning based multi-scale exposure fusion (ULMEF) is efficient for fusing differently exposed low dynamic range (LDR) images into a higher quality LDR image for a high dynamic range scene.
In this paper, novel loss functions are proposed for ULMEF; they are defined using all the images to be fused together with other differently exposed images from the same HDR scene.
arXiv Detail & Related papers (2024-09-26T13:29:40Z)
- A Dual Domain Multi-exposure Image Fusion Network based on the Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via the Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
arXiv Detail & Related papers (2023-12-17T04:45:15Z)
- Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
arXiv Detail & Related papers (2023-09-03T08:07:26Z) - Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a 3.19% improvement in PSNR for general scenarios and a 23.5% improvement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z) - Exposure Fusion for Hand-held Camera Inputs with Optical Flow and
PatchMatch [53.149395644547226]
We propose a hybrid synthesis method for fusing multi-exposure images taken by hand-held cameras.
Our method can deal with the resulting camera and scene motions while effectively maintaining the exposure information of each input.
Experiment results demonstrate the effectiveness and robustness of our method.
arXiv Detail & Related papers (2023-04-10T09:06:37Z) - Self-Supervised Super-Resolution for Multi-Exposure Push-Frame
Satellites [13.267489927661797]
The proposed method can handle the signal-dependent noise in the inputs, process sequences of any length, and be robust to inaccuracies in the exposure times.
It can be trained end-to-end with self-supervision, without requiring ground truth high resolution frames.
We evaluate the proposed method on synthetic and real data and show that it outperforms existing single-exposure approaches.
arXiv Detail & Related papers (2022-05-04T12:42:57Z) - Joint denoising and HDR for RAW video sequences [0.0]
We propose a patch-based method for simultaneous denoising and fusion of RAW multi-exposure images.
We show that the proposed method obtains state-of-the-art fusion results with real RAW data.
arXiv Detail & Related papers (2022-01-18T15:47:41Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image detail in the spatial domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Coupled Feature Learning for Multimodal Medical Image Fusion [42.23662451234756]
Multimodal image fusion aims to combine relevant information from images acquired with different sensors.
In this paper, we propose a novel multimodal image fusion method based on coupled dictionary learning.
arXiv Detail & Related papers (2021-02-17T09:13:28Z)