A Dual Domain Multi-exposure Image Fusion Network based on the
Spatial-Frequency Integration
- URL: http://arxiv.org/abs/2312.10604v1
- Date: Sun, 17 Dec 2023 04:45:15 GMT
- Title: A Dual Domain Multi-exposure Image Fusion Network based on the
Spatial-Frequency Integration
- Authors: Guang Yang, Jie Li, Xinbo Gao
- Abstract summary: Multi-exposure image fusion aims to generate a single high dynamic range image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via a Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
- Score: 57.14745782076976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-exposure image fusion aims to generate a single high dynamic
range image by integrating images with different exposures. Existing deep learning-based
multi-exposure image fusion methods primarily focus on spatial domain fusion,
neglecting the global modeling ability of the frequency domain. To effectively
leverage the global illumination modeling ability of the frequency domain, we
propose a novel perspective on multi-exposure image fusion via a
Spatial-Frequency Integration Framework, named MEF-SFI. Initially, we revisit
the properties of the Fourier transform on 2D images and verify the
feasibility of multi-exposure image fusion in the frequency domain, where the
amplitude and phase components guide the integration of illumination
information. Subsequently, we present a deep Fourier-based multi-exposure
image fusion framework, which consists of a spatial path and a frequency path
for local and global modeling, respectively. Specifically, we
introduce a Spatial-Frequency Fusion Block to facilitate efficient interaction
between dual domains and capture complementary information from input images
with different exposures. Finally, we employ a dual-domain loss function to
ensure the retention of complementary information in both the spatial and
frequency domains. Extensive experiments on the PQA-MEF dataset demonstrate
that our method achieves visually appealing fusion results compared with
state-of-the-art multi-exposure image fusion approaches. Our code is available
at https://github.com/SSyangguang/MEF-freq.
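
As a concrete illustration of the frequency-domain feasibility argument above, the following sketch swaps the amplitude spectrum of one exposure with the phase spectrum of another. It assumes PyTorch and is illustrative only, not code from the MEF-SFI repository: if the amplitude encodes global illumination, the recombined image should show the phase donor's structures under the amplitude donor's exposure level.

```python
import torch

def swap_amplitude(img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
    # img_a, img_b: (C, H, W) tensors in [0, 1] with different exposures.
    fft_a = torch.fft.fft2(img_a)   # complex spectrum of exposure A
    fft_b = torch.fft.fft2(img_b)   # complex spectrum of exposure B
    amp_a = fft_a.abs()             # amplitude: illumination-related
    pha_b = fft_b.angle()           # phase: structure-related
    # Recombine A's amplitude with B's phase and invert the transform.
    return torch.fft.ifft2(torch.polar(amp_a, pha_b)).real.clamp(0.0, 1.0)
```

Comparing swap_amplitude(over, under) with swap_amplitude(under, over) makes the division of labor between the two spectra visible.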
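
The dual-path design can likewise be sketched as a single building block: a spatial branch of local 3x3 convolutions runs in parallel with a frequency branch that convolves the amplitude and phase spectra (where every spectral position depends on all pixels, giving a global receptive field), and the two are merged by a 1x1 convolution. This is a minimal sketch of the idea; the class name, layer sizes, and wiring are assumptions, not the paper's exact Spatial-Frequency Fusion Block.

```python
import torch
import torch.nn as nn

class SpatialFrequencyBlock(nn.Module):
    """Minimal dual-path sketch: a spatial branch for local modeling and a
    frequency branch for global modeling, merged by a 1x1 convolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Sequential(          # local path: 3x3 convolutions
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.freq = nn.Sequential(             # global path: acts on spectra
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, 1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spa = self.spatial(x)
        # Each spectral position sees the whole image, so convolving the
        # stacked amplitude/phase maps performs global modeling.
        fft = torch.fft.rfft2(x, norm="ortho")
        amp, pha = self.freq(
            torch.cat([fft.abs(), fft.angle()], dim=1)
        ).chunk(2, dim=1)
        glo = torch.fft.irfft2(torch.polar(amp, pha),
                               s=x.shape[-2:], norm="ortho")
        return x + self.fuse(torch.cat([spa, glo], dim=1))
```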
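
Finally, a dual-domain objective can be written as a spatial reconstruction term plus amplitude and phase terms in the frequency domain. The form below and the lambda_freq weighting are illustrative assumptions; the paper's published loss may differ.

```python
import torch
import torch.nn.functional as F

def dual_domain_loss(fused: torch.Tensor, target: torch.Tensor,
                     lambda_freq: float = 0.1) -> torch.Tensor:
    # Spatial term: pixel-wise L1 reconstruction.
    spatial = F.l1_loss(fused, target)
    # Frequency terms: L1 on the amplitude and phase spectra.
    fft_f, fft_t = torch.fft.fft2(fused), torch.fft.fft2(target)
    amp = F.l1_loss(fft_f.abs(), fft_t.abs())
    # Note: raw L1 on phase ignores the 2*pi wrap-around; acceptable for
    # a sketch, but a production loss would handle it.
    pha = F.l1_loss(fft_f.angle(), fft_t.angle())
    return spatial + lambda_freq * (amp + pha)
```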
Related papers
- SFDFusion: An Efficient Spatial-Frequency Domain Fusion Network for Infrared and Visible Image Fusion [11.46957526079837]
Infrared and visible image fusion aims to generate fused images with prominent targets and rich texture details.
This paper proposes an efficient Spatial-Frequency Domain Fusion network for infrared and visible image fusion.
Our method produces fused images with significant advantages in various fusion metrics and visual effects.
arXiv Detail & Related papers (2024-10-30T09:17:23Z)
- Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement [49.15531684596958]
We propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement.
The first phase learns amplitude information to restore image brightness, and the second phase learns phase information to refine details.
We construct two low-light remote sensing datasets to address the current lack of datasets for low-light remote sensing image enhancement.
arXiv Detail & Related papers (2024-04-26T13:21:31Z)
- SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening [14.293042131263924]
We introduce a spatial-spectral integrated diffusion model for the remote sensing pansharpening task, called SSDiff.
SSDiff treats the pansharpening process as the fusion of spatial and spectral components from the perspective of subspace decomposition.
arXiv Detail & Related papers (2024-04-17T16:30:56Z)
- Bridging the Gap between Multi-focus and Multi-modal: A Focused Integration Framework for Multi-modal Image Fusion [5.417493475406649]
Multi-modal image fusion (MMIF) integrates valuable information from images of different modalities into a single fused image.
This paper proposes an MMIF framework for joint focused integration and modality information extraction.
The proposed algorithm can surpass the state-of-the-art methods in visual perception and quantitative evaluation.
arXiv Detail & Related papers (2023-11-03T12:58:39Z)
- Unified Frequency-Assisted Transformer Framework for Detecting and Grounding Multi-Modal Manipulation [109.1912721224697]
We present the Unified Frequency-Assisted transFormer framework, named UFAFormer, to address the DGM4 problem.
By leveraging the discrete wavelet transform, we decompose images into several frequency sub-bands, capturing rich face forgery artifacts.
Our proposed frequency encoder, incorporating intra-band and inter-band self-attentions, explicitly aggregates forgery features within and across diverse sub-bands.
arXiv Detail & Related papers (2023-09-18T11:06:42Z)
- Mutual Information-driven Triple Interaction Network for Efficient Image Dehazing [54.168567276280505]
We propose a novel Mutual Information-driven Triple interaction Network (MITNet) for image dehazing.
The first stage, named amplitude-guided haze removal, aims to recover the amplitude spectrum of the hazy images for haze removal.
The second stage, named phase-guided structure refinement, is devoted to learning the transformation and refinement of the phase spectrum.
arXiv Detail & Related papers (2023-08-14T08:23:58Z)
- Multi-modal Gated Mixture of Local-to-Global Experts for Dynamic Image Fusion [59.19469551774703]
Infrared and visible image fusion aims to integrate comprehensive information from multiple sources to achieve superior performance on various practical tasks.
We propose a dynamic image fusion framework with a multi-modal gated mixture of local-to-global experts.
Our model consists of a Mixture of Local Experts (MoLE) and a Mixture of Global Experts (MoGE) guided by a multi-modal gate.
arXiv Detail & Related papers (2023-02-02T20:06:58Z)
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)