WaveFuse: A Unified Deep Framework for Image Fusion with Discrete
Wavelet Transform
- URL: http://arxiv.org/abs/2007.14110v4
- Date: Mon, 11 Oct 2021 05:37:27 GMT
- Title: WaveFuse: A Unified Deep Framework for Image Fusion with Discrete
Wavelet Transform
- Authors: Shaolei Liu, Manning Wang, Zhijian Song
- Abstract summary: To the authors' knowledge, this is the first time a conventional image fusion method has been combined with deep learning.
The proposed algorithm exhibits better fusion performance in both subjective and objective evaluation.
- Score: 8.164433158925593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an unsupervised image fusion architecture for multiple application
scenarios based on combining multi-scale discrete wavelet transform, applied through a
regional-energy rule, with deep learning. To the best of our knowledge, this is the
first time a conventional image fusion method has been combined with deep
learning. In the proposed method, the useful information in feature maps is
exploited adequately through the multi-scale discrete wavelet transform. Compared
with other state-of-the-art fusion methods, the proposed algorithm exhibits
better fusion performance in both subjective and objective evaluation.
Moreover, fusion performance comparable to training on the full COCO dataset can be
obtained by training on a much smaller subset of only a few hundred images chosen
randomly from COCO. Hence, training time is shortened substantially, improving the
model's practicality and training efficiency.
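As a concrete illustration of the wavelet-domain fusion rule described above, the following is a minimal sketch of fusing two registered 2-D arrays (images or single-channel feature maps) with a multi-level DWT and a simple regional-energy selection rule. It uses PyWavelets and SciPy; the wavelet, window size, and low-frequency averaging rule are placeholder choices, not the exact WaveFuse settings, and the paper applies such a rule to encoder feature maps inside its network rather than to raw images.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def regional_energy(band, win=3):
    # Local sum of squared coefficients over a win x win window,
    # used here as a simple "regional energy" activity measure.
    return uniform_filter(band ** 2, size=win)

def dwt_regional_energy_fuse(a, b, wavelet="db1", level=2, win=3):
    # a, b: registered 2-D arrays of the same shape.
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)

    # Approximation (low-frequency) band: plain averaging as a stand-in rule.
    fused = [(ca[0] + cb[0]) / 2.0]

    # Detail (high-frequency) bands: keep the coefficient whose
    # regional energy is larger.
    for details_a, details_b in zip(ca[1:], cb[1:]):
        fused_details = []
        for band_a, band_b in zip(details_a, details_b):
            mask = regional_energy(band_a, win) >= regional_energy(band_b, win)
            fused_details.append(np.where(mask, band_a, band_b))
        fused.append(tuple(fused_details))

    return pywt.waverec2(fused, wavelet)
```

In WaveFuse this kind of rule operates on deep features produced by a trained encoder, and the fused coefficients are passed to a decoder to reconstruct the fused image; the snippet only shows the wavelet-domain selection step.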
Related papers
- Self-Supervised Multi-Scale Network for Blind Image Deblurring via Alternating Optimization [12.082424048578753]
We present a self-supervised multi-scale blind image deblurring method to jointly estimate the latent image and the blur kernel.
Thanks to the collaborative estimation across multiple scales, our method avoids the computationally intensive coarse-to-fine propagation and additional image deblurring processes.
arXiv Detail & Related papers (2024-09-02T07:08:17Z) - Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
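The equivariance prior mentioned here can be written as a consistency constraint: fusing transformed inputs should give (approximately) the transformed fusion result. Below is a rough, generic sketch of such a consistency loss for an arbitrary fusion network; it is an illustration of the idea under assumed inputs, not the EMMA paper's actual training objective or architecture.

```python
import torch
import torch.nn.functional as F

def equivariance_consistency_loss(fuse_net, ir, vis, transform):
    # fuse_net: any fusion network taking (ir, vis) tensors and returning a fused tensor.
    # transform: a geometric transform T applied identically to all tensors; the loss
    # encourages fuse(T(ir), T(vis)) ~ T(fuse(ir, vis)).
    fused = fuse_net(ir, vis)
    return F.l1_loss(fuse_net(transform(ir), transform(vis)), transform(fused))

# Example transform: horizontal flip of (B, C, H, W) tensors.
hflip = lambda x: torch.flip(x, dims=[-1])
```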
arXiv Detail & Related papers (2023-05-19T05:50:24Z) - DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z) - Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z) - TransFuse: A Unified Transformer-based Image Fusion Framework using
Self-supervised Learning [5.849513679510834]
Image fusion integrates complementary information from multiple source images to enrich a single image.
Two-stage methods avoid the need for large amounts of task-specific training data by training an encoder-decoder network on large natural-image datasets.
We propose a destruction-reconstruction based self-supervised training scheme to encourage the network to learn task-specific features.
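A destruction-reconstruction scheme of this kind can be sketched as masking out random patches of a natural image and training the encoder-decoder to reconstruct the original. The snippet below is only a hedged illustration of that idea; the patch size, drop ratio, and masking strategy are placeholders, not values taken from the TransFuse paper.

```python
import torch
import torch.nn.functional as F

def destroy_patches(img, patch=16, drop_ratio=0.3):
    # img: (B, C, H, W) with H and W divisible by `patch`.
    # Randomly zeroes out square patches; the network is then trained to
    # reconstruct the original image from this "destroyed" input.
    b, _, h, w = img.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=img.device) > drop_ratio).float()
    mask = F.interpolate(keep, size=(h, w), mode="nearest")
    return img * mask, mask

# Self-supervised step (sketch): minimize reconstruction error on destroyed inputs.
# destroyed, _ = destroy_patches(batch)
# loss = F.l1_loss(decoder(encoder(destroyed)), batch)
```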
arXiv Detail & Related papers (2022-01-19T07:30:44Z) - Deep Reparametrization of Multi-Frame Super-Resolution and Denoising [167.42453826365434]
We propose a deep reparametrization of the maximum a posteriori formulation commonly employed in multi-frame image restoration tasks.
Our approach is derived by introducing a learned error metric and a latent representation of the target image.
We validate our approach through comprehensive experiments on burst denoising and burst super-resolution datasets.
arXiv Detail & Related papers (2021-08-18T17:57:02Z) - Image Fusion Transformer [75.71025138448287]
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information.
In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion.
We propose a novel Image Fusion Transformer (IFT) where we develop a transformer-based multi-scale fusion strategy.
arXiv Detail & Related papers (2021-07-19T16:42:49Z) - Operation-wise Attention Network for Tampering Localization Fusion [15.633461635276337]
In this work, we present a deep learning-based approach for image tampering localization fusion.
This approach is designed to combine the outcomes of multiple image forensics algorithms and provides a fused tampering localization map.
Our fusion framework includes a set of five individual tampering localization methods for splicing localization on JPEG images.
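For intuition, fusing the outputs of several forensics algorithms amounts to learning a mapping from a stack of K candidate localization maps to a single fused map. The toy module below only illustrates that input/output structure with a small convolutional head; it is not the operation-wise attention design of the cited paper.

```python
import torch
import torch.nn as nn

class LocalizationMapFusion(nn.Module):
    # Toy fusion head: stacks K single-channel tampering-localization maps
    # along the channel axis and predicts one fused probability map.
    def __init__(self, k=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, maps):  # maps: (B, K, H, W), values in [0, 1]
        return self.net(maps)
```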
arXiv Detail & Related papers (2021-05-12T08:50:59Z) - Shared Prior Learning of Energy-Based Models for Image Reconstruction [69.72364451042922]
We propose a novel learning-based framework for image reconstruction particularly designed for training without ground truth data.
In the absence of ground truth data, we change the loss functional to a patch-based Wasserstein functional.
In shared prior learning, both aforementioned optimal control problems are optimized simultaneously with shared learned parameters of the regularizer.
arXiv Detail & Related papers (2020-11-12T17:56:05Z) - End-to-End Learning for Simultaneously Generating Decision Map and
Multi-Focus Image Fusion Result [7.564462759345851]
The aim of multi-focus image fusion is to gather the focused regions of different images into a single all-in-focus fused image.
Most existing deep learning architectures fail to balance fusion quality with end-to-end implementation convenience.
We propose a cascade network to simultaneously generate decision map and fused result with an end-to-end training procedure.
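The composition step implied by a decision map can be illustrated directly: the fused image is a per-pixel blend of the two source images weighted by the predicted map. The sketch below shows only that composition; in the cited cascade network the decision map itself is produced by a learned sub-network trained end-to-end.

```python
import torch

def compose_from_decision_map(img_a, img_b, decision):
    # decision: per-pixel weights in [0, 1]; 1 selects img_a, 0 selects img_b.
    # For multi-focus fusion, decision ideally marks where img_a is in focus.
    return decision * img_a + (1.0 - decision) * img_b
```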
arXiv Detail & Related papers (2020-10-17T09:09:51Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)