Efficient DWT-based fusion techniques using genetic algorithm for
optimal parameter estimation
- URL: http://arxiv.org/abs/2009.10777v1
- Date: Tue, 22 Sep 2020 19:28:57 GMT
- Title: Efficient DWT-based fusion techniques using genetic algorithm for
optimal parameter estimation
- Authors: S. Kavitha, K. K. Thyagharajan
- Abstract summary: This research work uses discrete wavelet transform (DWT) and undecimated discrete wavelet transform (UDWT)-based fusion techniques.
The proposed fusion model uses an efficient, modified GA in DWT and UDWT for optimal parameter estimation.
It is observed from our experiments that fusion using DWT and UDWT techniques with GA for optimal parameter estimation resulted in a better fused image.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image fusion plays a vital role in medical imaging. Image fusion aims to
integrate complementary as well as redundant information from multiple
modalities into a single fused image without distortion or loss of information.
In this research work, discrete wavelet transform (DWT)- and undecimated discrete
wavelet transform (UDWT)-based fusion techniques using a genetic algorithm
(GA) for optimal parameter (weight) estimation in the fusion process are implemented and
analyzed with multi-modality brain images. The lack of shift invariance when
performing image fusion using DWT is addressed using UDWT. The proposed fusion
model uses an efficient, modified GA in DWT and UDWT for optimal parameter
estimation, to improve the image quality and contrast. The complexity of the
basic GA (pixel level) has been reduced in the modified GA (feature level), by
limiting the search space. Our experiments show that fusion
using DWT and UDWT techniques with GA for optimal parameter estimation produces
a better fused image, in terms of retaining information and
contrast without error, under both human perception and evaluation with
objective metrics. The contributions of this research work are (1) reduced time
and space complexity in estimating the weight values using GA for fusion, (2) a
system that is scalable to input images of any size with similar time complexity,
owing to the feature-level GA implementation, and (3) identification of the source
image that contributes more to the fused image, from the estimated weight values.
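The weighted wavelet-domain fusion and GA-based weight search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a 1-level Haar transform on 1-D signals, a single global fusion weight `w` applied as `w*cA + (1-w)*cB` to both approximation and detail coefficients, and signal variance as a stand-in contrast fitness for the GA; the paper's modified GA operates at feature level with a restricted search space.

```python
# Minimal sketch: Haar-DWT fusion with a GA-estimated weight.
# All names and the fitness function are illustrative assumptions.
import random

def haar_dwt_1d(sig):
    # One Haar level: pairwise averages (approximation) and
    # pairwise differences (detail), unnormalized for simplicity.
    avg = [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig), 2)]
    dif = [(sig[i] - sig[i + 1]) / 2 for i in range(0, len(sig), 2)]
    return avg, dif

def haar_idwt_1d(avg, dif):
    # Exact inverse of the transform above.
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

def fuse(sig_a, sig_b, w):
    # Weighted fusion in the wavelet domain: coefficient = w*cA + (1-w)*cB.
    # (The paper may apply separate rules per sub-band; one weight is
    # used here for brevity.)
    aa, da = haar_dwt_1d(sig_a)
    ab, db = haar_dwt_1d(sig_b)
    avg = [w * x + (1 - w) * y for x, y in zip(aa, ab)]
    dif = [w * x + (1 - w) * y for x, y in zip(da, db)]
    return haar_idwt_1d(avg, dif)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def ga_estimate_weight(sig_a, sig_b, pop=20, gens=30, seed=0):
    # Toy GA over one scalar weight in [0, 1]; fitness is the variance
    # (a crude contrast proxy) of the fused signal.
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population,
                        key=lambda w: -variance(fuse(sig_a, sig_b, w)))
        parents = ranked[: pop // 2]            # truncation selection
        children = []
        while len(children) < pop - len(parents):
            w1, w2 = rng.sample(parents, 2)
            child = (w1 + w2) / 2 + rng.gauss(0, 0.05)  # crossover + mutation
            children.append(min(1.0, max(0.0, child)))
        population = parents + children
    return max(population, key=lambda w: variance(fuse(sig_a, sig_b, w)))
```

With `w = 1` the fused signal reduces to the first source and with `w = 0` to the second, so the GA search space interpolates between the two inputs; a 2-D image version would apply the same transform along rows and columns.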
Related papers
- Test-Time Dynamic Image Fusion [45.551196908423606]
In this paper, we give our solution from a generalization perspective.
We decompose the fused image into multiple components corresponding to its source data.
We prove that the key to reducing generalization error hinges on the negative correlation between the RD-based fusion weight and the uni-source reconstruction loss.
arXiv Detail & Related papers (2024-11-05T06:23:44Z)
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- A Lightweight GAN-Based Image Fusion Algorithm for Visible and Infrared Images [4.473596922028091]
This paper presents a lightweight image fusion algorithm specifically designed for merging visible light and infrared images.
The proposed method enhances the generator in a Generative Adversarial Network (GAN) by integrating the Convolutional Block Attention Module.
Experiments using the M3FD dataset demonstrate that the proposed algorithm outperforms similar image fusion methods in terms of fusion quality.
arXiv Detail & Related papers (2024-09-07T18:04:39Z)
- FuseFormer: A Transformer for Visual and Thermal Image Fusion [3.6064695344878093]
We propose a novel methodology for the image fusion problem that mitigates the limitations associated with using classical evaluation metrics as loss functions.
Our approach integrates a transformer-based multi-scale fusion strategy that adeptly addresses local and global context information.
Our proposed method, along with the novel loss function definition, demonstrates superior performance compared to other competitive fusion algorithms.
arXiv Detail & Related papers (2024-02-01T19:40:39Z)
- A Multi-scale Information Integration Framework for Infrared and Visible Image Fusion [50.84746752058516]
Infrared and visible image fusion aims at generating a fused image containing intensity and detail information of source images.
Existing methods mostly adopt a simple weight in the loss function to decide the information retention of each modality.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z)
- DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z)
- Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z)
- Coupled Feature Learning for Multimodal Medical Image Fusion [42.23662451234756]
Multimodal image fusion aims to combine relevant information from images acquired with different sensors.
In this paper, we propose a novel multimodal image fusion method based on coupled dictionary learning.
arXiv Detail & Related papers (2021-02-17T09:13:28Z)
- A Novel adaptive optimization of Dual-Tree Complex Wavelet Transform for Medical Image Fusion [0.0]
A multimodal image fusion algorithm based on dual-tree complex wavelet transform (DT-CWT) and adaptive particle swarm optimization (APSO) is proposed.
Experiment results show that the proposed method is remarkably better than the method based on particle swarm optimization.
arXiv Detail & Related papers (2020-07-22T15:34:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.