Efficient DWT-based fusion techniques using genetic algorithm for
optimal parameter estimation
- URL: http://arxiv.org/abs/2009.10777v1
- Date: Tue, 22 Sep 2020 19:28:57 GMT
- Title: Efficient DWT-based fusion techniques using genetic algorithm for
optimal parameter estimation
- Authors: S. Kavitha, K. K. Thyagharajan
- Abstract summary: This research work uses discrete wavelet transform (DWT) and undecimated discrete wavelet transform (UDWT)-based fusion techniques.
The proposed fusion model uses an efficient, modified GA in DWT and UDWT for optimal parameter estimation.
It is observed from our experiments that fusion using DWT and UDWT techniques with GA for optimal parameter estimation resulted in a better fused image.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image fusion plays a vital role in medical imaging. Image fusion aims to
integrate complementary as well as redundant information from multiple
modalities into a single fused image without distortion or loss of information.
In this research work, discrete wavelet transform (DWT)- and undecimated
discrete wavelet transform (UDWT)-based fusion techniques using a genetic
algorithm (GA) for optimal parameter (weight) estimation in the fusion process
are implemented and analyzed with multi-modality brain images. The lack of
shift invariance when performing image fusion using DWT is addressed using
UDWT. The proposed fusion
model uses an efficient, modified GA in DWT and UDWT for optimal parameter
estimation, to improve the image quality and contrast. The complexity of the
basic GA (pixel level) has been reduced in the modified GA (feature level), by
limiting the search space. Our experiments show that fusion using DWT and
UDWT with GA-based optimal parameter estimation yields a better fused image,
retaining information and contrast without error, as judged both by human
perception and by objective metrics. The contributions of this research work
are (1) reduced time and space complexity in estimating the weight values
using GA for fusion, (2) scalability to input images of any size with similar
time complexity, owing to the feature-level GA implementation, and (3)
identification of the source image that contributes most to the fused image,
from the estimated weight values.
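As an illustrative sketch only (not the authors' implementation), the approach above can be pictured as: transform each source signal into wavelet subbands, fuse subbands with a weighted average controlled by a weight w, invert the transform, and search for w with a small genetic algorithm. The Haar transform, the variance-based fitness (a stand-in for the paper's contrast/information metrics), and all parameter values below are assumptions for illustration.

```python
import random

def haar_1d(row):
    # single-level 1D Haar transform: (approximation, detail) subbands
    a = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    d = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return a, d

def ihaar_1d(a, d):
    # inverse single-level Haar transform (perfect reconstruction)
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse_1d(x, y, w=0.6):
    # per-subband weighted fusion: weight w on source x, (1 - w) on source y;
    # in the paper this weight is estimated by a modified GA, not fixed
    ax, dx = haar_1d(x)
    ay, dy = haar_1d(y)
    af = [w * p + (1 - w) * q for p, q in zip(ax, ay)]
    df = [w * p + (1 - w) * q for p, q in zip(dx, dy)]
    return ihaar_1d(af, df)

def fitness(fused):
    # toy fitness: variance of the fused signal, used here as a crude
    # contrast proxy (the paper evaluates with proper objective metrics)
    m = sum(fused) / len(fused)
    return sum((v - m) ** 2 for v in fused) / len(fused)

def ga_weight(x, y, pop=8, gens=20, seed=0):
    # minimal elitist GA over the scalar weight w in [0, 1]:
    # keep the top half, breed children by averaging two parents
    # (crossover) plus Gaussian noise (mutation)
    rng = random.Random(seed)
    ws = [rng.random() for _ in range(pop)]
    for _ in range(gens):
        ws.sort(key=lambda w: -fitness(fuse_1d(x, y, w)))
        parents = ws[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            c = (a + b) / 2 + rng.gauss(0, 0.05)
            children.append(min(1.0, max(0.0, c)))
        ws = parents + children
    return max(ws, key=lambda w: fitness(fuse_1d(x, y, w)))
```

Because the Haar transform reconstructs perfectly and the fusion is linear in the coefficients, w = 1 returns the first source unchanged and w = 0.5 returns the pixel-wise average; the GA merely searches this one-dimensional weight space, which is what keeps the paper's feature-level variant cheap.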
Related papers
- Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using Conditional Diffusion Model [2.507050016527729]
Tri-modal medical image fusion can provide a more comprehensive view of the disease's shape, location, and biological activity.
Due to the limitations of imaging equipment and considerations for patient safety, the quality of medical images is usually limited.
There is an urgent need for a technology that can both enhance image resolution and integrate multi-modal information.
arXiv Detail & Related papers (2024-04-26T12:13:41Z)
- FuseFormer: A Transformer for Visual and Thermal Image Fusion [3.6064695344878093]
We propose a novel methodology for the image fusion problem that mitigates the limitations associated with using classical evaluation metrics as loss functions.
Our approach integrates a transformer-based multi-scale fusion strategy that adeptly addresses local and global context information.
Our proposed method, along with the novel loss function definition, demonstrates superior performance compared to other competitive fusion algorithms.
arXiv Detail & Related papers (2024-02-01T19:40:39Z)
- A Multi-scale Information Integration Framework for Infrared and Visible Image Fusion [50.84746752058516]
Infrared and visible image fusion aims at generating a fused image containing intensity and detail information of source images.
Existing methods mostly adopt a simple weight in the loss function to decide the information retention of each modality.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z)
- DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z)
- Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z)
- Coupled Feature Learning for Multimodal Medical Image Fusion [42.23662451234756]
Multimodal image fusion aims to combine relevant information from images acquired with different sensors.
In this paper, we propose a novel multimodal image fusion method based on coupled dictionary learning.
arXiv Detail & Related papers (2021-02-17T09:13:28Z)
- WaveFuse: A Unified Deep Framework for Image Fusion with Discrete Wavelet Transform [8.164433158925593]
This is the first time a conventional image fusion method has been combined with deep learning.
The proposed algorithm exhibits better fusion performance in both subjective and objective evaluation.
arXiv Detail & Related papers (2020-07-28T10:30:47Z)
- A Novel adaptive optimization of Dual-Tree Complex Wavelet Transform for Medical Image Fusion [0.0]
A multimodal image fusion algorithm based on the dual-tree complex wavelet transform (DT-CWT) and adaptive particle swarm optimization (APSO) is proposed.
Experiment results show that the proposed method is remarkably better than the method based on particle swarm optimization.
arXiv Detail & Related papers (2020-07-22T15:34:01Z)
- Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.