A New Multimodal Medical Image Fusion based on Laplacian Autoencoder
with Channel Attention
- URL: http://arxiv.org/abs/2310.11896v1
- Date: Wed, 18 Oct 2023 11:29:53 GMT
- Title: A New Multimodal Medical Image Fusion based on Laplacian Autoencoder
with Channel Attention
- Authors: Payal Wankhede, Manisha Das, Deep Gupta, Petia Radeva, and Ashwini M
Bakde
- Abstract summary: Deep learning models have achieved end-to-end image fusion with highly robust and accurate performance.
Most DL-based fusion models perform down-sampling on the input images to minimize the number of learnable parameters and computations.
We propose a new multimodal medical image fusion model based on integrated Laplacian-Gaussian concatenation with attention pooling (LGCA).
- Score: 3.1531360678320897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical image fusion combines the complementary information of multimodal
medical images to assist medical professionals in the clinical diagnosis of
patients' disorders and provide guidance during preoperative and
intra-operative procedures. Deep learning (DL) models have achieved end-to-end
image fusion with highly robust and accurate fusion performance. However, most
DL-based fusion models perform down-sampling on the input images to minimize
the number of learnable parameters and computations. During this process,
salient features of the source images become irretrievable leading to the loss
of crucial diagnostic edge details and contrast of various brain tissues. In
this paper, we propose a new multimodal medical image fusion model based on
integrated Laplacian-Gaussian concatenation with attention pooling (LGCA). We
show that our model effectively preserves complementary information and
important tissue structures.
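This summary gives no implementation details, so the following is a minimal sketch of the general recipe the title suggests: split each source image into a Gaussian (low-frequency) and a Laplacian (high-frequency) component, concatenate the components of both modalities, and reweight channels with a squeeze-and-excitation style channel attention. All module names, kernel sizes, and channel counts are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0):
    # 2D Gaussian kernel for low-pass filtering.
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def laplacian_gaussian_split(x, kernel):
    # Gaussian component = blurred image; Laplacian component = residual detail.
    pad = kernel.shape[-1] // 2
    low = F.conv2d(F.pad(x, (pad,) * 4, mode="reflect"), kernel)
    return low, x - low

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention.
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight channels

class LGCAFusion(nn.Module):
    # Concatenate Gaussian/Laplacian components of both modalities, apply
    # channel attention, and reconstruct a single fused image.
    def __init__(self):
        super().__init__()
        self.register_buffer("kernel", gaussian_kernel())
        self.attn = ChannelAttention(4)
        self.reconstruct = nn.Conv2d(4, 1, kernel_size=3, padding=1)
    def forward(self, mri, ct):
        parts = [*laplacian_gaussian_split(mri, self.kernel),
                 *laplacian_gaussian_split(ct, self.kernel)]
        return self.reconstruct(self.attn(torch.cat(parts, dim=1)))

fused = LGCAFusion()(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
```

Because the split happens before any learned down-sampling, the high-frequency residual that ordinary pooling would discard stays available to the attention stage.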
Related papers
- Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using Conditional Diffusion Model [2.507050016527729]
Tri-modal medical image fusion can provide a more comprehensive view of the disease's shape, location, and biological activity.
Due to the limitations of imaging equipment and considerations for patient safety, the quality of medical images is usually limited.
There is an urgent need for a technology that can both enhance image resolution and integrate multi-modal information.
arXiv Detail & Related papers (2024-04-26T12:13:41Z)
- Multi-modal Medical Neurological Image Fusion using Wavelet Pooled Edge Preserving Autoencoder [3.3828292731430545]
This paper presents an end-to-end unsupervised fusion model for multimodal medical images based on an edge-preserving dense autoencoder network.
In the proposed model, feature extraction is improved by using wavelet decomposition-based attention pooling of feature maps.
The proposed model is trained on a variety of medical image pairs which helps in capturing the intensity distributions of the source images.
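As a rough illustration of wavelet decomposition-based attention pooling, the hypothetical block below replaces strided pooling with a single-level Haar DWT and learns attention weights over the four subbands; it is a simplified stand-in for the paper's module, not its actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarWaveletPool(nn.Module):
    # Downsample with a single-level Haar DWT instead of max pooling, then
    # reweight the four subbands (LL, LH, HL, HH) with learned attention so
    # high-frequency edge detail is preserved rather than discarded.
    def __init__(self, channels):
        super().__init__()
        f = 0.5 * torch.tensor([
            [[1.,  1.], [ 1.,  1.]],   # LL: approximation
            [[1.,  1.], [-1., -1.]],   # LH: horizontal detail
            [[1., -1.], [ 1., -1.]],   # HL: vertical detail
            [[1., -1.], [-1.,  1.]],   # HH: diagonal detail
        ])
        # Four Haar filters per input channel, applied depthwise.
        self.register_buffer(
            "weight", f.repeat(channels, 1, 1).view(channels * 4, 1, 2, 2))
        # Attention weights over the four subbands, computed from the LL band.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, 4, 1), nn.Softmax(dim=1))
        self.mix = nn.Conv2d(channels * 4, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        bands = F.conv2d(x, self.weight, stride=2, groups=c)
        bands = bands.view(b, c, 4, h // 2, w // 2)
        a = self.attn(bands[:, :, 0]).view(b, 1, 4, 1, 1)
        return self.mix((bands * a).view(b, c * 4, h // 2, w // 2))

pooled = HaarWaveletPool(32)(torch.rand(1, 32, 64, 64))  # -> (1, 32, 32, 32)
```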
arXiv Detail & Related papers (2023-10-18T11:59:35Z)
- Three-Dimensional Medical Image Fusion with Deformable Cross-Attention [10.26573411162757]
Multimodal medical image fusion plays an instrumental role in several areas of medical image processing.
Traditional fusion methods tend to process each modality independently before combining the features and reconstructing the fused image.
In this study, we introduce an innovative unsupervised feature mutual learning fusion network designed to rectify these limitations.
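A minimal sketch of cross-modal attention fusion in this spirit, using plain multi-head cross-attention (the paper's deformable sampling is omitted, and all shapes and names are illustrative):

```python
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    # Each modality's features attend to the other modality's features, so the
    # two streams exchange information before fusion, rather than being
    # processed fully independently.
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) flattened spatial features
        a_enriched, _ = self.a2b(feat_a, feat_b, feat_b)  # A queries B
        b_enriched, _ = self.b2a(feat_b, feat_a, feat_a)  # B queries A
        return self.out(torch.cat([a_enriched, b_enriched], dim=-1))

mri = torch.rand(2, 16 * 16, 64)  # e.g. a 16x16 feature map, flattened
ct = torch.rand(2, 16 * 16, 64)
fused = CrossModalAttentionFusion()(mri, ct)  # (2, 256, 64)
```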
arXiv Detail & Related papers (2023-10-10T04:10:56Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
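A minimal sketch of the suggested swap, using PyTorch's built-in normalization layers (the helper name and group count are illustrative):

```python
import torch.nn as nn

def make_norm(kind, channels):
    # BatchNorm depends on batch statistics, which shift when test-time
    # artifacts change the input distribution; GroupNorm and LayerNorm
    # normalize per sample and are therefore less sensitive to such shifts.
    if kind == "batch":
        return nn.BatchNorm2d(channels)
    if kind == "group":
        return nn.GroupNorm(num_groups=8, num_channels=channels)
    if kind == "layer":
        # GroupNorm with one group is LayerNorm over (C, H, W).
        return nn.GroupNorm(num_groups=1, num_channels=channels)
    raise ValueError(kind)

block = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    make_norm("group", 32),
    nn.ReLU(inplace=True),
)
```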
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- A Transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
arXiv Detail & Related papers (2023-06-01T16:23:47Z)
- Multi-task Paired Masking with Alignment Modeling for Medical Vision-Language Pre-training [55.56609500764344]
We propose a unified framework based on Multi-task Paired Masking with Alignment (MPMA) to integrate the cross-modal alignment task into the joint image-text reconstruction framework.
We also introduce a Memory-Augmented Cross-Modal Fusion (MA-CMF) module to fully integrate visual information to assist report reconstruction.
arXiv Detail & Related papers (2023-05-13T13:53:48Z)
- MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335]
We propose a novel Transformer-based Diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
arXiv Detail & Related papers (2023-01-19T03:42:36Z)
- An Attention-based Multi-Scale Feature Learning Network for Multimodal Medical Image Fusion [24.415389503712596]
Multimodal medical images can provide physicians with rich information for diagnosis.
The image fusion technique is able to synthesize complementary information from multimodal images into a single image.
We introduce a novel Dilated Residual Attention Network for the medical image fusion task.
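A rough sketch of a dilated residual attention block in this spirit (the layer layout and channel counts are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class DilatedResidualAttentionBlock(nn.Module):
    # Parallel convolutions with different dilation rates enlarge the
    # receptive field without down-sampling; a 1x1 attention gate reweights
    # the multi-scale features before the residual connection.
    def __init__(self, channels=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.attn = nn.Sequential(
            nn.Conv2d(3 * channels, 3 * channels, kernel_size=1), nn.Sigmoid())
        self.merge = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.merge(multi * self.attn(multi))  # residual connection

y = DilatedResidualAttentionBlock()(torch.rand(1, 32, 128, 128))
```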
arXiv Detail & Related papers (2022-12-09T04:19:43Z)
- Coupled Feature Learning for Multimodal Medical Image Fusion [42.23662451234756]
Multimodal image fusion aims to combine relevant information from images acquired with different sensors.
In this paper, we propose a novel multimodal image fusion method based on coupled dictionary learning.
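A toy sketch of dictionary-based fusion on random stand-in data: the paper learns coupled, modality-specific dictionaries, whereas this simplification uses a single dictionary shared by both modalities, so the sparse codes live in a common space.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

# Toy data: each flattened 8x8 patch of the two modalities is a sample.
rng = np.random.default_rng(0)
patches_a = rng.random((500, 64))  # e.g. MRI patches
patches_b = rng.random((500, 64))  # e.g. CT patches

# Learn one shared dictionary over patches from both modalities.
dico = DictionaryLearning(n_components=32, max_iter=10, random_state=0)
dico.fit(np.vstack([patches_a, patches_b]))

# Sparse-code each modality against the shared dictionary.
coder = SparseCoder(dictionary=dico.components_,
                    transform_algorithm="omp", transform_n_nonzero_coefs=5)
codes_a, codes_b = coder.transform(patches_a), coder.transform(patches_b)

# Fusion rule: per patch, keep the code with the larger activity (L1 norm).
keep_a = np.abs(codes_a).sum(axis=1) >= np.abs(codes_b).sum(axis=1)
fused_codes = np.where(keep_a[:, None], codes_a, codes_b)
fused_patches = fused_codes @ dico.components_  # reconstruct fused patches
```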
arXiv Detail & Related papers (2021-02-17T09:13:28Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
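A minimal sketch of gated feature fusion in this spirit (the disentanglement stage is omitted; module names and shapes are illustrative):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # A learned gate decides, per spatial location, how much each modality's
    # features contribute; an absent modality can be zeroed out, letting the
    # gate shift weight to the remaining stream.
    def __init__(self, channels=32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_a, feat_b):
        g = self.gate(torch.cat([feat_a, feat_b], dim=1))  # values in [0, 1]
        return g * feat_a + (1 - g) * feat_b

a, b = torch.rand(1, 32, 64, 64), torch.rand(1, 32, 64, 64)
fused = GatedFusion()(a, b)
```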
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.