Deep Multimodal Fusion of Data with Heterogeneous Dimensionality via
Projective Networks
- URL: http://arxiv.org/abs/2402.01311v1
- Date: Fri, 2 Feb 2024 11:03:33 GMT
- Title: Deep Multimodal Fusion of Data with Heterogeneous Dimensionality via
Projective Networks
- Authors: José Morano and Guilherme Aresta and Christoph Grechenig and Ursula
Schmidt-Erfurth and Hrvoje Bogunović
- Abstract summary: We propose a novel deep learning-based framework for the fusion of multimodal data with heterogeneous dimensionality (e.g., 3D+2D).
The framework was validated on the following tasks: segmentation of geographic atrophy (GA), a late-stage manifestation of age-related macular degeneration, and segmentation of retinal blood vessels (RBV) in multimodal retinal imaging.
Our results show that the proposed method outperforms the state-of-the-art monomodal methods on GA and RBV segmentation by up to 3.10% and 4.64% Dice, respectively.
- Score: 4.933439602197885
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The use of multimodal imaging has led to significant improvements in the
diagnosis and treatment of many diseases. Similar to clinical practice, some
works have demonstrated the benefits of multimodal fusion for automatic
segmentation and classification using deep learning-based methods. However,
current segmentation methods are limited to fusion of modalities with the same
dimensionality (e.g., 3D+3D, 2D+2D), which is not always possible, and the
fusion strategies implemented by classification methods are incompatible with
localization tasks. In this work, we propose a novel deep learning-based
framework for the fusion of multimodal data with heterogeneous dimensionality
(e.g., 3D+2D) that is compatible with localization tasks. The proposed
framework extracts the features of the different modalities and projects them
into the common feature subspace. The projected features are then fused and
further processed to obtain the final prediction. The framework was validated
on the following tasks: segmentation of geographic atrophy (GA), a late-stage
manifestation of age-related macular degeneration, and segmentation of retinal
blood vessels (RBV) in multimodal retinal imaging. Our results show that the
proposed method outperforms the state-of-the-art monomodal methods on GA and
RBV segmentation by up to 3.10% and 4.64% Dice, respectively.
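The abstract describes extracting per-modality features, projecting them into a common feature subspace, and fusing them to obtain the prediction. Below is a minimal, illustrative PyTorch sketch of that idea, assuming a 3D volume (e.g., OCT) paired with a 2D image of matching footprint; the module layout and the average-pooling projection along the depth axis are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class ProjectiveFusionSketch(nn.Module):
    """Illustrative fusion of a 3D volume with a 2D image of matching footprint.

    The 3D features are projected onto the 2D plane (here by average-pooling
    the depth axis) so both modalities share a common 2D feature subspace
    before being fused and decoded. This is a sketch, not the paper's model.
    """

    def __init__(self, c3d=1, c2d=1, feat=32, n_classes=1):
        super().__init__()
        self.enc3d = nn.Sequential(
            nn.Conv3d(c3d, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.enc2d = nn.Sequential(
            nn.Conv2d(c2d, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Processes the fused (concatenated) 2D feature maps into a prediction.
        self.head = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, n_classes, 1),
        )

    def forward(self, vol3d, img2d):
        f3d = self.enc3d(vol3d)          # (B, F, D, H, W)
        f3d_proj = f3d.mean(dim=2)       # project depth away -> (B, F, H, W)
        f2d = self.enc2d(img2d)          # (B, F, H, W)
        fused = torch.cat([f3d_proj, f2d], dim=1)
        return self.head(fused)          # (B, n_classes, H, W)

# Example: a 3D scan (e.g., OCT) plus a 2D image (e.g., en-face/fundus).
model = ProjectiveFusionSketch()
logits = model(torch.randn(1, 1, 16, 64, 64), torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```

Replacing the mean projection with a learned reduction over the depth axis would be a natural variant of the same pattern.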
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework (DEC-Seg) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - Multi-Modal Evaluation Approach for Medical Image Segmentation [4.989480853499916]
We propose a novel multi-modal evaluation (MME) approach to measure the effectiveness of different segmentation methods.
We introduce new relevant and interpretable characteristics, including detection property, boundary alignment, uniformity, total volume, and relative volume.
Our proposed approach is open-source and publicly available for use.
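As a rough illustration of the volume-based characteristics named above, the snippet below computes total and relative volume from binary masks; the exact definitions used in the MME paper may differ, so treat these as assumed, simplified versions.

```python
import numpy as np

def total_volume(mask, voxel_volume=1.0):
    """Total segmented volume: voxel count times per-voxel volume (assumed definition)."""
    return float(mask.astype(bool).sum()) * voxel_volume

def relative_volume(pred_mask, ref_mask):
    """Predicted volume relative to the reference volume (assumed definition)."""
    ref = ref_mask.astype(bool).sum()
    return float(pred_mask.astype(bool).sum()) / ref if ref else float("nan")

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True   # 16 pixels
ref = np.zeros((8, 8), dtype=bool);  ref[2:6, 2:5] = True    # 12 pixels
print(total_volume(pred), relative_volume(pred, ref))        # 16.0 1.333...
```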
arXiv Detail & Related papers (2023-02-08T15:31:33Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Multiple Sclerosis Lesions Segmentation using Attention-Based CNNs in
FLAIR Images [0.2578242050187029]
Multiple Sclerosis (MS) is an autoimmune and demyelinating disease that leads to lesions in the central nervous system.
To date, a multitude of automatic multimodal biomedical approaches has been used to segment lesions.
The authors propose a method that employs just one modality (FLAIR images) to segment MS lesions accurately.
arXiv Detail & Related papers (2022-01-05T21:37:43Z) - A Unified Framework for Generalized Low-Shot Medical Image Segmentation
with Scarce Data [24.12765716392381]
We propose a unified framework for generalized low-shot (one- and few-shot) medical image segmentation based on distance metric learning (DML).
Via DML, the framework learns a multimodal mixture representation for each category, and performs dense predictions based on cosine distances between the pixels' deep embeddings and the category representations.
In our experiments on brain MRI and abdominal CT datasets, the proposed framework achieves superior performance for low-shot segmentation compared to standard DNN-based (3D U-Net) and classical registration-based (ANTs) methods.
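The dense, cosine-distance-based prediction described above can be sketched as follows; the prototype tensor, temperature, and softmax readout are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dense_cosine_prediction(pixel_embeddings, prototypes, temperature=10.0):
    """Classify each pixel by cosine similarity to per-category prototypes.

    pixel_embeddings: (B, E, H, W) deep embeddings from a segmentation backbone.
    prototypes:       (C, E) one (e.g., mixture-averaged) representation per category.
    Returns per-pixel class probabilities of shape (B, C, H, W).
    """
    emb = F.normalize(pixel_embeddings, dim=1)    # unit-norm pixel embeddings
    proto = F.normalize(prototypes, dim=1)        # unit-norm category prototypes
    # Cosine similarity for every pixel/prototype pair.
    sims = torch.einsum("behw,ce->bchw", emb, proto)
    return torch.softmax(temperature * sims, dim=1)

probs = dense_cosine_prediction(torch.randn(2, 64, 32, 32), torch.randn(4, 64))
print(probs.shape)  # torch.Size([2, 4, 32, 32])
```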
arXiv Detail & Related papers (2021-10-18T13:01:06Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation, where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Contextual Information Enhanced Convolutional Neural Networks for
Retinal Vessel Segmentation in Color Fundus Images [0.0]
An automatic retinal vessel segmentation system can effectively facilitate clinical diagnosis and ophthalmological research.
A deep learning-based method is proposed in which several customized modules are integrated into the well-known U-Net encoder-decoder architecture.
As a result, the proposed method outperforms previous work and achieves state-of-the-art performance in sensitivity/recall, F1-score, and MCC.
arXiv Detail & Related papers (2021-03-25T06:10:47Z) - Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention
and Dynamic Resampling [13.542898009730804]
The performance of relevant algorithms is significantly affected by the proper fusion of the multi-modal information.
We present the Max-Fusion U-Net that achieves improved pathology segmentation performance.
We evaluate our method on the Myocardial Pathology Segmentation (MyoPS) challenge dataset, which combines multi-sequence CMR data.
arXiv Detail & Related papers (2020-09-05T17:24:23Z) - Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
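A minimal sketch of gated fusion over modality-specific feature maps, in the spirit of the summary above; the shared per-pixel gate used here is an assumption, not the paper's exact module.

```python
import torch
import torch.nn as nn

class GatedFusionSketch(nn.Module):
    """Fuse per-modality feature maps with learned, per-pixel gates.

    Each modality contributes its features weighted by a gate predicted from
    the features themselves, so uninformative or missing modalities can be
    down-weighted. A single shared gate network is used here for brevity.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, modality_feats):   # list of (B, C, H, W) tensors
        gates = [self.gate(f) for f in modality_feats]
        total = torch.stack(gates).sum(dim=0).clamp_min(1e-6)
        fused = sum(g * f for g, f in zip(gates, modality_feats)) / total
        return fused

fusion = GatedFusionSketch(channels=16)
feats = [torch.randn(1, 16, 24, 24) for _ in range(3)]
print(fusion(feats).shape)  # torch.Size([1, 16, 24, 24])
```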
arXiv Detail & Related papers (2020-02-22T14:32:04Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
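One common way to realize the parameter reuse described in the last entry (sharing convolutional kernels across CT and MRI) is to share the kernels while keeping modality-specific normalization layers; the sketch below illustrates that pattern under this assumption and is not the paper's exact design.

```python
import torch
import torch.nn as nn

class SharedKernelBlock(nn.Module):
    """Convolution kernels shared across CT and MRI, with per-modality BatchNorm.

    Sharing the kernels reuses almost all parameters across modalities, while
    the separate normalization layers absorb modality-specific statistics.
    (Illustrative assumption, not the paper's exact architecture.)
    """

    def __init__(self, in_ch, out_ch, modalities=("ct", "mri")):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)  # shared
        self.norms = nn.ModuleDict({m: nn.BatchNorm2d(out_ch) for m in modalities})
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, modality: str):
        return self.act(self.norms[modality](self.conv(x)))

block = SharedKernelBlock(1, 32)
ct_feat = block(torch.randn(2, 1, 64, 64), "ct")
mri_feat = block(torch.randn(2, 1, 64, 64), "mri")
print(ct_feat.shape, mri_feat.shape)  # both torch.Size([2, 32, 64, 64])
```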
This list is automatically generated from the titles and abstracts of the papers in this site.