A Bi-Pyramid Multimodal Fusion Method for the Diagnosis of Bipolar
Disorders
- URL: http://arxiv.org/abs/2401.07571v1
- Date: Mon, 15 Jan 2024 10:11:19 GMT
- Title: A Bi-Pyramid Multimodal Fusion Method for the Diagnosis of Bipolar
Disorders
- Authors: Guoxin Wang, Sheng Shi, Shan An, Fengmei Fan, Wenshu Ge, Qi Wang, Feng
Yu, Zhiren Wang
- Abstract summary: We utilize both sMRI and fMRI data and propose a multimodal diagnosis model for bipolar disorder.
Our proposed method improves balanced accuracy from 0.657 to 0.732 on the OpenfMRI dataset, outperforming prior methods.
- Score: 11.622160966334745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous research on the diagnosis of bipolar disorder has mainly focused on
resting-state functional magnetic resonance imaging. However, its accuracy
cannot meet the requirements of clinical diagnosis. Efficient multimodal
fusion strategies have great potential for application to multimodal data and
can further improve the performance of medical diagnosis models. In this work,
we utilize both sMRI and fMRI data and propose a novel multimodal diagnosis
model for bipolar disorder. The proposed Patch Pyramid Feature Extraction
Module extracts sMRI features, and a spatio-temporal pyramid structure
extracts fMRI features. Finally, a fusion module combines them and a classifier
outputs the diagnosis. Extensive experiments show that our proposed method
improves balanced accuracy from 0.657 to 0.732 on the OpenfMRI dataset,
achieving state-of-the-art performance.
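The pipeline described in the abstract — one feature extractor per modality, followed by a fusion module and a classifier — can be sketched as below. This is a minimal illustration with random-projection stand-ins and hypothetical dimensions; the paper's actual patch-pyramid and spatio-temporal pyramid modules are learned networks, not reproduced here.

```python
# Minimal two-branch fusion sketch: sMRI branch + fMRI branch -> fuse -> classify.
# All weights are random stand-ins (assumption), not the paper's trained modules.
import numpy as np

rng = np.random.default_rng(0)

def extract_smri_features(volume: np.ndarray) -> np.ndarray:
    """Stand-in for the Patch Pyramid Feature Extraction Module:
    flatten the 3D sMRI volume and project it to a 64-d feature vector."""
    w = rng.standard_normal((volume.size, 64))
    return volume.reshape(-1) @ w

def extract_fmri_features(series: np.ndarray) -> np.ndarray:
    """Stand-in for the spatio-temporal pyramid: collapse the time axis,
    then project the spatial map to a 64-d feature vector."""
    spatial = series.mean(axis=0)                    # average over time
    w = rng.standard_normal((spatial.size, 64))
    return spatial.reshape(-1) @ w

def fuse_and_classify(smri_feat: np.ndarray, fmri_feat: np.ndarray) -> int:
    """Concatenation fusion followed by a linear classifier
    (hypothetical weights; the paper's fusion module is learned)."""
    fused = np.concatenate([smri_feat, fmri_feat])   # 128-d joint feature
    logit = float(fused @ rng.standard_normal(fused.size))
    return int(logit > 0)                            # 1 = patient, 0 = control

smri = rng.standard_normal((8, 8, 8))       # toy sMRI volume (X, Y, Z)
fmri = rng.standard_normal((10, 8, 8, 8))   # toy fMRI series (T, X, Y, Z)
label = fuse_and_classify(extract_smri_features(smri),
                          extract_fmri_features(fmri))
print(label)
```

Concatenation is the simplest fusion choice; the paper's fusion module may combine the branches differently.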
Related papers
- DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET [9.229658208994675]
We propose a novel framework, DiaMond, to integrate MRI and PET.
DiaMond is equipped with self-attention and a novel bi-attention mechanism that synergistically combine MRI and PET.
It significantly outperforms existing multi-modal methods across various datasets.
arXiv Detail & Related papers (2024-10-30T17:11:00Z)
- Cross-Vendor Reproducibility of Radiomics-based Machine Learning Models for Computer-aided Diagnosis [0.0]
We aim to enhance clinical decision support through multimodal learning and feature fusion.
Our SVM model, utilizing combined features from Pyradiomics and MRCradiomics, achieved an AUC of 0.74 on the Multi-Improd dataset.
The RF model showed notable robustness when using Pyradiomics features alone (AUC of 0.78 on Philips).
arXiv Detail & Related papers (2024-07-25T14:16:02Z)
- An Interpretable Cross-Attentive Multi-modal MRI Fusion Framework for Schizophrenia Diagnosis [46.58592655409785]
We propose a novel Cross-Attentive Multi-modal Fusion framework (CAMF) to capture both intra-modal and inter-modal relationships between fMRI and sMRI.
Our approach significantly improves classification accuracy, as demonstrated by our evaluations on two extensive multi-modal brain imaging datasets.
The gradient-guided Score-CAM is applied to interpret critical functional networks and brain regions involved in schizophrenia.
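The cross-modal attention that CAMF uses to relate fMRI and sMRI can be illustrated with a plain scaled dot-product attention step, where tokens of one modality attend over tokens of the other. This is a generic sketch with toy token matrices and assumed dimensions, not CAMF's actual learned projections.

```python
# Generic cross-modal attention sketch: fMRI queries attend over sMRI keys/values.
# Token counts and dimensions are illustrative assumptions.
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention across modalities: each query token
    forms a convex combination of the other modality's value tokens."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(1)
fmri_tokens = rng.standard_normal((5, 16))   # 5 fMRI tokens, dim 16
smri_tokens = rng.standard_normal((7, 16))   # 7 sMRI tokens, dim 16

# CAMF applies this in both directions; one direction shown here.
fused = cross_attention(fmri_tokens, smri_tokens, smri_tokens)
print(fused.shape)   # (5, 16)
```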
arXiv Detail & Related papers (2024-03-29T20:32:30Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumor represents one of the most fatal cancers around the world, and is very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Unsupervised Anomaly Detection using Aggregated Normative Diffusion [46.24703738821696]
Unsupervised anomaly detection has the potential to identify a broader spectrum of anomalies.
Existing state-of-the-art UAD approaches do not generalise well to diverse types of anomalies.
We introduce a new UAD method named Aggregated Normative Diffusion (ANDi).
arXiv Detail & Related papers (2023-12-04T14:02:56Z)
- Multi-Dimension-Embedding-Aware Modality Fusion Transformer for Psychiatric Disorder Classification [13.529183496842819]
We construct a deep learning architecture that takes as input 2D time series from rs-fMRI and 3D T1w volumes.
We show that our proposed MFFormer outperforms models using a single MRI modality on schizophrenia and bipolar disorder diagnosis.
arXiv Detail & Related papers (2023-10-04T10:02:04Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- MUVF-YOLOX: A Multi-modal Ultrasound Video Fusion Network for Renal Tumor Diagnosis [10.452919030855796]
We propose a novel multi-modal ultrasound video fusion network that can effectively perform multi-modal feature fusion and video classification for renal tumor diagnosis.
Experimental results on a multicenter dataset show that the proposed framework outperforms the single-modal models and the competing methods.
arXiv Detail & Related papers (2023-07-15T14:15:42Z)
- Multiple Time Series Fusion Based on LSTM: An Application to CAP A Phase Classification Using EEG [56.155331323304]
Deep learning based electroencephalogram channels' feature level fusion is carried out in this work.
Channel selection, fusion, and classification procedures were optimized by two optimization algorithms.
arXiv Detail & Related papers (2021-12-18T14:17:49Z)
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with feature disentanglement strategy.
The proposed approach decomposes each input modality into modality-invariant space with shared information and modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
arXiv Detail & Related papers (2021-05-06T17:22:22Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.