MvHo-IB: Multi-View Higher-Order Information Bottleneck for Brain Disorder Diagnosis
- URL: http://arxiv.org/abs/2507.02847v1
- Date: Thu, 03 Jul 2025 17:54:03 GMT
- Title: MvHo-IB: Multi-View Higher-Order Information Bottleneck for Brain Disorder Diagnosis
- Authors: Kunyu Zhang, Qiang Li, Shujian Yu
- Abstract summary: MvHo-IB is a novel multi-view learning framework that integrates both pairwise interactions and HOIs for diagnostic decision-making. MvHo-IB achieves state-of-the-art performance, significantly outperforming previous methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent evidence suggests that modeling higher-order interactions (HOIs) in functional magnetic resonance imaging (fMRI) data can enhance the diagnostic accuracy of machine learning systems. However, effectively extracting and utilizing HOIs remains a significant challenge. In this work, we propose MvHo-IB, a novel multi-view learning framework that integrates both pairwise interactions and HOIs for diagnostic decision-making, while automatically compressing task-irrelevant redundant information. MvHo-IB introduces several key innovations: (1) a principled method that combines O-information from information theory with a matrix-based Renyi alpha-order entropy estimator to quantify and extract HOIs, (2) a purpose-built Brain3DCNN encoder to effectively utilize these interactions, and (3) a new multi-view learning information bottleneck objective to enhance representation learning. Experiments on three benchmark fMRI datasets demonstrate that MvHo-IB achieves state-of-the-art performance, significantly outperforming previous methods, including recent hypergraph-based techniques. The implementation of MvHo-IB is available at https://github.com/zky04/MvHo-IB.
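As a rough illustration of innovation (1), the sketch below reconstructs O-information estimation from its standard definition, Omega(X) = (n - 2) H(X) + sum_j [H(X_j) - H(X_-j)], with entropies computed by the matrix-based Renyi alpha-order estimator (eigenvalues of trace-normalized Gram matrices, joints via Hadamard products). This is a minimal sketch, not the authors' released implementation at the GitHub link above; the RBF kernel, sigma, alpha = 1.01, and the toy data are assumptions.

```python
import numpy as np
from itertools import combinations

def gram_matrix(x, sigma=1.0):
    """Trace-normalized RBF Gram matrix for one variable's T samples."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * sigma ** 2))   # K_ii = 1
    return K / len(x)                      # trace = 1

def renyi_entropy(A, alpha=1.01):
    """Matrix-based Renyi alpha-order entropy (in bits) of a normalized Gram matrix."""
    lam = np.clip(np.linalg.eigvalsh(A), 1e-12, None)
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def joint_entropy(grams, alpha=1.01):
    """Joint entropy via the normalized Hadamard product of Gram matrices."""
    H = grams[0].copy()
    for G in grams[1:]:
        H *= G
    return renyi_entropy(H / np.trace(H), alpha)

def o_information(X, alpha=1.01):
    """O-information of n variables (X has shape [n, T]):
    Omega = (n - 2) * H(X_1..X_n) + sum_j [H(X_j) - H(X_-j)].
    Omega > 0 indicates redundancy-dominated triplets, Omega < 0 synergy."""
    n = X.shape[0]
    grams = [gram_matrix(x) for x in X]
    omega = (n - 2) * joint_entropy(grams, alpha)
    for j in range(n):
        rest = grams[:j] + grams[j + 1:]
        omega += renyi_entropy(grams[j], alpha) - joint_entropy(rest, alpha)
    return omega

# Toy example: score every ROI triplet of a small fMRI time-series matrix.
rng = np.random.default_rng(0)
ts = rng.standard_normal((10, 120))   # 10 ROIs, 120 time points
scores = {c: o_information(ts[list(c)]) for c in combinations(range(10), 3)}
```

Scoring all triplets this way yields the higher-order view that, per the abstract, is then fed to the Brain3DCNN encoder alongside the pairwise view.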
Related papers
- Hierarchical Deep Feature Fusion and Ensemble Learning for Enhanced Brain Tumor MRI Classification [3.776159955137874]
The framework incorporates comprehensive preprocessing and data augmentation of brain magnetic resonance images (MRI).
The novelty lies in the dual-level ensembling strategy: feature-level ensembling and classifier-level ensembling.
Experiments on two public Kaggle MRI brain tumor datasets demonstrate that this approach significantly surpasses state-of-the-art methods.
arXiv Detail & Related papers (2025-06-14T05:53:54Z)
- Efficient Bilinear Attention-based Fusion for Medical Visual Question Answering [3.7133600776119136]
Medical Visual Question Answering (MedVQA) has attracted growing interest at the intersection of medical image understanding and natural language processing for clinical applications.
We introduce a fusion model, OMniBAN, that integrates Orthogonality loss, Multi-head attention, and a Bilinear Attention Network to achieve high computational efficiency as well as solid performance.
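The summary names a Bilinear Attention Network as the fusion core; the sketch below shows a generic low-rank bilinear attention glimpse in PyTorch. All names and dimensions (BilinearAttentionFusion, d_img, d_txt, d_hid) are illustrative assumptions, not OMniBAN's actual architecture or its orthogonality loss.

```python
import torch
import torch.nn as nn

class BilinearAttentionFusion(nn.Module):
    """Generic low-rank bilinear attention between image regions and
    question tokens (illustrative; not OMniBAN's actual architecture)."""
    def __init__(self, d_img, d_txt, d_hid):
        super().__init__()
        self.U = nn.Linear(d_img, d_hid)   # projects image region features
        self.V = nn.Linear(d_txt, d_hid)   # projects question token features

    def forward(self, img, txt):
        # img: [B, N, d_img] region features; txt: [B, M, d_txt] token features
        u = torch.relu(self.U(img))                    # [B, N, d_hid]
        v = torch.relu(self.V(txt))                    # [B, M, d_hid]
        logits = torch.einsum('bnh,bmh->bnm', u, v)    # bilinear attention map
        att = logits.flatten(1).softmax(-1).view_as(logits)  # joint softmax
        # attention-weighted bilinear pooling over all (region, token) pairs
        return torch.einsum('bnm,bnh,bmh->bh', att, u, v)    # [B, d_hid]

# Toy usage: 36 image regions, 20 question tokens
fusion = BilinearAttentionFusion(d_img=2048, d_txt=768, d_hid=512)
out = fusion(torch.randn(2, 36, 2048), torch.randn(2, 20, 768))
print(out.shape)  # torch.Size([2, 512])
```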
arXiv Detail & Related papers (2024-10-28T13:24:12Z)
- Machine Learning on Dynamic Functional Connectivity: Promise, Pitfalls, and Interpretations [7.013079422694949]
We seek to establish a well-founded empirical guideline for designing deep models for functional neuroimages.
We put the spotlight on key questions, including: (1) what is the current state-of-the-art (SOTA) performance in cognitive task recognition and disease diagnosis using fMRI?
We have conducted a comprehensive evaluation and statistical analysis across various settings to answer these questions.
arXiv Detail & Related papers (2024-09-17T17:24:17Z)
- NeuralOOD: Improving Out-of-Distribution Generalization Performance with Brain-machine Fusion Learning Framework [13.25912138698749]
We propose a novel Brain-machine Fusion Learning framework to fuse visual knowledge from a computer vision (CV) model with prior cognitive knowledge from the human brain.
We employ a pre-trained visual neural encoding model to predict functional magnetic resonance imaging (fMRI) responses from visual features.
Our model outperforms the DINOv2 and baseline models on the ImageNet-1k validation dataset as well as six curated OOD datasets.
arXiv Detail & Related papers (2024-08-27T10:54:37Z)
- Predicting Infant Brain Connectivity with Federated Multi-Trajectory GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate locally learned models across diverse hospitals with limited datasets.
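For intuition, the aggregation step in such federated pipelines is commonly a data-size-weighted parameter average; the sketch below is a generic FedAvg-style rule under that assumption, not the FedGmTE-Net++ aggregation itself.

```python
import torch

def federated_average(state_dicts, n_samples):
    """Data-size-weighted average of K hospitals' model parameters
    (a generic FedAvg-style rule; the paper's rule may differ)."""
    total = float(sum(n_samples))
    return {
        key: sum(sd[key].float() * (n / total)
                 for sd, n in zip(state_dicts, n_samples))
        for key in state_dicts[0]
    }

# Toy usage: average two local models' weights on the server
# global_state = federated_average([model_a.state_dict(), model_b.state_dict()],
#                                  n_samples=[120, 80])
```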
arXiv Detail & Related papers (2024-01-01T10:20:01Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for universal brain MRI diagnosis, termed UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to compensate for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Learning Binary Semantic Embedding for Histology Image Classification and Retrieval [56.34863511025423]
We propose a novel method for Learning Binary Semantic Embedding (LBSE).
Based on the efficient and effective embedding, classification and retrieval are performed to provide interpretable computer-assisted diagnosis for histology images.
Experiments conducted on three benchmark datasets validate the superiority of LBSE under various scenarios.
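The summary does not spell out how the binary codes are learned; one common binary-hashing pattern (an assumed, generic construction for illustration, not necessarily LBSE's) is a tanh relaxation with a straight-through sign, followed by Hamming-distance retrieval:

```python
import torch
import torch.nn as nn

class BinaryEmbedding(nn.Module):
    """Map features to {-1, +1}^k codes with a straight-through sign,
    a common binary-hashing pattern (illustrative only)."""
    def __init__(self, d_in, k_bits):
        super().__init__()
        self.proj = nn.Linear(d_in, k_bits)

    def forward(self, x):
        h = torch.tanh(self.proj(x))    # soft codes in (-1, 1)
        b = torch.sign(h)               # hard binary codes
        return h + (b - h).detach()     # forward: b; backward: gradient of h

def hamming_retrieval(query, database, top_k=5):
    """Rank database codes by Hamming distance to a query code."""
    dists = (query != database).sum(dim=-1)   # [N]
    return torch.topk(dists, top_k, largest=False).indices

# Toy usage: retrieve the codes nearest to the first item
enc = BinaryEmbedding(d_in=256, k_bits=64)
db = enc(torch.randn(100, 256))
idx = hamming_retrieval(db[0], db)
```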
arXiv Detail & Related papers (2020-10-07T08:36:44Z)
- Deep brain state classification of MEG data [2.9048924265579124]
This paper uses Magnetoencephalography (MEG) data, provided by the Human Connectome Project (HCP), in combination with various deep artificial neural network models to perform brain decoding.
arXiv Detail & Related papers (2020-07-02T05:51:57Z)
- Few-Shot Relation Learning with Attention for EEG-based Motor Imagery Classification [11.873435088539459]
Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
arXiv Detail & Related papers (2020-03-03T02:34:44Z)