An Interpretable Ensemble Framework for Multi-Omics Dementia Biomarker Discovery Under HDLSS Conditions
- URL: http://arxiv.org/abs/2509.10527v1
- Date: Thu, 04 Sep 2025 15:20:13 GMT
- Title: An Interpretable Ensemble Framework for Multi-Omics Dementia Biomarker Discovery Under HDLSS Conditions
- Authors: Byeonghee Lee, Joonsung Kang
- Abstract summary: We propose a novel ensemble approach combining Graph Attention Networks (GAT), the Multi-Omics Variational AutoEncoder (MOVE), Elastic-net sparse regression, and Storey's False Discovery Rate (FDR). We evaluate performance using both simulated multi-omics data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our method demonstrates superior predictive accuracy, feature selection precision, and biological relevance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biomarker discovery in neurodegenerative diseases requires robust, interpretable frameworks capable of integrating high-dimensional multi-omics data under low-sample conditions. We propose a novel ensemble approach combining Graph Attention Networks (GAT), MultiOmics Variational AutoEncoder (MOVE), Elastic-net sparse regression, and Storey's False Discovery Rate (FDR). This framework is benchmarked against state-of-the-art methods including DIABLO, MOCAT, AMOGEL, and MOMLIN. We evaluate performance using both simulated multi-omics data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our method demonstrates superior predictive accuracy, feature selection precision, and biological relevance. Biomarker gene maps derived from both datasets are visualized and interpreted, offering insights into latent molecular mechanisms underlying dementia.
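The Storey FDR step of the proposed pipeline can be sketched in a few lines of plain Python. This is a minimal illustration, not the authors' implementation; the function name `storey_qvalues` and the fixed tuning parameter `lam=0.5` are our assumptions:

```python
def storey_qvalues(pvals, lam=0.5):
    """Storey's q-values: estimate the null proportion pi0 from p-values
    above `lam`, then convert each p-value to a q-value with a monotone
    step-up pass over the sorted p-values."""
    m = len(pvals)
    # pi0 = fraction of p-values above lambda, rescaled; capped at 1.
    pi0 = min(1.0, sum(p > lam for p in pvals) / (m * (1.0 - lam)))
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotone q-values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pi0 * pvals[i] * m / rank)
        q[i] = prev
    return q
```

Features whose q-value falls below a chosen threshold (e.g. 0.05) would be retained after the Elastic-net selection stage.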
Related papers
- R-GenIMA: Integrating Neuroimaging and Genetics with Interpretable Multimodal AI for Alzheimer's Disease Progression [63.97617759805451]
Early detection of Alzheimer's disease requires models capable of integrating macro-scale neuroanatomical alterations with micro-scale genetic susceptibility. We introduce R-GenIMA, an interpretable multimodal large language model that couples a novel ROI-wise vision transformer with genetic prompting. R-GenIMA achieves state-of-the-art performance in four-way classification across normal cognition, subjective memory concerns, mild cognitive impairment, and AD.
arXiv Detail & Related papers (2025-12-22T02:54:10Z) - A Semantically Enhanced Generative Foundation Model Improves Pathological Image Synthesis [82.01597026329158]
We introduce a Correlation-Regulated Alignment Framework for Tissue Synthesis (CRAFTS) for pathology-specific text-to-image synthesis. CRAFTS incorporates a novel alignment mechanism that suppresses semantic drift to ensure biological accuracy. The model generates diverse pathological images spanning 30 cancer types, with quality rigorously validated by objective metrics and pathologist evaluations.
arXiv Detail & Related papers (2025-12-15T10:22:43Z) - Adapting HFMCA to Graph Data: Self-Supervised Learning for Generalizable fMRI Representations [57.054499278843856]
Functional magnetic resonance imaging (fMRI) analysis faces significant challenges due to limited dataset sizes and domain variability between studies. Traditional self-supervised learning methods inspired by computer vision often rely on positive and negative sample pairs. We propose adapting the recently developed Hierarchical Functional Maximal Correlation Algorithm (HFMCA) to graph-structured fMRI data.
arXiv Detail & Related papers (2025-10-05T12:35:01Z) - Robust Multi-Omics Integration from Incomplete Modalities Significantly Improves Prediction of Alzheimer's Disease [3.5072431853663004]
MOIRA (Multi-Omics Integration with Robustness to Absent modalities) is an early-integration method enabling robust learning from incomplete omics data. It is evaluated on the Religious Orders Study and Memory and Aging Project (ROSMAP) dataset for Alzheimer's Disease (AD).
arXiv Detail & Related papers (2025-09-25T07:29:46Z) - Optimizing Prognostic Biomarker Discovery in Pancreatic Cancer Through Hybrid Ensemble Feature Selection and Multi-Omics Data [4.010215468404495]
Prediction of patient survival using high-dimensional multi-omics data requires systematic feature selection methods. We developed a hybrid ensemble feature selection (hEFS) approach that combines data subsampling with multiple prognostic models. hEFS identifies significantly fewer and more stable biomarkers compared to conventional, late-fusion CoxLasso models.
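The subsampling-plus-voting idea behind hybrid ensemble feature selection can be sketched as follows. This is a toy stability-selection illustration under our own assumptions (the helper names `stable_features` and `top_corr`, the subsample fraction, and the 0.6 vote threshold are not from the paper):

```python
import random

def stable_features(X, y, select, n_rounds=50, frac=0.5, threshold=0.6, seed=0):
    """Refit a base selector on many random subsamples and keep the
    features chosen in at least `threshold` of the rounds."""
    rng = random.Random(seed)
    n = len(y)
    counts = {}
    for _ in range(n_rounds):
        idx = rng.sample(range(n), max(2, int(frac * n)))
        chosen = select([X[i] for i in idx], [y[i] for i in idx])
        for j in chosen:
            counts[j] = counts.get(j, 0) + 1
    return sorted(j for j, c in counts.items() if c / n_rounds >= threshold)

def top_corr(Xs, ys, k=2):
    """Toy base selector: the k features with largest |covariance| with y."""
    n, p = len(ys), len(Xs[0])
    my = sum(ys) / n
    scores = []
    for j in range(p):
        mx = sum(row[j] for row in Xs) / n
        cov = sum((row[j] - mx) * (yy - my) for row, yy in zip(Xs, ys)) / n
        scores.append((abs(cov), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

In the full method the base selector would be a prognostic survival model rather than this covariance ranking; the aggregation logic is the same.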
arXiv Detail & Related papers (2025-09-02T11:09:24Z) - Platform for Representation and Integration of multimodal Molecular Embeddings [43.54912893426355]
Existing machine learning methods for molecular embeddings are restricted to specific tasks or data modalities. Existing embeddings capture largely non-overlapping molecular signals, highlighting the value of embedding integration. We propose the Platform for Representation and Integration of multimodal Molecular Embeddings (PRISME) to integrate heterogeneous embeddings into a unified multimodal representation.
arXiv Detail & Related papers (2025-07-10T01:18:50Z) - Clinical NLP with Attention-Based Deep Learning for Multi-Disease Prediction [44.0876796031468]
This paper addresses the challenges posed by the unstructured nature and high-dimensional semantic complexity of electronic health record texts. A deep learning method based on attention mechanisms is proposed to achieve unified modeling for information extraction and multi-label disease prediction.
arXiv Detail & Related papers (2025-07-02T07:45:22Z) - Interpretable Graph Kolmogorov-Arnold Networks for Multi-Cancer Classification and Biomarker Identification using Multi-Omics Data [36.92842246372894]
The Multi-Omics Graph Kolmogorov-Arnold Network (MOGKAN) is a deep learning framework that utilizes messenger-RNA, micro-RNA sequences, and DNA methylation samples. By integrating multi-omics data with graph-based deep learning, the proposed approach demonstrates robust predictive performance and interpretability.
arXiv Detail & Related papers (2025-03-29T02:14:05Z) - MRGen: Segmentation Data Engine for Underrepresented MRI Modalities [59.61465292965639]
Training medical image segmentation models for rare yet clinically important imaging modalities is challenging due to the scarcity of annotated data. This paper investigates leveraging generative models to synthesize data for training segmentation models for underrepresented modalities. We present MRGen, a data engine for controllable medical image synthesis conditioned on text prompts and segmentation masks.
arXiv Detail & Related papers (2024-12-04T16:34:22Z) - Multimodal Outer Arithmetic Block Dual Fusion of Whole Slide Images and Omics Data for Precision Oncology [6.418265127069878]
We propose the use of omic embeddings during early and late fusion to capture complementary information from local (patch-level) to global (slide-level) interactions. This dual fusion strategy enhances interpretability and classification performance, highlighting its potential for clinical diagnostics.
arXiv Detail & Related papers (2024-11-26T13:25:53Z) - Differentiable Agent-based Epidemiology [71.81552021144589]
We introduce GradABM: a scalable, differentiable design for agent-based modeling that is amenable to gradient-based learning with automatic differentiation.
GradABM can simulate million-size populations in a few seconds on commodity hardware, integrate with deep neural networks, and ingest heterogeneous data sources.
arXiv Detail & Related papers (2022-07-20T07:32:02Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
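The consistency principle this entry describes, penalizing disagreement between predictions on an input and its perturbed copies, can be illustrated with a toy loss. A simplified sketch under our own assumptions; the function name and signature are not from the paper:

```python
def consistency_loss(model, x, perturb, trials=4):
    """Mean squared disagreement between a model's prediction on `x`
    and its predictions on `trials` perturbed copies of `x` -- the
    unlabeled-data term in a self-ensembling setup (toy version)."""
    base = model(x)
    total = 0.0
    for _ in range(trials):
        pert = model(perturb(x))
        total += sum((a - b) ** 2 for a, b in zip(base, pert)) / len(base)
    return total / trials
```

In practice `perturb` would apply stochastic augmentations (noise, dropout, flips), so each trial yields a different perturbed prediction.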
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.