SEMPAI: a Self-Enhancing Multi-Photon Artificial Intelligence for
prior-informed assessment of muscle function and pathology
- URL: http://arxiv.org/abs/2210.16273v1
- Date: Fri, 28 Oct 2022 17:03:04 GMT
- Title: SEMPAI: a Self-Enhancing Multi-Photon Artificial Intelligence for
prior-informed assessment of muscle function and pathology
- Authors: Alexander Mühlberg, Paul Ritter, Simon Langer, Chloé Goossens,
Stefanie Nübler, Dominik Schneidereit, Oliver Taubmann, Felix Denzinger,
Dominik Nörenberg, Michael Haug, Wolfgang H. Goldmann, Andreas K. Maier,
Oliver Friedrich, Lucas Kreiss
- Abstract summary: We introduce the Self-Enhancing Multi-Photon Artificial Intelligence (SEMPAI), which integrates hypothesis-driven priors into a data-driven deep learning approach.
SEMPAI performs joint learning of several tasks to enable prediction for small datasets.
SEMPAI outperforms state-of-the-art biomarkers in six of seven predictive tasks, including those with scarce data.
- Score: 48.54269377408277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) shows notable success in biomedical studies. However,
most DL algorithms operate as black boxes, exclude biomedical experts, and
require extensive data. We introduce the Self-Enhancing Multi-Photon Artificial
Intelligence (SEMPAI), which integrates hypothesis-driven priors into a
data-driven DL approach for research on multiphoton microscopy (MPM) of muscle
fibers. SEMPAI utilizes meta-learning to optimize prior integration, data
representation, and neural network architecture simultaneously. This allows
hypothesis testing and provides interpretable feedback about the origin of
biological information in MPM images. SEMPAI performs joint learning of several
tasks to enable prediction for small datasets. The method is applied to an
extensive multi-study dataset, resulting in the largest joint analysis of
pathologies and function for single muscle fibers. SEMPAI outperforms
state-of-the-art biomarkers in six of seven predictive tasks, including those
with scarce data. SEMPAI's DL models with integrated priors are superior to
those without priors and to prior-only machine learning approaches.
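The abstract gives no implementation details, but the core loop it describes, meta-learning that jointly selects prior integration, data representation, and network architecture for a model with one head per task, can be sketched roughly as below. The search space, placeholder scoring, and all names are illustrative assumptions, not SEMPAI's actual code.

```python
# Illustrative sketch only: a toy configuration search standing in for
# SEMPAI's meta-learning. All names and choices here are hypothetical.
import random
import torch
import torch.nn as nn

SEARCH_SPACE = {
    "prior_weight": [0.0, 0.1, 0.5, 1.0],        # weight of hypothesis-driven priors in the loss
    "representation": ["raw", "sarcomere_mask"],  # candidate input encodings of the MPM image
    "depth": [2, 4],                              # candidate network depths
}

def build_model(depth: int, n_tasks: int, in_dim: int = 64) -> nn.Module:
    """Shared trunk with one linear head per task (joint multi-task learning)."""
    layers, dim = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(dim, 32), nn.ReLU()]
        dim = 32

    class MultiTask(nn.Module):
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(*layers)
            self.heads = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_tasks)])

        def forward(self, x):
            z = self.trunk(x)
            return [h(z) for h in self.heads]  # one prediction per task

    return MultiTask()

def evaluate(cfg, data) -> float:
    """Train briefly and return a validation score; cfg["prior_weight"] would
    scale a penalty tying predictions to hand-crafted biomarkers."""
    model = build_model(cfg["depth"], n_tasks=7)  # seven predictive tasks
    ...  # fit on `data`: sum of task losses + cfg["prior_weight"] * prior_loss
    return random.random()  # placeholder validation score

best = max(({"prior_weight": pw, "representation": r, "depth": d}
            for pw in SEARCH_SPACE["prior_weight"]
            for r in SEARCH_SPACE["representation"]
            for d in SEARCH_SPACE["depth"]),
           key=lambda cfg: evaluate(cfg, data=None))
print("selected configuration:", best)
```

Because the selected configuration names the prior weight and representation explicitly, the winning choice itself is interpretable feedback about where the biological signal lives.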
Related papers
- RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334]
Multimodal Large Language Models (MLLMs) have recently attracted substantial interest, reflecting their emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
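As a rough illustration of the retrieval-augmentation idea only (not RA-BLIP's actual multimodal pipeline), the sketch below fetches the most similar stored passages by cosine similarity and prepends them to the model input; all embeddings are random stand-ins.

```python
# Minimal retrieval-augmentation sketch; hypothetical names and data.
import numpy as np

def retrieve(query_vec, store_vecs, texts, k=2):
    """Cosine-similarity top-k retrieval over a small knowledge store."""
    sims = store_vecs @ query_vec / (
        np.linalg.norm(store_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    return [texts[i] for i in np.argsort(-sims)[:k]]

rng = np.random.default_rng(0)
store = rng.normal(size=(5, 16))             # stand-in passage embeddings
passages = [f"passage {i}" for i in range(5)]
query = rng.normal(size=16)                  # stand-in query embedding
context = retrieve(query, store, passages)
prompt = " ".join(context) + " [SEP] question text"  # augmented model input
print(prompt)
```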
arXiv Detail & Related papers (2024-10-18T03:45:19Z)
- Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology [2.7280901660033643]
This work explores the scaling properties of weakly supervised classifiers and self-supervised masked autoencoders (MAEs).
Our results show that ViT-based MAEs outperform weakly supervised classifiers on a variety of tasks, achieving as much as an 11.5% relative improvement when recalling known biological relationships curated from public databases.
We develop a new channel-agnostic MAE architecture (CA-MAE) that allows for inputting images of different numbers and orders of channels at inference time.
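The channel-agnostic idea can be illustrated with a shared single-channel patch embedding, so the token sequence simply grows with the number of channels. This is a hedged sketch of the concept, not the paper's exact CA-MAE architecture.

```python
# Hedged sketch: each channel is patch-embedded with shared weights, so the
# channel count and order can vary at inference time. Illustrative only.
import torch
import torch.nn as nn

class ChannelAgnosticPatchEmbed(nn.Module):
    def __init__(self, patch: int = 16, dim: int = 128):
        super().__init__()
        # One-channel patch embedding shared across all channels.
        self.proj = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) with *any* number of channels.
        b, c, h, w = x.shape
        per_channel = [self.proj(x[:, i:i + 1]) for i in range(c)]
        tokens = torch.cat([p.flatten(2).transpose(1, 2) for p in per_channel], dim=1)
        return tokens  # (batch, c * n_patches, dim), ready for a ViT encoder

embed = ChannelAgnosticPatchEmbed()
print(embed(torch.randn(2, 3, 64, 64)).shape)  # works for 3 channels...
print(embed(torch.randn(2, 5, 64, 64)).shape)  # ...and for 5, in any order
```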
arXiv Detail & Related papers (2024-04-16T02:42:06Z)
- Distributed Federated Learning-Based Deep Learning Model for Privacy MRI Brain Tumor Detection [11.980634373191542]
Distributed training can facilitate the processing of large medical image datasets and improve the accuracy and efficiency of disease diagnosis.
This paper presents an innovative approach to medical image classification, leveraging Federated Learning (FL) to address the dual challenges of data privacy and efficient disease diagnosis.
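The abstract does not specify the aggregation scheme; a minimal FedAvg-style sketch conveys the privacy mechanism it describes: each site trains locally and only model parameters are shared and averaged. The local training step is a stub.

```python
# FedAvg-style sketch: hospitals train locally on private scans and only
# weights are averaged centrally. Illustrative, not the paper's protocol.
import torch
import torch.nn as nn

def local_update(model: nn.Module) -> dict:
    """Placeholder local training on one site's private data."""
    # ... real code would run a few SGD steps on local MRI scans ...
    return {k: v.clone() for k, v in model.state_dict().items()}

def fed_avg(states: list) -> dict:
    """Average parameters across sites; raw images never leave a site."""
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

global_model = nn.Linear(32, 2)          # stand-in tumor classifier
for round_ in range(3):                  # communication rounds
    site_states = [local_update(global_model) for _ in range(4)]  # 4 hospitals
    global_model.load_state_dict(fed_avg(site_states))
```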
arXiv Detail & Related papers (2024-04-15T09:07:19Z)
- HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data [10.774128925670183]
This paper presents the Hybrid Early-fusion Attention Learning Network (HEALNet), a flexible multimodal fusion architecture.
We conduct multimodal survival analysis on Whole Slide Images and Multi-omic data on four cancer datasets from The Cancer Genome Atlas (TCGA).
HEALNet achieves state-of-the-art performance compared to other end-to-end trained fusion models.
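HEALNet's exact layers are not given in this summary; below is a hedged sketch of hybrid early fusion in the spirit described: a shared latent state attends over each modality's tokens in turn, so modalities are fused early and a missing one can simply be skipped. Shapes and names are illustrative.

```python
# Hedged early-fusion sketch: shared latent bottleneck queries each modality.
import torch
import torch.nn as nn

dim = 64
latent = nn.Parameter(torch.randn(1, 8, dim))      # shared fusion state
attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
head = nn.Linear(dim, 1)                           # e.g. survival risk head

wsi_tokens = torch.randn(2, 100, dim)    # stand-in Whole Slide Image patches
omics_tokens = torch.randn(2, 20, dim)   # stand-in multi-omic features

state = latent.expand(2, -1, -1)
for modality in (wsi_tokens, omics_tokens):        # early, iterative fusion
    fused, _ = attn(state, modality, modality)     # latent queries modality
    state = state + fused                          # residual update
risk = head(state.mean(1))
print(risk.shape)  # (2, 1)
```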
arXiv Detail & Related papers (2023-11-15T17:06:26Z)
- Diagnosing Alzheimer's Disease using Early-Late Multimodal Data Fusion with Jacobian Maps [1.5501208213584152]
Alzheimer's disease (AD) is a prevalent and debilitating neurodegenerative disorder impacting a large aging population.
We propose an efficient early-late fusion (ELF) approach, which leverages a convolutional neural network for automated feature extraction and random forests for classification.
To tackle the challenge of detecting subtle changes in brain volume, we transform images into the Jacobian domain (JD).
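A minimal sketch of that fusion pattern: learned CNN features feed a random forest classifier. The Jacobian-domain transform is left as a stub because the summary gives no registration details; all data and shapes are stand-ins.

```python
# Illustrative CNN-features -> random-forest pipeline; hypothetical names.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

def to_jacobian_domain(volume: np.ndarray) -> np.ndarray:
    """Stub: the paper maps images to Jacobian determinants of a registration
    deformation field to expose subtle volume changes."""
    return volume  # placeholder

cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten())  # 8*4*4 = 128 feats

scans = np.random.rand(20, 1, 64, 64).astype(np.float32)  # stand-in MRI slices
labels = np.random.randint(0, 2, size=20)                  # AD vs control

with torch.no_grad():
    feats = cnn(torch.from_numpy(to_jacobian_domain(scans))).numpy()
clf = RandomForestClassifier(n_estimators=100).fit(feats, labels)
print(clf.predict(feats[:3]))
```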
arXiv Detail & Related papers (2023-10-25T19:02:57Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
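The decision-based idea can be caricatured as a bandit over masking ratios, with a downstream proxy score as the reward. The paper's actual agent, masking strategies, and reward are richer than this hypothetical sketch.

```python
# Toy epsilon-greedy bandit over masking ratios; purely illustrative.
import random

ratios = [0.25, 0.5, 0.75]            # candidate masking ratios (actions)
value = {r: 0.0 for r in ratios}      # running reward estimates
counts = {r: 0 for r in ratios}

def pretrain_and_score(ratio: float) -> float:
    """Placeholder: mask `ratio` of patches, train the MIM model briefly,
    return a downstream proxy score (e.g. neuron-segmentation accuracy)."""
    return random.gauss(1.0 - abs(ratio - 0.5), 0.1)  # fake reward signal

for step in range(50):
    r = random.choice(ratios) if random.random() < 0.1 else max(value, key=value.get)
    reward = pretrain_and_score(r)
    counts[r] += 1
    value[r] += (reward - value[r]) / counts[r]       # incremental mean

print("selected masking ratio:", max(value, key=value.get))
```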
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans [11.420077093805382]
We propose to use graph neural networks (GNNs), which are designed to deal with problems in non-Euclidean domains.
In this study, we demonstrate how brain networks can be created from sMRI or PET images.
We then present a multi-modal GNN framework where each modality has its own GNN branch, and propose a technique to combine the multi-modal data.
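A dependency-free sketch of that two-branch design: one message-passing branch per modality over a brain graph, with pooled embeddings concatenated for the final prediction. The hand-rolled GCN layer, graph, and features are all illustrative stand-ins.

```python
# Two GNN branches (sMRI, PET) fused by concatenation; illustrative only.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ x / deg))    # mean aggregation

n, d = 16, 8                                          # brain regions, features
smri_x, pet_x = torch.randn(n, d), torch.randn(n, d)  # per-modality node features
adj = (torch.rand(n, n) > 0.7).float()                # stand-in brain graph

branch_smri, branch_pet = GCNLayer(d, 16), GCNLayer(d, 16)
z = torch.cat([branch_smri(smri_x, adj).mean(0),      # pool each branch...
               branch_pet(pet_x, adj).mean(0)])       # ...then fuse by concat
logit = nn.Linear(32, 1)(z)                           # AD vs control score
print(logit)
```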
arXiv Detail & Related papers (2023-07-31T02:04:05Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
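A greatly simplified stand-in for the matching objective: pair patch embeddings from two augmented views by solving a first-order assignment problem. LVM-Med's actual loss is a second-order (pairwise-consistency) matching, which this sketch deliberately omits.

```python
# First-order assignment as a simplified stand-in for second-order matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
view_a = rng.normal(size=(6, 32))                 # patch embeddings, view 1
view_b = view_a[rng.permutation(6)] + 0.05 * rng.normal(size=(6, 32))

cost = -view_a @ view_b.T                         # negative similarity
rows, cols = linear_sum_assignment(cost)          # optimal 1:1 matching
print(list(zip(rows, cols)))                      # recovered correspondences
# A self-supervised loss would pull matched pairs together, others apart.
```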
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort.
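A toy sketch of the imputation idea: when a modality is missing, a generator predicts its embedding from the observed one, and a discriminator pushes imputed embeddings toward the real distribution. Purely illustrative; the paper's transformer components are omitted.

```python
# GAN-style imputation of a missing modality embedding; hypothetical shapes.
import torch
import torch.nn as nn

d = 32
gen = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))
disc = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))

imaging = torch.randn(8, d)          # observed modality (e.g. MRI embedding)
genetic_fake = gen(imaging)          # imputed missing modality
genetic_real = torch.randn(8, d)     # embeddings from complete samples

bce = nn.BCEWithLogitsLoss()
d_loss = bce(disc(genetic_real), torch.ones(8, 1)) + \
         bce(disc(genetic_fake.detach()), torch.zeros(8, 1))
g_loss = bce(disc(genetic_fake), torch.ones(8, 1))  # fool the discriminator
print(float(d_loss), float(g_loss))
```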
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
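The generative model itself is compact enough to write down: each subject i observes x_i = A_i s + n_i, a subject-specific linear mix of shared independent sources plus noise. Below is a simulation of that model (not the fitting algorithm), with arbitrary sizes.

```python
# Simulate the MultiView ICA generative model: x_i = A_i @ s + n_i.
import numpy as np

rng = np.random.default_rng(0)
k, t, n_subjects = 3, 500, 4
shared = rng.laplace(size=(k, t))              # independent sources, shared
subjects = [rng.normal(size=(k, k)) @ shared   # subject-specific mixing A_i
            + 0.1 * rng.normal(size=(k, t))    # subject noise n_i
            for _ in range(n_subjects)]
# Fitting MultiView ICA estimates every A_i and the common `shared` jointly,
# pooling evidence across subjects for better sensitivity.
print(subjects[0].shape)  # (3, 500) per subject
```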
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.