Modality-Agnostic Style Transfer for Holistic Feature Imputation
- URL: http://arxiv.org/abs/2503.02898v1
- Date: Mon, 03 Mar 2025 07:09:24 GMT
- Title: Modality-Agnostic Style Transfer for Holistic Feature Imputation
- Authors: Seunghun Baek, Jaeyoon Sim, Mustafa Dere, Minjeong Kim, Guorong Wu, Won Hwa Kim
- Abstract summary: We propose a framework that generates unobserved imaging measures for specific subjects using their existing measures. Our framework transfers modality-specific style while preserving AD-specific content. This is done by domain adversarial training that preserves modality-agnostic but AD-specific information, while a generative adversarial network adds an indistinguishable modality-specific style.
- Score: 5.62583732398926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Characterizing a preclinical stage of Alzheimer's Disease (AD) via a single imaging modality is difficult, as its early symptoms are quite subtle. Therefore, many neuroimaging studies are curated with various imaging modalities, e.g., MRI and PET; however, it is often challenging to acquire all of them from all subjects, and missing data become inevitable. In this regard, we propose a framework that generates unobserved imaging measures for specific subjects using their existing measures, thereby reducing the need for additional examinations. Our framework transfers modality-specific style while preserving AD-specific content. This is done by domain adversarial training that preserves modality-agnostic but AD-specific information, while a generative adversarial network adds an indistinguishable modality-specific style. Our proposed framework is evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) study and compared with other imputation methods in terms of generated data quality. A small average Cohen's $d$ $< 0.19$ between our generated measures and real ones suggests that the synthetic data are practically usable regardless of their modality type.
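The domain adversarial training described in the abstract conventionally hinges on a gradient reversal step: the domain (modality) classifier is updated by ordinary gradient descent, while the gradient flowing back into the feature encoder is sign-flipped, pushing the encoder toward modality-agnostic features. A minimal scalar sketch of one such update, assuming a toy linear encoder and logistic domain classifier (all names here are illustrative, not the authors' code):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def grl_step(w, v, x, domain, lam=1.0, lr=0.1):
    """One update of domain-adversarial training on a scalar toy model.

    Encoder: z = w * x.  Domain classifier: p = sigmoid(v * z).
    The classifier descends the binary cross-entropy domain loss; the
    gradient reversal layer flips the sign of the gradient reaching the
    encoder, so the encoder ascends it (making features domain-indistinguishable).
    """
    z = w * x
    p = sigmoid(v * z)
    dL_dlogit = p - domain          # BCE gradient w.r.t. the logit v * z
    grad_v = dL_dlogit * z          # classifier gradient (normal descent)
    grad_z = dL_dlogit * v          # gradient arriving at the encoder...
    grad_w = -lam * grad_z * x      # ...sign-flipped by the reversal layer
    return w - lr * grad_w, v - lr * grad_v
```

With `lam = 0` the reversal is disabled and the encoder ignores the domain loss entirely; larger `lam` pushes the features harder toward modality invariance.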
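The quality criterion above, an average Cohen's $d$ below 0.19 between generated and real measures, is a standard effect size: the mean difference divided by the pooled standard deviation. A self-contained sketch of that computation (illustrative, not the authors' evaluation code):

```python
import math

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation (sample variances, ddof=1)."""
    nx, ny = len(x), len(y)
    mx = sum(x) / nx
    my = sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled
```

By the usual convention, |d| below about 0.2 counts as a small effect, which is the sense in which the synthetic measures are "practically usable".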
Related papers
- PGAD: Prototype-Guided Adaptive Distillation for Multi-Modal Learning in AD Diagnosis [4.455792848101014]
Missing modalities pose a major issue in Alzheimer's Disease (AD) diagnosis.
Most existing methods train only on complete data, ignoring the large proportion of incomplete samples in real-world datasets like ADNI.
We propose a Prototype-Guided Adaptive Distillation (PGAD) framework that directly incorporates incomplete multi-modal data into training.
arXiv Detail & Related papers (2025-03-05T14:39:31Z) - OCL: Ordinal Contrastive Learning for Imputating Features with Progressive Labels [4.434835769977399]
We introduce a holistic imaging feature imputation method that enables leveraging diverse imaging features while retaining all subjects. The proposed method promotes holistic imaging feature imputation across various modalities in the shared embedding space. In the experiments, we show that our networks deliver favorable results for statistical analysis and classification against imputation baselines.
arXiv Detail & Related papers (2025-03-03T07:23:29Z) - Toward Robust Early Detection of Alzheimer's Disease via an Integrated Multimodal Learning Approach [5.9091823080038814]
Alzheimer's Disease (AD) is a complex neurodegenerative disorder marked by memory loss, executive dysfunction, and personality changes. This study introduces an advanced multimodal classification model that integrates clinical, cognitive, neuroimaging, and EEG data.
arXiv Detail & Related papers (2024-08-29T08:26:00Z) - GFE-Mamba: Mamba-based AD Multi-modal Progression Assessment via Generative Feature Extraction from MCI [5.834776094182435]
Alzheimer's Disease (AD) is a progressive, irreversible neurodegenerative disorder that often originates from Mild Cognitive Impairment (MCI). The GFE-Mamba model effectively predicts the progression from MCI to AD and surpasses several leading methods in the field.
arXiv Detail & Related papers (2024-07-22T15:22:33Z) - Diagnosing Alzheimer's Disease using Early-Late Multimodal Data Fusion
with Jacobian Maps [1.5501208213584152]
Alzheimer's disease (AD) is a prevalent and debilitating neurodegenerative disorder impacting a large aging population.
We propose an efficient early-late fusion (ELF) approach, which leverages a convolutional neural network for automated feature extraction and random forests for classification.
To tackle the challenge of detecting subtle changes in brain volume, we transform images into the Jacobian domain (JD).
arXiv Detail & Related papers (2023-10-25T19:02:57Z) - Genetic InfoMax: Exploring Mutual Information Maximization in
High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z) - Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z) - FedMed-GAN: Federated Domain Translation on Unsupervised Cross-Modality
Brain Image Synthesis [55.939957482776194]
We propose a new benchmark for federated domain translation on unsupervised brain image synthesis (termed FedMed-GAN).
FedMed-GAN mitigates the mode collapse without sacrificing the performance of generators.
A comprehensive evaluation is provided for comparing FedMed-GAN and other centralized methods.
arXiv Detail & Related papers (2022-01-22T02:50:29Z) - Learn to Ignore: Domain Adaptation for Multi-Site MRI Analysis [1.3079444139643956]
We present a novel method that learns to ignore the scanner-related features present in the images, while learning features relevant for the classification task.
Our method outperforms state-of-the-art domain adaptation methods on a classification task between Multiple Sclerosis patients and healthy subjects.
arXiv Detail & Related papers (2021-10-13T15:40:50Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
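The Prototypical Network meta-learner named above classifies a query by its nearest class prototype, where each prototype is the mean of that class's support embeddings. A minimal sketch of that standard formulation (names illustrative; not the Select-ProtoNet code):

```python
def prototypes(support):
    """support: {label: list of embedding vectors}; returns {label: mean vector}."""
    return {
        label: [sum(dim) / len(vecs) for dim in zip(*vecs)]
        for label, vecs in support.items()
    }

def classify(query, protos):
    """Assign the query embedding to the label of the nearest prototype
    (squared Euclidean distance, as in the original Prototypical Networks)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: sqdist(query, protos[label]))
```

Because the prototype is just a per-class mean, adding or removing a support example only shifts one centroid, which is what makes the scheme cheap in the few-shot regime.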
arXiv Detail & Related papers (2020-09-02T02:50:30Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
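The consistency objective described for this entry penalizes disagreement between a model's predictions on the same unlabeled input under two different perturbations. A minimal mean-squared-consistency sketch (illustrative; the paper additionally uses a relation-driven term not shown here):

```python
def consistency_loss(preds_a, preds_b):
    """Mean squared difference between predictions for the same inputs under
    two perturbations; zero when the model is perturbation-invariant."""
    if len(preds_a) != len(preds_b):
        raise ValueError("prediction vectors must have equal length")
    return sum((a - b) ** 2 for a, b in zip(preds_a, preds_b)) / len(preds_a)
```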
arXiv Detail & Related papers (2020-05-15T06:57:54Z) - Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using
Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z) - Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.