Pipeline-Invariant Representation Learning for Neuroimaging
- URL: http://arxiv.org/abs/2208.12909v3
- Date: Mon, 16 Oct 2023 02:33:35 GMT
- Title: Pipeline-Invariant Representation Learning for Neuroimaging
- Authors: Xinhui Li, Alex Fedorov, Mrinal Mathur, Anees Abrol, Gregory Kiar,
Sergey Plis, Vince Calhoun
- Abstract summary: We evaluate how preprocessing pipeline selection can impact the downstream performance of a supervised learning model.
We propose two pipeline-invariant representation learning methodologies, MPSL and PXL, to improve robustness in classification performance.
These results suggest that our proposed models can be applied to mitigate pipeline-related biases, and to improve prediction robustness in brain-phenotype modeling.
- Score: 5.502218439301424
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has been widely applied in neuroimaging, including predicting
brain-phenotype relationships from magnetic resonance imaging (MRI) volumes.
MRI data usually requires extensive preprocessing prior to modeling, but
variation introduced by different MRI preprocessing pipelines may lead to
different scientific findings, even when using identical data. Motivated by
the data-centric perspective, we first evaluate how preprocessing pipeline
selection can impact the downstream performance of a supervised learning model.
We next propose two pipeline-invariant representation learning methodologies,
MPSL and PXL, to improve robustness in classification performance and to
capture similar neural network representations. Using 2000 human subjects from
the UK Biobank dataset, we demonstrate that the proposed models offer unique and
shared advantages: in particular, MPSL improves out-of-sample generalization to
new pipelines, while PXL improves within-sample prediction performance. Both
MPSL and PXL also learn representations that are more similar across pipelines.
These results suggest that our proposed
models can be applied to mitigate pipeline-related biases, and to improve
prediction robustness in brain-phenotype modeling.
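The core idea behind both proposed objectives — encouraging the network to produce similar representations for the same subject regardless of which preprocessing pipeline produced the input — can be sketched as a supervised loss plus a cross-pipeline alignment penalty. This is a minimal illustrative sketch only, not the paper's actual MPSL or PXL formulations; the cosine penalty, the function names, and the single weighting term are all assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    # Row-wise cosine similarity between two embedding matrices.
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

def cross_entropy(logits, labels):
    # Standard softmax cross-entropy, averaged over the batch.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def pipeline_invariant_loss(z_a, z_b, logits_a, labels, weight=1.0):
    """Supervised loss plus a penalty for disagreement between the
    embeddings of the same subjects under two preprocessing pipelines."""
    alignment_penalty = (1.0 - cosine_similarity(z_a, z_b)).mean()
    return cross_entropy(logits_a, labels) + weight * alignment_penalty

rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 16))              # embeddings from pipeline A
z_b = z_a + 0.1 * rng.normal(size=(8, 16))  # same subjects, pipeline B
logits = rng.normal(size=(8, 2))
labels = rng.integers(0, 2, size=8)
loss = pipeline_invariant_loss(z_a, z_b, logits, labels)
print(round(float(loss), 4))
```

In an actual training loop the alignment term would be computed between encoder outputs for differently preprocessed copies of the same MRI volume, pushing the learned representation toward pipeline invariance.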
Related papers
- Mitigating analytical variability in fMRI results with style transfer [0.9217021281095907]
We assume that the pipelines used to compute fMRI statistic maps can be treated as a style component.
We propose to use different generative models, among which, Generative Adversarial Networks (GAN) and Diffusion Models (DM) to convert statistic maps across different pipelines.
arXiv Detail & Related papers (2024-04-04T07:49:39Z) - Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z) - Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z) - Correcting Model Bias with Sparse Implicit Processes [0.9187159782788579]
We show that Sparse Implicit Processes (SIP) is capable of correcting model bias when the data generating mechanism differs strongly from the one implied by the model.
We use synthetic datasets to show that SIP is capable of providing predictive distributions that reflect the data better than the exact predictions of the initial, but wrongly assumed model.
arXiv Detail & Related papers (2022-07-21T18:00:01Z) - Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for extracting features from two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
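The maximally correlated linear projections that classical CCA seeks can be computed in closed form from the singular value decomposition of the whitened cross-covariance matrix. A minimal NumPy sketch of plain (non-deep, non-dynamic) CCA, with function and variable names chosen here purely for illustration:

```python
import numpy as np

def inv_sqrt(m, eps=1e-8):
    # Inverse matrix square root via eigendecomposition.
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, eps))) @ vecs.T

def cca(x, y):
    """Classical linear CCA: find projections of the two views whose
    images are maximally correlated. Returns the projection matrices
    and the canonical correlations (singular values of the whitened
    cross-covariance)."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    n = len(x)
    sxx = x.T @ x / n
    syy = y.T @ y / n
    sxy = x.T @ y / n
    k = inv_sqrt(sxx) @ sxy @ inv_sqrt(syy)
    u, s, vt = np.linalg.svd(k, full_matrices=False)
    return inv_sqrt(sxx) @ u, inv_sqrt(syy) @ vt.T, s

# Two noisy views driven by a shared latent signal.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
x = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))
y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
wx, wy, corrs = cca(x, y)
print(corrs.round(3))  # leading correlation is high for shared-signal views
```

Deep and dynamically-scaled variants replace these fixed linear maps with learned, input-dependent transformations while keeping the same correlation objective.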
arXiv Detail & Related papers (2022-03-23T12:52:49Z) - Harmonization with Flow-based Causal Inference [12.739380441313022]
This paper presents a normalizing-flow-based method to perform counterfactual inference upon a structural causal model (SCM) to harmonize medical data.
We evaluate on multiple, large, real-world medical datasets to observe that this method leads to better cross-domain generalization compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-12T19:57:35Z) - Multi-Sample Online Learning for Spiking Neural Networks based on Generalized Expectation Maximization [42.125394498649015]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains by processing information through binary, dynamic neural activations.
This paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights.
The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient.
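The statistical idea underlying the multi-compartment scheme — averaging statistics from several independent samples to tighten an estimate — can be seen in a toy Monte Carlo experiment. This sketch is generic and deliberately does not model spiking dynamics or the EM criterion; it only illustrates that estimator variance shrinks roughly in proportion to the number of independent samples averaged.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_estimate(num_samples):
    # Average of independent noisy one-sample estimates of a true mean of 0.5.
    return rng.binomial(1, 0.5, size=num_samples).mean()

def empirical_variance(num_samples, trials=20000):
    # Variance of the averaged estimator, measured over many repetitions.
    return np.var([noisy_estimate(num_samples) for _ in range(trials)])

v1 = empirical_variance(1)
v8 = empirical_variance(8)
print(v1 / v8)  # close to 8: variance shrinks roughly as 1/M
```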
arXiv Detail & Related papers (2021-02-05T16:39:42Z) - Learning Curves for Drug Response Prediction in Cancer Cell Lines [29.107984441845673]
We evaluate the data scaling properties of two neural networks (NNs) and two gradient boosting decision tree (GBDT) models trained on four drug screening datasets.
The learning curves are accurately fitted to a power law model, providing a framework for assessing the data scaling behavior of these predictors.
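Fitting a power law, error ≈ a·n^b, to a learning curve reduces to linear least squares in log-log space. A short sketch on synthetic, noise-free data; the coefficients and training-set sizes below are made up for illustration:

```python
import numpy as np

# Hypothetical learning-curve data: validation error vs. training-set size.
sizes = np.array([100, 200, 400, 800, 1600, 3200])
errors = 0.9 * sizes ** -0.35  # synthetic power-law data

# Fit error = a * n^b by linear least squares in log-log space:
# log(error) = log(a) + b * log(n).
slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
a, b = np.exp(intercept), slope
print(round(a, 3), round(b, 3))  # recovers a ≈ 0.9, b ≈ -0.35
```

With real, noisy learning curves the same log-log fit yields the exponent b that summarizes how quickly a predictor improves with more data.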
arXiv Detail & Related papers (2020-11-25T01:08:05Z) - Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z) - Ensemble Transfer Learning for the Prediction of Anti-Cancer Drug Response [49.86828302591469]
In this paper, we apply transfer learning to the prediction of anti-cancer drug response.
We apply the classic transfer learning framework that trains a prediction model on the source dataset and refines it on the target dataset.
The ensemble transfer learning pipeline is implemented using LightGBM and two deep neural network (DNN) models with different architectures.
arXiv Detail & Related papers (2020-05-13T20:29:48Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
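Truncated max-product belief propagation on a chain is, in the log (min-sum) domain, a familiar Viterbi-style dynamic program; a BP-Layer embeds a differentiable, truncated variant of this inside a CNN. The following is a generic min-sum sketch on a chain MRF, not the paper's layer:

```python
import numpy as np

def chain_max_product(unary, pairwise):
    """One forward sweep plus backtracking of min-sum (max-product in the
    log domain) inference on a chain MRF.
    unary: (T, K) label costs per node; pairwise: (K, K) transition costs.
    Returns the minimum-cost label sequence."""
    T, K = unary.shape
    msgs = np.zeros((T, K))          # accumulated min-cost messages
    back = np.zeros((T, K), dtype=int)  # backpointers for decoding
    for t in range(1, T):
        # scores[i, j]: best cost of ending at label i at t-1, moving to j.
        scores = msgs[t - 1][:, None] + unary[t - 1][:, None] + pairwise
        back[t] = scores.argmin(axis=0)
        msgs[t] = scores.min(axis=0)
    labels = np.zeros(T, dtype=int)
    labels[-1] = (msgs[-1] + unary[-1]).argmin()
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels

# A binary labeling problem with one noisy middle frame; the smoothness
# cost makes inference overrule the locally preferred label.
unary = np.array([[0, 1], [0, 1], [0.6, 0.4], [0, 1], [0, 1]])
smooth = 0.5 * (1.0 - np.eye(2))  # cost 0.5 for switching labels
print(chain_max_product(unary, smooth))  # smoothing flips the noisy frame
```

A BP-Layer generalizes this to grid graphs for dense prediction and truncates the message passing to a fixed number of sweeps so the whole computation stays trainable end to end.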
This list is automatically generated from the titles and abstracts of the papers on this site.