Leveraging both Lesion Features and Procedural Bias in Neuroimaging: A
Dual-Task Split dynamics of inverse scale space
- URL: http://arxiv.org/abs/2007.08740v1
- Date: Fri, 17 Jul 2020 03:41:48 GMT
- Title: Leveraging both Lesion Features and Procedural Bias in Neuroimaging: A
Dual-Task Split dynamics of inverse scale space
- Authors: Xinwei Sun, Wenjing Han, Lingjing Hu, Yuan Yao, Yizhou Wang
- Abstract summary: The prediction and selection of lesion features are two important tasks in voxel-based neuroimage analysis.
In this paper, we propose that the features/voxels in neuroimage data consist of three parts: lesion features, procedural bias, and null features.
To stably select lesion features and leverage procedural bias in prediction, we propose an iterative algorithm (termed GSplit LBI).
The validity and the benefit of our model can be shown by the improvement of prediction results and the interpretability of visualized procedural bias and lesion features.
- Score: 21.05070956384346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prediction and selection of lesion features are two important tasks in
voxel-based neuroimage analysis. Existing multivariate learning models treat
the two tasks as equivalent and optimize them simultaneously. However, in
addition to lesion features, we observe another type of feature, commonly
introduced during preprocessing, that can improve the prediction result. We
call this type of feature procedural bias.
Therefore, in this paper, we propose that the features/voxels in neuroimage
data consist of three orthogonal parts: lesion features, procedural bias,
and null features. To stably select lesion features and leverage procedural
bias into prediction, we propose an iterative algorithm (termed GSplit LBI) as
a discretization of differential inclusion of inverse scale space, which is the
combination of Variable Splitting scheme and Linearized Bregman Iteration
(LBI). Specifically, with a variable splitting term, two estimators are
introduced and split apart, i.e. one is for feature selection (the sparse
estimator) and the other is for prediction (the dense estimator). Implemented
with Linearized Bregman Iteration (LBI), the solution path of both estimators
can be returned with different sparsity levels on the sparse estimator for the
selection of lesion features. Moreover, the dense estimator can additionally
leverage procedural bias to further improve prediction results. To test the
efficacy of our method, we conduct experiments on a simulated study and the
Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The validity and
the benefit of our model can be shown by the improvement of prediction results
and the interpretability of visualized procedural bias and lesion features.
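The split dynamics described in the abstract can be illustrated in a minimal sketch of the Split LBI iteration on a least-squares loss with an identity splitting; the paper's GSplit LBI additionally imposes graph-guided structured sparsity, which is omitted here. The parameter defaults and step-size choice below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_threshold(z, t=1.0):
    """Elementwise soft-thresholding: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def split_lbi(X, y, nu=1.0, kappa=10.0, alpha=None, n_iters=500):
    """Simplified Split LBI for least-squares regression.

    Joint loss: L(beta, gamma) = ||y - X beta||^2 / (2n)
                                 + ||beta - gamma||^2 / (2 nu)
    beta  -- dense estimator, updated by plain gradient descent
    gamma -- sparse estimator, updated by Linearized Bregman Iteration
    Returns the solution paths of both estimators.
    """
    n, p = X.shape
    if alpha is None:
        # conservative step size for stability (an assumption, not from the paper)
        alpha = nu / (kappa * (np.linalg.norm(X, 2) ** 2 / n + 1.0))
    beta, gamma, z = np.zeros(p), np.zeros(p), np.zeros(p)
    beta_path, gamma_path = [], []
    for _ in range(n_iters):
        grad_beta = -X.T @ (y - X @ beta) / n + (beta - gamma) / nu
        grad_gamma = (gamma - beta) / nu
        beta = beta - kappa * alpha * grad_beta      # dense estimator: gradient step
        z = z - alpha * grad_gamma                   # Bregman auxiliary variable
        gamma = kappa * soft_threshold(z, 1.0)       # sparse estimator: thresholding
        beta_path.append(beta.copy())
        gamma_path.append(gamma.copy())
    return np.array(beta_path), np.array(gamma_path)
```

Along the returned path, strong features enter the sparse estimator `gamma` first (the inverse-scale-space property), so early iterates give highly sparse selections, while the dense estimator `beta` retains the remaining predictive signal.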
Related papers
- Embracing Uncertainty Flexibility: Harnessing a Supervised Tree Kernel to Empower Ensemble Modelling for 2D Echocardiography-Based Prediction of Right Ventricular Volume [0.5492530316344587]
The right ventricular (RV) function deterioration strongly predicts clinical outcomes in numerous circumstances.
We propose to complement the volume predictions with uncertainty scores.
The proposed framework can be used to enhance the decision-making process and reduce risks.
arXiv Detail & Related papers (2024-03-04T12:36:31Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Coordinated Double Machine Learning [8.808993671472349]
This paper argues that a carefully coordinated learning algorithm for deep neural networks may reduce the estimation bias.
The improved empirical performance of the proposed method is demonstrated through numerical experiments on both simulated and real data.
arXiv Detail & Related papers (2022-06-02T05:56:21Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Jump Interval-Learning for Individualized Decision Making [21.891586204541877]
We propose a jump interval-learning to develop an individualized interval-valued decision rule (I2DR)
Unlike IDRs that recommend a single treatment, the proposed I2DR yields an interval of treatment options for each individual.
arXiv Detail & Related papers (2021-11-17T03:29:59Z)
- The Bias-Variance Tradeoff of Doubly Robust Estimator with Targeted $L_1$ regularized Neural Networks Predictions [0.0]
The Doubly Robust (DR) estimation of ATE can be carried out in 2 steps, where in the first step, the treatment and outcome are modeled, and in the second step the predictions are inserted into the DR estimator.
The model misspecification in the first step has led researchers to utilize Machine Learning algorithms instead of parametric algorithms.
arXiv Detail & Related papers (2021-08-02T15:41:27Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulation setting and on real data from large scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Supervised Autoencoders Learn Robust Joint Factor Models of Neural Activity [2.8402080392117752]
Neuroscience applications collect high-dimensional predictors corresponding to brain activity in different regions along with behavioral outcomes.
Joint factor models for the predictors and outcomes are natural, but maximum likelihood estimates of these models can struggle in practice when there is model misspecification.
We propose an alternative inference strategy based on supervised autoencoders; rather than placing a probability distribution on the latent factors, we define them as an unknown function of the high-dimensional predictors.
arXiv Detail & Related papers (2020-04-10T19:31:57Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.