Statistical Agnostic Mapping: a Framework in Neuroimaging based on
Concentration Inequalities
- URL: http://arxiv.org/abs/1912.12274v1
- Date: Fri, 27 Dec 2019 18:27:50 GMT
- Authors: J M Gorriz, SiPBA Group, and CAM neuroscience
- Abstract summary: We derive a Statistical Agnostic (non-parametric) Mapping at voxel or multi-voxel level.
We propose a novel framework in neuroimaging based on concentration inequalities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the 1970s a novel branch of statistics emerged that focuses, in
the pattern recognition problem, on selecting a function which fulfils a
definite relationship between the quality of the approximation and its
complexity. These data-driven approaches are mainly devoted to problems of
estimating dependencies with limited sample sizes, and they comprise all the
empirical out-of-sample generalization approaches, e.g. cross-validation (CV).
Although the latter are \emph{not designed for testing competing
hypotheses or comparing different models} in neuroimaging, there are a number
of theoretical developments within this theory which could be employed to
derive a Statistical Agnostic (non-parametric) Mapping (SAM) at the voxel or
multi-voxel level. Moreover, SAMs could relieve (i) the problem of instability
with limited sample sizes when estimating the actual risk via CV approaches,
e.g. large error bars, and provide (ii) an alternative to Family-wise error
(FWE)-corrected p-value maps in inferential statistics for hypothesis testing.
In this sense, we propose a novel framework in neuroimaging based on
concentration inequalities, which results in (i) a rigorous development for
model validation with a small sample/dimension ratio, and (ii) a
less conservative procedure than FWE p-value correction to determine the brain
significance maps from the inferences made using small upper bounds of the
actual risk.
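As a rough illustration of the reasoning the abstract describes, the sketch below uses Hoeffding's inequality, a textbook concentration inequality, to turn an empirical classification error into a high-probability upper bound on the actual risk, and declares a region significant only when even that worst-case bound beats chance. The function names, the 0.5 chance level, and the choice of Hoeffding's bound (the paper may use tighter agnostic bounds) are illustrative assumptions, not the authors' implementation.

```python
import math

def hoeffding_upper_bound(emp_risk, n, delta=0.05):
    """One-sided Hoeffding bound: with probability >= 1 - delta,
    actual risk <= emp_risk + sqrt(ln(1/delta) / (2n))."""
    return emp_risk + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def is_significant(emp_risk, n, delta=0.05, chance=0.5):
    """Declare a region significant only when the upper-bounded
    (worst-case) risk still stays below chance-level performance."""
    return hoeffding_upper_bound(emp_risk, n, delta) < chance

# e.g. 100 labelled scans with 30% empirical error:
bound = hoeffding_upper_bound(0.30, 100)  # about 0.30 + 0.12
```

Note how the bound shrinks with the sample size n: the small-sample penalty is explicit, which is what makes this style of inference usable when CV error bars would be large.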
Related papers
- Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We characterize the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time-series forecasting.
We validate our theory across a variety of high-dimensional data.
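For context, a minimal sketch of the standard objects in this entry: closed-form ridge regression and a naive k-fold CV estimate of out-of-sample risk, whose behaviour under correlated samples is what the paper analyses. All names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_risk(X, y, lam, k=5, seed=0):
    """Naive k-fold CV estimate of squared out-of-sample risk.
    With correlated samples, random splits leak dependence across
    folds, which biases this estimate."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))
```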
arXiv Detail & Related papers (2024-08-08T17:27:29Z)
- Model Free Prediction with Uncertainty Assessment [7.524024486998338]
We propose a novel framework that transforms the deep estimation paradigm into a platform conducive to conditional mean estimation.
We develop an end-to-end convergence rate for the conditional diffusion model and establish the normality of the generated samples.
Through numerical experiments, we empirically validate the efficacy of our proposed methodology.
arXiv Detail & Related papers (2024-05-21T11:19:50Z)
- Scalable Bayesian inference for the generalized linear mixed model [2.45365913654612]
We introduce a statistical inference algorithm at the intersection of AI and Bayesian inference.
Our algorithm is an extension of gradient MCMC with novel contributions that address the treatment of correlated data.
We apply our algorithm to a large electronic health records database.
arXiv Detail & Related papers (2024-03-05T14:35:34Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
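A hedged sketch of the idea in this entry: abstain only when an upper confidence limit on the predicted conditional variance, rather than its point value alone, exceeds a tolerance. The normal-approximation interval and all names below are illustrative simplifications, not the paper's procedure.

```python
def abstain(pred_var, var_std_err, threshold):
    """Abstention rule in the spirit of variance testing: reject
    the hypothesis 'variance <= threshold' only when the upper
    confidence limit of the variance estimate clears the threshold,
    so the variance predictor's own uncertainty is accounted for."""
    z = 1.6448536269514722  # one-sided 95% normal quantile
    upper = pred_var + z * var_std_err
    return upper > threshold
```

A point-estimate rule would abstain whenever `pred_var > threshold`; folding in `var_std_err` makes the decision robust to a poorly estimated variance.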
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Pseudo value-based Deep Neural Networks for Multi-state Survival Analysis [9.659041001051415]
We propose a new class of pseudo-value-based deep learning models for multi-state survival analysis.
Our proposed models achieve state-of-the-art results under various censoring settings.
arXiv Detail & Related papers (2022-07-12T03:58:05Z)
- Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z)
- Causality and Generalizability: Identifiability and Learning Methods [0.0]
This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust prediction methods.
We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization.
We propose a general framework for distributional robustness with respect to intervention-induced distributions.
arXiv Detail & Related papers (2021-10-04T13:12:11Z)
- Deep Learning in current Neuroimaging: a multivariate approach with power and type I error control but arguable generalization ability [0.158310730488265]
A non-parametric framework is proposed that estimates the statistical significance of classifications using deep learning architectures.
A label permutation test is proposed in both studies using cross-validation (CV) and resubstitution with upper bound correction (RUB) as validation methods.
We found in the permutation test that CV and RUB methods offer a false positive rate close to the significance level and an acceptable statistical power.
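The label-permutation test mentioned in this entry can be sketched generically: the p-value is the fraction of label shufflings whose score matches or beats the observed one. Here `score_fn` stands in for any validation score (CV or RUB), and the implementation is an illustrative assumption rather than the authors' code.

```python
import numpy as np

def permutation_pvalue(score_fn, X, y, n_perm=1000, seed=0):
    """Label-permutation test with the standard +1 correction:
    p = (1 + #{permuted scores >= observed}) / (n_perm + 1)."""
    rng = np.random.default_rng(seed)
    observed = score_fn(X, y)
    null_scores = [score_fn(X, rng.permutation(y)) for _ in range(n_perm)]
    exceed = sum(s >= observed for s in null_scores)
    return (exceed + 1) / (n_perm + 1)
```

Calibration can be checked exactly as the entry describes: under permuted (null) labels the rejection rate should sit close to the nominal significance level.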
arXiv Detail & Related papers (2021-03-30T21:15:39Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.