Sparse Symmetric Tensor Regression for Functional Connectivity Analysis
- URL: http://arxiv.org/abs/2010.14700v1
- Date: Wed, 28 Oct 2020 02:07:39 GMT
- Title: Sparse Symmetric Tensor Regression for Functional Connectivity Analysis
- Authors: Da Xu
- Abstract summary: We propose a sparse symmetric tensor regression that further reduces the number of free parameters and achieves superior performance over symmetrized and ordinary CP regression.
We apply the proposed method to a study of Alzheimer's disease (AD) and normal ageing from the Berkeley Aging Cohort Study (BACS) and detect two regions of interest that have been identified as important to AD.
- Score: 13.482969034243581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor regression models, such as CP regression and Tucker regression, have
many successful applications in neuroimaging analysis where the covariates are
of ultrahigh dimensionality and possess complex spatial structures. The
high-dimensional covariate arrays, also known as tensors, can be approximated
by low-rank structures and fit into the generalized linear models. The
resulting tensor regression achieves a significant reduction in dimensionality
while remaining efficient in estimation and prediction. Brain functional
connectivity is an essential measure of brain activity and has shown
significant association with neurological disorders such as Alzheimer's
disease. The symmetric nature of functional connectivity is a property that has
not been explored in previous tensor regression models. In this work, we
propose a sparse symmetric tensor regression that further reduces the number of
free parameters and achieves superior performance over symmetrized and ordinary
CP regression, under a variety of simulation settings. We apply the proposed
method to a study of Alzheimer's disease (AD) and normal ageing from the
Berkeley Aging Cohort Study (BACS) and detect two regions of interest that have
been identified as important to AD.
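For context on the parameter savings: with a scalar response and a symmetric $p \times p$ connectivity matrix as covariate, a rank-$R$ symmetric CP coefficient matrix $B = \sum_{r=1}^{R} \lambda_r \beta_r \beta_r^\top$ has only $R(p+1)$ free parameters instead of $p^2$, and a lasso penalty on the factors $\beta_r$ adds sparsity. The sketch below is a rough, hypothetical illustration of that idea using a plain least-squares loss with gradient descent and a soft-thresholding step; it is not the authors' algorithm (the paper works in the generalized linear model framework), and all function names and hyperparameters are assumptions.

```python
import numpy as np

def fit_sparse_symmetric_cp(X, y, rank=2, lam_l1=0.01, lr=1e-3, n_iter=2000, seed=0):
    # X: (n, p, p) symmetric connectivity matrices; y: (n,) responses.
    # Coefficient matrix is modeled as B = sum_r w_r * b_r b_r^T, with an
    # L1 (lasso) proximal step on the factors b_r to induce sparsity.
    n, p, _ = X.shape
    rng = np.random.default_rng(seed)
    factors = rng.normal(scale=0.1, size=(rank, p))  # rows are the beta_r
    w = np.ones(rank)                                # lambda_r scale weights
    alpha = 0.0                                      # intercept

    for _ in range(n_iter):
        B = np.einsum("r,ri,rj->ij", w, factors, factors)   # low-rank symmetric B
        resid = alpha + np.einsum("nij,ij->n", X, B) - y    # prediction residual

        G = np.einsum("n,nij->ij", resid, X) / n            # dLoss/dB (symmetric)
        grad_f = 2.0 * w[:, None] * (factors @ G)           # dLoss/db_r = 2 w_r G b_r
        grad_w = np.einsum("ri,ij,rj->r", factors, G, factors)  # dLoss/dw_r = b_r^T G b_r
        factors -= lr * grad_f
        w -= lr * grad_w
        alpha -= lr * resid.mean()

        # soft-thresholding enforces sparsity in the factor loadings
        factors = np.sign(factors) * np.maximum(np.abs(factors) - lr * lam_l1, 0.0)

    return alpha, w, factors

# Hypothetical usage on simulated data with a sparse rank-1 truth.
rng = np.random.default_rng(1)
n, p = 300, 12
X = rng.normal(size=(n, p, p)); X = 0.5 * (X + X.transpose(0, 2, 1))
b = np.zeros(p); b[:3] = 1.0                        # only 3 active nodes
y = np.einsum("nij,ij->n", X, np.outer(b, b)) + 0.1 * rng.normal(size=n)
alpha, w, factors = fit_sparse_symmetric_cp(X, y, rank=1)
```

On such simulated data, the recovered factor loadings typically concentrate on the active nodes, mirroring the role of sparsity in localizing regions of interest.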
Related papers
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z)
- Deep Geometric Learning with Monotonicity Constraints for Alzheimer's Disease Progression [8.923442084735075]
Alzheimer's disease (AD) is a devastating neurodegenerative condition that precedes progressive and irreversible dementia.
Deep learning-based approaches that handle data variability and sparsity have yet to consider the inherent geometric properties of the data.
This study proposes a novel geometric learning approach that models longitudinal MRI biomarkers and cognitive scores.
arXiv Detail & Related papers (2023-10-05T07:14:34Z)
- Bayesian longitudinal tensor response regression for modeling neuroplasticity [0.0]
A major interest in longitudinal neuroimaging studies involves investigating voxel-level neuroplasticity due to treatment and other factors across visits.
We propose a novel Bayesian tensor response regression approach for longitudinal imaging data, which pools information across spatially-distributed voxels.
The proposed method is able to infer individual-level neuroplasticity, allowing for examination of personalized disease or recovery trajectories.
arXiv Detail & Related papers (2023-09-12T18:48:18Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has established a connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HD).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (GMSOLAS) penalty function, wherein the weights are obtained from the unpenalised f-HD maximum-likelihood estimators.
arXiv Detail & Related papers (2022-08-10T19:17:45Z)
- Robust High-Dimensional Regression with Coefficient Thresholding and its Application to Imaging Data Analysis [7.640041402805495]
It is important to develop statistical techniques that can analyze high-dimensional data in the presence of both the complex dependence and the possible outliers found in real-world imaging data.
arXiv Detail & Related papers (2021-09-30T05:29:54Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization [34.235007566913396]
Modern deep learning models employ considerably more parameters than required to fit the training data. Whereas conventional statistical wisdom suggests such models should drastically overfit, in practice these models generalize remarkably well.
An emerging paradigm for describing this unexpected behavior is in terms of a double descent curve.
We provide a precise high-dimensional analysis of generalization with the Neural Tangent Kernel, which characterizes the behavior of wide neural networks with gradient descent.
arXiv Detail & Related papers (2020-08-15T20:55:40Z)
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation functions.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L_1$ and $L_2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)
- Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as their stability and stronger performance relative to Euclidean counterparts.
arXiv Detail & Related papers (2020-06-15T08:23:20Z)
- Generalisation error in learning with random features and the hidden manifold model [23.71637173968353]
We study generalised linear regression and classification for a synthetically generated dataset.
We consider the high-dimensional regime, using the replica method from statistical physics.
We show how to obtain the so-called double descent behaviour for logistic regression, with a peak at the interpolation threshold.
We discuss the role played by correlations in the data generated by the hidden manifold model; a minimal numerical sketch of double descent follows this list.
arXiv Detail & Related papers (2020-02-21T14:49:41Z)
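The last entry above concerns double descent in random-features models. As a purely illustrative aside, and not the paper's replica-method analysis, the following sketch uses an assumed linear teacher, Gaussian data, ReLU random features, and min-norm least squares (all sizes and noise levels are arbitrary assumptions); it typically exhibits a test-error peak near the interpolation threshold, where the number of random features equals the number of training samples.

```python
import numpy as np

def double_descent_curve(n=200, d=40, noise=0.1, seed=0,
                         widths=(20, 60, 120, 180, 200, 220, 400, 1000)):
    # Test error of min-norm least squares on ReLU random features,
    # swept through the interpolation threshold (width == n samples).
    rng = np.random.default_rng(seed)
    w_star = rng.normal(size=d) / np.sqrt(d)         # hidden linear teacher
    X_tr = rng.normal(size=(n, d))
    y_tr = X_tr @ w_star + noise * rng.normal(size=n)
    X_te = rng.normal(size=(2000, d))
    y_te = X_te @ w_star

    errs = {}
    for m in widths:
        W = rng.normal(size=(d, m)) / np.sqrt(d)     # fixed random first layer
        phi_tr = np.maximum(X_tr @ W, 0.0)           # ReLU random features
        phi_te = np.maximum(X_te @ W, 0.0)
        a, *_ = np.linalg.lstsq(phi_tr, y_tr, rcond=None)  # min-norm fit
        errs[m] = float(np.mean((phi_te @ a - y_te) ** 2))
    return errs

print(double_descent_curve())  # error typically peaks near width == n
```

Sweeping past the peak, the min-norm solution in the overparameterized regime becomes smoother and the test error descends again, which is the qualitative behaviour the paper analyzes exactly with the replica method.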
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.