CP Degeneracy in Tensor Regression
- URL: http://arxiv.org/abs/2010.13568v1
- Date: Thu, 22 Oct 2020 16:08:44 GMT
- Title: CP Degeneracy in Tensor Regression
- Authors: Ya Zhou, Raymond K. W. Wong and Kejun He
- Abstract summary: To handle high dimensionality in tensor linear regression, CANDECOMP/PARAFAC (CP) low-rank constraints are often imposed on the coefficient tensor in (penalized) $M$-estimation; we show that the resulting optimization may not be attainable.
This is closely related to a phenomenon, called CP degeneracy, in low-rank approximation problems.
- Score: 11.193867567895353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor linear regression is an important and useful tool for analyzing tensor
data. To deal with high dimensionality, CANDECOMP/PARAFAC (CP) low-rank
constraints are often imposed on the coefficient tensor parameter in the
(penalized) $M$-estimation. However, we show that the corresponding
optimization may not be attainable, and when this happens, the estimator is not
well-defined. This is closely related to a phenomenon, called CP degeneracy, in
low-rank tensor approximation problems. In this article, we provide useful
results on CP degeneracy in tensor regression problems. In addition, we provide
a general penalized strategy as a solution to overcome CP degeneracy. The
asymptotic properties of the resulting estimator are also studied. Numerical
experiments are conducted to illustrate our findings.
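The CP-constrained estimation discussed in the abstract can be sketched in a minimal form. Below is an illustrative rank-1 alternating-least-squares fit for a matrix-valued (order-2) coefficient, assuming a simple i.i.d. Gaussian design; this is a toy sketch, not the paper's actual method, and it omits the penalized strategy the authors propose. All variable names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 5, 4

# Ground-truth rank-1 coefficient matrix B = u v^T (an order-2 CP model)
u_true, v_true = rng.normal(size=p), rng.normal(size=q)
B_true = np.outer(u_true, v_true)

# Tensor (here: matrix) linear regression: y_i = <B, X_i> + noise
X = rng.normal(size=(n, p, q))
y = np.einsum('ipq,pq->i', X, B_true) + 0.1 * rng.normal(size=n)

# Alternating least squares under the rank-1 CP constraint:
# with v fixed, y_i = u^T (X_i v) is linear in u, and vice versa.
u, v = rng.normal(size=p), rng.normal(size=q)
for _ in range(50):
    Zu = np.einsum('ipq,q->ip', X, v)          # features for u given v
    u = np.linalg.lstsq(Zu, y, rcond=None)[0]
    Zv = np.einsum('ipq,p->iq', X, u)          # features for v given u
    v = np.linalg.lstsq(Zv, y, rcond=None)[0]

B_hat = np.outer(u, v)                         # sign ambiguity cancels in the product
rel_err = np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true)
print(rel_err)
```

In well-conditioned settings like this toy one the relative error is small; CP degeneracy concerns the cases where the constrained optimum is not attained at all, which this sketch does not exhibit.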
Related papers
- Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We analyze the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time series forecasting.
We validate our theory across a variety of high dimensional data.
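As a toy illustration of the in- versus out-of-sample risk gap for ridge regression (using i.i.d. Gaussian data rather than the correlated samples the paper analyzes; all constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 100, 20, 1.0
X = rng.normal(size=(n, d))
beta = rng.normal(size=d) / np.sqrt(d)
y = X @ beta + 0.5 * rng.normal(size=n)

# Ridge estimator: beta_hat = (X^T X + lam I)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# In-sample (training) risk
in_risk = np.mean((y - X @ beta_hat) ** 2)

# Out-of-sample risk on fresh i.i.d. test data
X_te = rng.normal(size=(1000, d))
y_te = X_te @ beta + 0.5 * rng.normal(size=1000)
out_risk = np.mean((y_te - X_te @ beta_hat) ** 2)
print(in_risk, out_risk)
```

The paper's contribution is characterizing these risks exactly when train and test points are correlated, which this i.i.d. sketch does not capture.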
arXiv Detail & Related papers (2024-08-08T17:27:29Z) - Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms, iterative hard thresholding (IHT) and Riemannian gradient descent (RGD), to efficiently find low-rank solutions with provable error bounds.
We establish the linear convergence rate of both IHT and RGD.
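A minimal sketch of iterative hard thresholding, specialized here to low-rank matrix recovery for brevity (the paper works with tensor-train decompositions of higher-order tensors); the Gaussian measurement operator and all constants are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
p, r, n = 20, 2, 2000

# Ground-truth rank-r matrix and noiseless linear measurements y_i = <A_i, M>
M = rng.normal(size=(p, r)) @ rng.normal(size=(r, p))
A = rng.normal(size=(n, p, p)) / np.sqrt(n)   # near-isometric Gaussian operator
y = np.einsum('npq,pq->n', A, M)

def hard_threshold(X, r):
    # Project onto the set of rank-r matrices via truncated SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# IHT: gradient step on the least-squares loss, then rank projection
X = np.zeros((p, p))
eta = 1.0
for _ in range(200):
    resid = np.einsum('npq,pq->n', A, X) - y
    grad = np.einsum('n,npq->pq', resid, A)
    X = hard_threshold(X - eta * grad, r)

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)
```

With enough measurements relative to the intrinsic dimension, the operator satisfies a restricted isometry property and the iterates contract linearly toward the truth, mirroring the linear convergence rate the entry mentions.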
arXiv Detail & Related papers (2024-06-10T03:51:38Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Off-policy estimation of linear functionals: Non-asymptotic theory for
semi-parametric efficiency [59.48096489854697]
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures.
We prove non-asymptotic upper bounds on the mean-squared error of such procedures.
We establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds.
arXiv Detail & Related papers (2022-09-26T23:50:55Z) - Tensor Recovery Based on A Novel Non-convex Function Minimax Logarithmic
Concave Penalty Function [5.264776812468168]
In this paper, we propose a new non-convex Minimax Logarithmic Concave Penalty (MLCP) function.
The proposed function is generalized to tensor cases, along with a weighted variant.
It is proved that the sequence generated by the proposed algorithm has finite length and converges globally to a critical point.
arXiv Detail & Related papers (2022-06-25T12:26:53Z) - On Convergence of Training Loss Without Reaching Stationary Points [62.41370821014218]
We show that neural network weight variables do not converge to stationary points where the gradient of the loss function vanishes.
We propose a new perspective based on the ergodic theory of dynamical systems.
arXiv Detail & Related papers (2021-10-12T18:12:23Z) - Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
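The clipped-gradient idea can be sketched for the simplest case, streaming mean estimation under heavy-tailed noise. This is an illustrative toy with an arbitrary clipping level and step-size schedule, not the paper's algorithm or its analysis conditions.

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean = 3.0
# Heavy-tailed stream: Student-t noise with 2.5 degrees of freedom
stream = true_mean + rng.standard_t(df=2.5, size=5000)

# Streaming estimation by clipped stochastic gradient descent on the
# squared loss; the gradient at sample x is (theta - x), clipped at tau.
theta, tau = 0.0, 5.0
for t, x in enumerate(stream, start=1):
    g = np.clip(theta - x, -tau, tau)   # clip the per-sample gradient
    theta -= (1.0 / t) * g              # decaying step size

print(theta)
```

Clipping bounds the influence of extreme samples, which is what makes the estimator robust to heavy tails; the symmetric noise here keeps the clipped update unbiased for the true mean.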
arXiv Detail & Related papers (2021-08-25T21:30:27Z) - SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for
Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z) - An Optimal Statistical and Computational Framework for Generalized
Tensor Estimation [10.899518267165666]
This paper describes a flexible framework for low-rank tensor estimation problems.
It includes many important instances from applications in computational imaging, genomics, and network analysis.
arXiv Detail & Related papers (2020-02-26T01:54:35Z) - Partially Observed Dynamic Tensor Response Regression [17.930417764563106]
In modern data science, dynamic tensor data is prevailing in numerous applications.
We develop a regression model with a partially observed dynamic tensor as a predictor.
We illustrate the efficacy of our proposed method using simulations and two real applications.
arXiv Detail & Related papers (2020-02-22T17:14:10Z) - On Recoverability of Randomly Compressed Tensors with Low CP Rank [29.00634848772122]
We show that if the number of measurements is on the same order of magnitude as that of the model parameters, then the tensor is recoverable.
Our proof is based on deriving a restricted isometry property (R.I.P.) under the CPD model via set covering techniques.
arXiv Detail & Related papers (2020-01-08T04:44:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.