Nonparametric Trace Regression in High Dimensions via Sign Series
Representation
- URL: http://arxiv.org/abs/2105.01783v1
- Date: Tue, 4 May 2021 22:20:00 GMT
- Title: Nonparametric Trace Regression in High Dimensions via Sign Series
Representation
- Authors: Chanwoo Lee, Lexin Li, Hao Helen Zhang, and Miaoyan Wang
- Abstract summary: We develop a framework for nonparametric trace regression models via structured sign series representations of high dimensional functions.
In the context of matrix completion, our framework leads to a substantially richer model based on what we coin as the "sign rank" of a matrix.
- Score: 13.37650464374017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning of matrix-valued data has recently surged in a range of scientific
and business applications. Trace regression is a widely used method to model
effects of matrix predictors and has shown great success in matrix learning.
However, nearly all existing trace regression solutions rely on two
assumptions: (i) a known functional form of the conditional mean, and (ii) a
global low-rank structure in the entire range of the regression function, both
of which may be violated in practice. In this article, we relax these
assumptions by developing a general framework for nonparametric trace
regression models via structured sign series representations of high
dimensional functions. The new model embraces both linear and nonlinear trace
effects, and enjoys rank invariance to order-preserving transformations of the
response. In the context of matrix completion, our framework leads to a
substantially richer model based on what we coin as the "sign rank" of a
matrix. We show that the sign series can be statistically characterized by
weighted classification tasks. Based on this connection, we propose a learning
reduction approach to learn the regression model via a series of classifiers,
and develop a parallelizable computation algorithm to implement sign series
aggregations. We establish the excess risk bounds, estimation error rates, and
sample complexities. Our proposal provides a broad nonparametric paradigm to
many important matrix learning problems, including matrix regression, matrix
completion, multi-task learning, and compressed sensing. We demonstrate the
advantages of our method through simulations and two applications, one on a
brain connectivity study and the other on high-rank image completion.
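The learning-reduction idea can be illustrated with a minimal numpy sketch. The toy data, feature choice, and hand-rolled weighted logistic classifier below are all our own stand-ins (not the authors' implementation): a high-rank matrix is built as a monotone transform of a rank-1 matrix, each sign level is estimated by a weighted classification, and the estimated signs are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix completion: a high-rank matrix that is a monotone
# (order-preserving) transform of a rank-1 matrix, ~50% observed.
n = 30
u = rng.uniform(0.5, 2.0, n)
v = rng.uniform(0.5, 2.0, n)
A = np.tanh(np.outer(u, v) - 1.0)
mask = rng.random((n, n)) < 0.5
rows, cols = np.nonzero(mask)
y = A[rows, cols]

# One-hot row/column features for each entry (an illustrative choice).
def featurize(r, c):
    X = np.zeros((r.size, 2 * n))
    X[np.arange(r.size), r] = 1.0
    X[np.arange(r.size), n + c] = 1.0
    return X

X = featurize(rows, cols)

# Minimal weighted logistic regression via gradient descent; any
# weighted classifier could be substituted here.
def fit_weighted_logit(X, z, w, lr=1.0, steps=600):
    beta = np.zeros(X.shape[1])
    t01 = (z + 1.0) / 2.0  # map labels {-1, 1} to {0, 1}
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        beta -= lr * (X.T @ (w * (p - t01))) / w.sum()
    return beta

# Sign series: one weighted classification task per level pi, with
# weights |y - pi| downweighting entries near the level.
levels = np.linspace(-0.5, 0.8, 8)
betas = [
    fit_weighted_logit(X, np.where(y > pi, 1.0, -1.0), np.abs(y - pi))
    for pi in levels
]

# Aggregate the estimated signs to reconstruct the regression function.
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
X_all = featurize(ii.ravel(), jj.ravel())
signs = np.stack([np.sign(X_all @ b) for b in betas])
A_hat = signs.mean(axis=0).reshape(n, n)
```

In this toy setup each thresholded sign matrix equals sign(log u_i + log v_j - c) for a level-dependent constant c, so every level is realizable by an additive row-plus-column classifier and the averaged signs recover the ordering of the entries.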
Related papers
- Meta-Learning with Generalized Ridge Regression: High-dimensional Asymptotics, Optimality and Hyper-covariance Estimation [14.194212772887699]
We consider meta-learning within the framework of high-dimensional random-effects linear models.
We show the precise behavior of the predictive risk for a new test task when the data dimension grows proportionally to the number of samples per task.
We propose and analyze an estimator of the inverse covariance of the random regression coefficients based on data from the training tasks.
arXiv Detail & Related papers (2024-03-27T21:18:43Z) - The Decimation Scheme for Symmetric Matrix Factorization [0.0]
Matrix factorization is an inference problem that has acquired importance due to its vast range of applications.
We study this extensive-rank problem, extending the alternative 'decimation' procedure that we recently introduced.
We introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization.
arXiv Detail & Related papers (2023-07-31T10:53:45Z) - Theoretical Characterization of the Generalization Performance of
Overfitted Meta-Learning [70.52689048213398]
This paper studies the performance of overfitted meta-learning under a linear regression model with Gaussian features.
We find new and interesting properties that do not exist in single-task linear regression.
Our analysis suggests that benign overfitting is more significant and easier to observe when the noise and the diversity/fluctuation of the ground truth of each training task are large.
arXiv Detail & Related papers (2023-04-09T20:36:13Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Graph Polynomial Convolution Models for Node Classification of
Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Robust Regularized Low-Rank Matrix Models for Regression and
Classification [14.698622796774634]
We propose a framework for matrix variate regression models based on a rank constraint, vector regularization (e.g., sparsity), and a general loss function.
We show that the algorithm is guaranteed to converge, and all accumulation points of the algorithm have estimation errors of order $O(1/\sqrt{n})$ asymptotically, essentially attaining the minimax rate.
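As a generic illustration of rank-constrained matrix regression, here is a projected-gradient (iterative hard thresholding) sketch under a toy setup of our own choosing; the step size, dimensions, and loop count are illustrative, and this is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r, m = 8, 8, 2, 400  # matrix dims, true rank, sample size (toy values)

# Trace-regression data: y_i = <X_i, B_true> + noise, with B_true of rank r.
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
Xs = rng.standard_normal((m, p, q))
y = np.einsum("ipq,pq->i", Xs, B_true) + 0.1 * rng.standard_normal(m)

def project_rank(B, r):
    """Project onto matrices of rank at most r via truncated SVD."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

# Projected gradient descent on the least-squares loss.
B = np.zeros((p, q))
for _ in range(300):
    resid = np.einsum("ipq,pq->i", Xs, B) - y
    grad = np.einsum("i,ipq->pq", resid, Xs) / m
    B = project_rank(B - 0.4 * grad, r)

rel_err = np.linalg.norm(B - B_true) / np.linalg.norm(B_true)
```

The rank projection enforces the hard rank constraint each step; a vector penalty such as sparsity could be added to the loss before the gradient step.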
arXiv Detail & Related papers (2022-05-14T18:03:48Z) - Deep Learning Approach for Matrix Completion Using Manifold Learning [3.04585143845864]
This paper introduces a new latent variable model for the data matrix that combines linear and nonlinear models.
We design a novel deep-neural-network-based matrix completion algorithm to address both linear and nonlinear relations among entries of the data matrix.
arXiv Detail & Related papers (2020-12-11T01:01:54Z) - Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z) - Partial Trace Regression and Low-Rank Kraus Decomposition [9.292155894591874]
We introduce the partial-trace regression model, a family of linear mappings from matrix-valued inputs to matrix-valued outputs.
We propose a framework for learning partial trace regression models from data by taking advantage of the so-called low-rank Kraus representation of completely positive maps.
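For intuition, a Kraus representation writes a completely positive map as Phi(X) = sum_k A_k X A_k^T, and the Kraus rank (the number of terms k) controls the map's complexity. A small numpy sketch over real matrices, with dimensions and names of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 4, 2  # matrix dimension and Kraus rank (toy values)

# A completely positive map in (real) Kraus form: Phi(X) = sum_k A_k X A_k^T.
# Learning the A_k from matrix-valued input/output pairs is the
# low-rank parameterization such models exploit.
A_list = [rng.standard_normal((d, d)) for _ in range(r)]

def phi(X):
    return sum(A @ X @ A.T for A in A_list)

# Complete positivity in action: a PSD input maps to a PSD output.
B = rng.standard_normal((d, d))
X_psd = B @ B.T  # positive semi-definite by construction
Y = phi(X_psd)
eigs = np.linalg.eigvalsh(Y)  # all eigenvalues should be nonnegative
```
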
arXiv Detail & Related papers (2020-07-02T07:21:22Z) - FLAMBE: Structural Complexity and Representation Learning of Low Rank
MDPs [53.710405006523274]
This work focuses on the representation learning question: how can we learn such features?
Under the assumption that the underlying (unknown) dynamics correspond to a low rank transition matrix, we show how the representation learning question is related to a particular non-linear matrix decomposition problem.
We develop FLAMBE, which engages in exploration and representation learning for provably efficient RL in low rank transition models.
arXiv Detail & Related papers (2020-06-18T19:11:18Z) - Semiparametric Nonlinear Bipartite Graph Representation Learning with
Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.