Quasi-parametric rates for Sparse Multivariate Functional Principal
Components Analysis
- URL: http://arxiv.org/abs/2212.09434v1
- Date: Mon, 19 Dec 2022 13:17:57 GMT
- Title: Quasi-parametric rates for Sparse Multivariate Functional Principal
Components Analysis
- Authors: Ryad Belhakem
- Abstract summary: We show that the eigenelements can be expressed as the solution to an optimization problem.
We establish a minimax lower bound on the mean square reconstruction error of the eigenelement, which proves that the procedure has an optimal variance in the minimax sense.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work aims to give non-asymptotic results for estimating the first
principal component of a multivariate random process. We first define the
covariance function and the covariance operator in the multivariate case. We
then define a projection operator. This operator can be seen as a
reconstruction step from the raw data in the functional data analysis context.
Next, we show that the eigenelements can be expressed as the solution to an
optimization problem, and we introduce the LASSO variant of this optimization
problem and the associated plugin estimator. Finally, we assess the estimator's
accuracy. We establish a minimax lower bound on the mean square reconstruction
error of the eigenelement, which proves that the procedure has an optimal
variance in the minimax sense.
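As a rough illustration of the LASSO-penalized eigenvalue problem described above (not the paper's exact estimator), the sketch below runs a soft-thresholded power iteration to recover a sparse leading eigenvector of the empirical covariance of curves discretized on a common grid; stacking the components of a multivariate process into one vector is a simplification, and the penalty level `lam` is a hypothetical tuning parameter.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_first_pc(X, lam=0.1, n_iter=200):
    """Heuristic sparse leading eigenvector of the empirical covariance.
    X: (n_samples, n_grid) curves on a common grid; lam: l1 level.
    A stand-in for the paper's LASSO variant, not its actual procedure."""
    Xc = X - X.mean(axis=0)            # center the observations
    S = Xc.T @ Xc / X.shape[0]         # empirical covariance matrix
    u = np.random.default_rng(0).normal(size=S.shape[1])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        v = soft_threshold(S @ u, lam) # power step + sparsity prox
        norm = np.linalg.norm(v)
        if norm == 0:                  # lam too large: everything zeroed
            return v
        u = v / norm
    return u

# Toy usage: noisy curves sharing one smooth, localized component.
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 100)
phi = np.exp(-((grid - 0.3) ** 2) / 0.01)  # localized true component
phi /= np.linalg.norm(phi)
X = rng.normal(size=(200, 1)) * phi + 0.1 * rng.normal(size=(200, 100))
u_hat = sparse_first_pc(X, lam=0.02)
print("alignment:", abs(u_hat @ phi))      # close to 1 when recovered
```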
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
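The summary gives no implementation details; as a generic reference point for the object being modified, here is a textbook one-nearest-neighbor matching estimator of an average treatment effect, not the paper's modified variant (the distance metric and data-generating process are our illustrative choices).

```python
import numpy as np

def nn_matching_ate(X, y, d):
    """1-NN matching estimate of the average treatment effect.
    X: (n, p) covariates, y: (n,) outcomes, d: (n,) 0/1 treatment.
    Each unit's missing counterfactual is imputed by its nearest
    neighbor (Euclidean distance) in the opposite treatment arm."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    d = np.asarray(d).astype(bool)
    cf = np.empty_like(y)                 # imputed counterfactuals
    for i in range(len(y)):
        pool = ~d if d[i] else d          # opposite treatment arm
        dists = np.linalg.norm(X[pool] - X[i], axis=1)
        cf[i] = y[pool][np.argmin(dists)] # 1-NN donor outcome
    effects = np.where(d, y - cf, cf - y) # treated minus control
    return effects.mean()

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
d = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))  # confounded assignment
y = X @ np.array([1.0, -0.5]) + 2.0 * d + rng.normal(size=n)
print(nn_matching_ate(X, y, d))                 # roughly 2
```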
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Moreau-Yoshida Variational Transport: A General Framework For Solving Regularized Distributional Optimization Problems [3.038642416291856]
We consider a general optimization problem of minimizing a composite objective functional defined over a class of probability distributions.
We propose a novel method, dubbed as Moreau-Yoshida Variational Transport (MYVT), for solving the regularized distributional optimization problem.
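MYVT builds on Moreau-Yosida regularization, which replaces a nonsmooth penalty by a smooth envelope computed through its proximal operator; the sketch below shows only that smoothing ingredient for the l1 norm, not the variational-transport algorithm itself.

```python
import numpy as np

def prox_l1(x, gamma):
    """Proximal operator of gamma * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def moreau_envelope_l1(x, gamma):
    """Moreau-Yosida envelope of the l1 norm:
       env(x) = min_z ||z||_1 + ||z - x||^2 / (2 * gamma),
    evaluated in closed form at the proximal point. The envelope is a
    smooth (gradient-Lipschitz) lower approximation of the norm."""
    z = prox_l1(x, gamma)
    return np.abs(z).sum() + np.sum((z - x) ** 2) / (2 * gamma)

x = np.array([3.0, -0.2, 0.0, 1.5])
for gamma in (0.1, 1.0, 5.0):
    print(gamma, moreau_envelope_l1(x, gamma), np.abs(x).sum())
# As gamma -> 0 the envelope approaches the original nonsmooth norm.
```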
arXiv Detail & Related papers (2023-07-31T01:14:42Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
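As a reminder of the baseline being compared against, here is a minimal NCE fit of an unnormalized 1-D Gaussian with a learned log-normalizer; the model family, noise distribution, and optimizer are our illustrative choices, not the paper's compositional method.

```python
import numpy as np
from scipy.optimize import minimize

def nce_loss(theta, x_data, x_noise, log_noise_pdf):
    """NCE loss for the unnormalized model
    log phi(x) = -0.5 * (x - mu)^2 / s2 - c, with c acting as a learned
    log normalizing constant. theta = (mu, log s2, c). Data are labeled
    1, noise samples 0, and we average the two logistic losses."""
    mu, log_s2, c = theta
    log_model = lambda x: -0.5 * (x - mu) ** 2 / np.exp(log_s2) - c
    log_odds = lambda x: log_model(x) - log_noise_pdf(x)
    # -log sigmoid(z) = log(1 + exp(-z)), computed stably:
    loss_data = np.logaddexp(0.0, -log_odds(x_data)).mean()
    loss_noise = np.logaddexp(0.0, log_odds(x_noise)).mean()
    return loss_data + loss_noise

rng = np.random.default_rng(0)
x_data = rng.normal(loc=2.0, scale=0.7, size=2000)
x_noise = rng.normal(loc=0.0, scale=3.0, size=2000)  # known noise law
log_noise_pdf = lambda x: (-0.5 * (x / 3.0) ** 2
                           - np.log(3.0 * np.sqrt(2 * np.pi)))
res = minimize(nce_loss, x0=np.zeros(3),
               args=(x_data, x_noise, log_noise_pdf))
mu, log_s2, c = res.x
print(mu, np.exp(log_s2) ** 0.5)   # approximately 2.0 and 0.7
```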
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
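The cited connection runs through the top eigenspace of a graph Laplacian built from augmentations; below is a schematic spectral-embedding computation on a toy augmentation graph, where the affinity construction and bandwidth are entirely our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "augmentation graph": nodes are points, edges connect points that
# are plausibly augmentations of one another (here: small distance).
X = np.concatenate([rng.normal(0, 0.3, (50, 2)),
                    rng.normal(3, 0.3, (50, 2))])
dist2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-dist2 / 0.5) * (dist2 < 1.0)   # thresholded affinity matrix
np.fill_diagonal(W, 0.0)

deg = W.sum(1)
L_sym = np.eye(len(X)) - W / np.sqrt(np.outer(deg, deg))  # normalized Laplacian

# The top eigenspace of I - L_sym is spanned by the bottom eigenvectors
# of the Laplacian; these give the spectral representation of the nodes.
evals, evecs = np.linalg.eigh(L_sym)   # eigenvalues sorted ascending
embedding = evecs[:, :2]               # 2-D spectral embedding
print(np.round(evals[:4], 3))          # one near-zero value per cluster
```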
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Faithful Heteroscedastic Regression with Neural Networks [2.2835610890984164]
Parametric methods that employ neural networks for parameter maps can capture complex relationships in the data.
We make two simple modifications to the optimization to produce a heteroscedastic model with mean estimates that are provably as accurate as those from its homoscedastic counterpart.
Our approach provably retains the accuracy of an equally flexible mean-only model while also offering best-in-class variance calibration.
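The two modifications are not spelled out in the summary; one plausible realization of the stated goal (an assumption on our part, patterned after the abstract) trains the mean head with plain squared error while the variance head sees the Gaussian NLL with the mean detached.

```python
import torch
from torch import nn

class HeteroNet(nn.Module):
    """Two-headed network predicting a mean and a log-variance."""
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mean_head = nn.Linear(d_hidden, 1)
        self.logvar_head = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.mean_head(h), self.logvar_head(h)

def faithful_nll(mu, logvar, y):
    """Stop-gradient sketch, not necessarily the paper's exact recipe:
    the mean head gets the plain MSE gradient (so its estimates match an
    equally flexible homoscedastic model), while the variance head fits
    the Gaussian NLL on detached residuals."""
    mean_term = (y - mu) ** 2                              # trains the mean
    var_term = (y - mu.detach()) ** 2 / logvar.exp() + logvar  # trains the variance
    return 0.5 * (mean_term + var_term).mean()

net = HeteroNet(d_in=3)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(256, 3), torch.randn(256, 1)
mu, logvar = net(x)
faithful_nll(mu, logvar, y).backward()
opt.step()
```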
arXiv Detail & Related papers (2022-12-18T22:34:42Z)
- Data-Driven Combinatorial Optimization with Incomplete Information: a Distributionally Robust Optimization Approach [0.0]
We analyze linear optimization problems where the cost vector is not known a priori, but is only observable through a finite data set.
The goal is to find a procedure that transforms the data set into an estimate of the expected value of the objective function.
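As a toy illustration of turning a data set of cost vectors into an estimate of the expected objective, the sketch below contrasts the plain sample average with a variance-regularized surrogate, a standard approximation to distributionally robust evaluation over a small divergence ball; the ambiguity radius `rho` and problem sizes are our illustrative choices, not the paper's construction.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_items, k, n_obs = 6, 3, 40
C = rng.normal(loc=rng.uniform(1, 2, n_items), scale=0.5,
               size=(n_obs, n_items))   # observed cost vectors

def estimate(costs, rho=0.0):
    """Per-item estimate of expected cost: sample mean plus an optional
    deviation term, a common surrogate for a DRO evaluation over a
    small divergence ball around the empirical distribution."""
    return costs.mean(0) + rho * costs.std(0, ddof=1) / np.sqrt(len(costs))

for rho in (0.0, 2.0):                  # rho = 0.0: plain sample average
    c_hat = estimate(C, rho)
    # Combinatorial decision: pick the k items with the lowest
    # estimated total cost (brute force over all subsets).
    best = min(combinations(range(n_items), k),
               key=lambda S: c_hat[list(S)].sum())
    print(rho, best, round(c_hat[list(best)].sum(), 3))
```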
arXiv Detail & Related papers (2021-05-28T23:17:35Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
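For reference, mirror descent with the entropy mirror map on the probability simplex reduces to multiplicative-weight updates; the minimal sketch below applies it to a toy linear objective, not to the paper's parallel MDP scheme.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step, n_iter):
    """Entropic mirror descent (multiplicative weights) on the simplex:
    x_{t+1} is proportional to x_t * exp(-step * grad(x_t)). With the
    KL Bregman divergence this is the classic mirror descent update."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad(x)
        w = x * np.exp(-step * (g - g.max()))  # shift for stability
        x = w / w.sum()
        avg += x
    return avg / n_iter                        # averaged iterate

# Toy convex objective on the simplex: f(x) = <c, x>, minimized at the
# vertex with the smallest cost.
c = np.array([0.7, 0.2, 0.9, 0.4])
x = mirror_descent_simplex(lambda x: c, np.full(4, 0.25),
                           step=0.5, n_iter=500)
print(np.round(x, 3))   # mass concentrates on coordinate 1
```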
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
- Learning Invariant Representations using Inverse Contrastive Loss [34.93395633215398]
We introduce a class of losses for learning representations that are invariant to some extraneous variable of interest.
We show that if the extraneous variable is binary, then optimizing the inverse contrastive loss (ICL) is equivalent to optimizing a regularized MMD divergence.
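To make the divergence term concrete, here is a plain (biased) RBF-kernel estimate of the squared MMD between the two groups of representations; the bandwidth and toy data are our choices, and the regularization term of ICL is omitted.

```python
import numpy as np

def rbf_mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples X and Y under an
    RBF kernel: mean k(X,X) + mean k(Y,Y) - 2 * mean k(X,Y)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
Z0 = rng.normal(0.0, 1.0, (200, 5))   # representations with s = 0
Z1 = rng.normal(0.5, 1.0, (200, 5))   # representations with s = 1
print(rbf_mmd2(Z0, Z1))               # > 0: groups distinguishable
print(rbf_mmd2(Z0, rng.normal(0.0, 1.0, (200, 5))))  # near 0: invariant
```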
arXiv Detail & Related papers (2021-02-16T18:29:28Z)
- Isotonic regression with unknown permutations: Statistics, computation, and adaptation [13.96377843988598]
We study the minimax risk of estimation (in empirical $L_2$ loss) and the fundamental limits of adaptation (quantified by the adaptivity index).
We provide a Mirsky partition estimator that is minimax optimal while also achieving the smallest adaptivity index possible for polynomial-time procedures.
In a complementary direction, we show that natural modifications of existing estimators fail to satisfy at least one of the three desiderata: optimal worst-case statistical performance, computational efficiency, and fast adaptation.
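The estimators discussed here build on vanilla isotonic regression; below is a compact pool-adjacent-violators (PAVA) routine for the known-permutation case, leaving the unknown-permutation layer of the paper aside.

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: the least-squares projection of y onto
    the cone of nondecreasing sequences."""
    y = np.asarray(y, dtype=float)
    vals, wts = [], []                 # block means and block sizes
    for v in y:
        vals.append(v)
        wts.append(1)
        # Merge blocks from the right while monotonicity is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            wts[-2] = w
            vals.pop()
            wts.pop()
    return np.repeat(vals, wts)        # expand block means back out

y = np.array([1.0, 3.0, 2.0, 2.0, 5.0, 4.0])
print(pava(y))   # [1.    2.333 2.333 2.333 4.5   4.5  ]
```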
arXiv Detail & Related papers (2020-09-05T22:17:51Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
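The pivotal property means the penalty level need not scale with the unknown noise level; the sketch below runs an ad hoc proximal (sub)gradient loop on the square-root Lasso objective $\|y - Xb\|_2/\sqrt{n} + \lambda\|b\|_1$, with step size and iteration budget chosen heuristically (this is not a production solver).

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sqrt_lasso(X, y, lam, n_iter=2000):
    """Proximal (sub)gradient descent on
       ||y - X b||_2 / sqrt(n) + lam * ||b||_1.
    The fit term's gradient is -X^T r / (sqrt(n) * ||r||), defined while
    the residual r is nonzero."""
    n, p = X.shape
    step = np.sqrt(n) / np.linalg.norm(X, 2) ** 2  # heuristic step size
    b = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ b
        nr = np.linalg.norm(r)
        if nr < 1e-12:                 # interpolating fit: stop
            break
        g = -X.T @ r / (np.sqrt(n) * nr)
        b = soft_threshold(b - step * g, step * lam)
    return b

rng = np.random.default_rng(0)
n, p = 100, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.normal(size=n)
# Pivotal choice: lam ~ sqrt(log(p) / n), no noise level required.
b_hat = sqrt_lasso(X, y, lam=np.sqrt(np.log(p) / n))
print(np.round(b_hat[:5], 2))
```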
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.