Support recovery and sup-norm convergence rates for sparse pivotal
estimation
- URL: http://arxiv.org/abs/2001.05401v3
- Date: Thu, 3 Sep 2020 16:58:48 GMT
- Title: Support recovery and sup-norm convergence rates for sparse pivotal
estimation
- Authors: Mathurin Massias and Quentin Bertrand and Alexandre Gramfort and
Joseph Salmon
- Abstract summary: In high-dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
- Score: 79.13844065776928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In high-dimensional sparse regression, pivotal estimators are estimators for
which the optimal regularization parameter is independent of the noise level.
The canonical pivotal estimator is the square-root Lasso, formulated along with
its derivatives as a "non-smooth + non-smooth" optimization problem. Modern
techniques to solve these include smoothing the data-fitting term, to benefit
from fast and efficient proximal algorithms. In this work we show minimax sup-norm
convergence rates for non-smoothed and smoothed, single-task and multitask
square-root Lasso-type estimators. Thanks to our theoretical analysis, we
provide guidelines on how to set the smoothing hyperparameter, and
illustrate on synthetic data the benefits of such guidelines.
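For context, here is a minimal sketch of the single-task square-root Lasso and of one common smoothed (concomitant) reformulation; the exact formulation, scaling conventions, and constants used in the paper may differ:
$$\hat\beta \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \frac{\|y - X\beta\|_2}{\sqrt{n}} + \lambda \|\beta\|_1, \qquad (\hat\beta, \hat\sigma) \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p,\ \sigma \ge \sigma_0} \frac{\|y - X\beta\|_2^2}{2 n \sigma} + \frac{\sigma}{2} + \lambda \|\beta\|_1,$$
where $\sigma_0 > 0$ is the smoothing hyperparameter: the smoothed problem has a data-fitting term that is smooth in $\beta$ and recovers the square-root Lasso as $\sigma_0 \to 0$. Below is a hypothetical Python sketch of alternating minimization for the smoothed problem; the function name, the use of scikit-learn's Lasso for the $\beta$-step, and the fixed iteration count are illustrative assumptions, not the authors' implementation.
```python
import numpy as np
from sklearn.linear_model import Lasso

def smoothed_sqrt_lasso(X, y, lam, sigma_0, n_iter=20):
    """Alternating minimization sketch for a smoothed (concomitant) square-root Lasso."""
    n, p = X.shape
    beta = np.zeros(p)
    # Initialize sigma with the residual scale at beta = 0, floored at sigma_0.
    sigma = max(np.linalg.norm(y) / np.sqrt(n), sigma_0)
    for _ in range(n_iter):
        # beta-step: with sigma fixed, the problem is a Lasso with penalty sigma * lam
        # (scikit-learn's Lasso minimizes ||y - X b||^2 / (2 n) + alpha * ||b||_1).
        beta = Lasso(alpha=sigma * lam, fit_intercept=False).fit(X, y).coef_
        # sigma-step: closed form, clipped below at the smoothing level sigma_0.
        sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), sigma_0)
    return beta, sigma
```
Here $\sigma_0$ acts as a floor on the estimated noise level; how to choose it is the question the paper's guidelines address.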
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Stochastic Mirror Descent for Large-Scale Sparse Recovery [13.500750042707407]
We discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters.
We show that the proposed algorithm attains the optimal convergence rate for the estimation error under weak assumptions on the regressor distribution.
arXiv Detail & Related papers (2022-10-23T23:23:23Z) - Distributed Estimation and Inference for Semi-parametric Binary Response Models [8.309294338998539]
This paper studies the maximum score estimator of a semi-parametric binary choice model under a distributed computing environment.
An intuitive divide-and-conquer estimator is computationally expensive and restricted by a non-regular constraint on the number of machines.
arXiv Detail & Related papers (2022-10-15T23:06:46Z) - Adaptive Zeroth-Order Optimisation of Nonconvex Composite Objectives [1.7640556247739623]
We analyze algorithms for zeroth-order optimization of nonconvex composite objectives, focusing on the dependence on dimensionality.
This is achieved by exploiting the low-dimensional structure of the decision set using the mirror descent method with an entropy-like distance generating function.
To improve the gradient estimation, we replace the classic sampling method based on Rademacher vectors and show that the mini-batch method copes with non-Euclidean geometry.
arXiv Detail & Related papers (2022-08-09T07:36:25Z) - Non-Convex Optimization with Certificates and Fast Rates Through Kernel
Sums of Squares [68.8204255655161]
We consider potentially non-convex optimization problems.
In this paper, we propose an algorithm that achieves close to optimal a priori computational guarantees.
arXiv Detail & Related papers (2022-04-11T09:37:04Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic non-convex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Adaptive estimation of a function from its Exponential Radon Transform
in presence of noise [0.0]
We propose a locally adaptive strategy for estimating a function from its Exponential Radon Transform (ERT) data.
We build a non-parametric kernel-type estimator and show that for a class of functions comprising a wide Sobolev regularity scale, our proposed strategy follows the minimax optimal rate up to a $\log n$ factor.
arXiv Detail & Related papers (2020-11-13T12:54:09Z)