Adaptive Online Estimation of Piecewise Polynomial Trends
- URL: http://arxiv.org/abs/2010.00073v1
- Date: Wed, 30 Sep 2020 19:30:28 GMT
- Title: Adaptive Online Estimation of Piecewise Polynomial Trends
- Authors: Dheeraj Baby and Yu-Xiang Wang
- Abstract summary: We consider the framework of non-stationary stochastic optimization with squared error losses and noisy gradient feedback.
Motivated by the theory of non-parametric regression, we introduce a new variational constraint.
We show that the same policy is minimax optimal for several other non-parametric families of interest.
- Score: 23.91519151164528
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider the framework of non-stationary stochastic optimization [Besbes
et al., 2015] with squared error losses and noisy gradient feedback, where the
dynamic regret of an online learner against a time-varying comparator sequence
is studied. Motivated by the theory of non-parametric regression, we
introduce a new variational constraint that enforces the comparator sequence to
belong to a discrete $k^{th}$ order Total Variation ball of radius $C_n$. This
variational constraint models comparators that have piece-wise polynomial
structure, which has many relevant practical applications [Tibshirani, 2014]. By
establishing connections to the theory of wavelet based non-parametric
regression, we design a polynomial time algorithm that achieves the nearly
optimal dynamic regret of $\tilde{O}(n^{\frac{1}{2k+3}}C_n^{\frac{2}{2k+3}})$.
The proposed policy is adaptive to the unknown radius $C_n$. Further, we show
that the same policy is minimax optimal for several other non-parametric
families of interest.
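To make the abstract's quantities concrete, the following is a minimal sketch (assuming the standard trend-filtering convention of Tibshirani, 2014, in which the discrete $k^{th}$ order Total Variation of a comparator sequence is the $\ell_1$ norm of its $(k+1)^{th}$ order differences; the paper's exact normalization of the radius-$C_n$ ball may differ) that computes this quantity and evaluates the stated regret rate, ignoring the polylogarithmic factors hidden by $\tilde{O}$.
```python
import numpy as np

def discrete_tv(theta, k):
    # l1 norm of the (k+1)-th order differences of the sequence theta;
    # a common trend-filtering notion of discrete k-th order Total Variation.
    # The paper's exact scaling (e.g. an extra n^k factor) may differ.
    return float(np.abs(np.diff(theta, n=k + 1)).sum())

def dynamic_regret_rate(n, C_n, k):
    # The nearly optimal rate n^{1/(2k+3)} * C_n^{2/(2k+3)} from the abstract,
    # up to the polylogarithmic factors hidden by the tilde-O notation.
    return n ** (1.0 / (2 * k + 3)) * C_n ** (2.0 / (2 * k + 3))

# A piecewise-linear comparator (k = 1) with a single change in slope:
# its second differences are zero except at the knot, so its TV is small.
theta = np.concatenate([np.linspace(0.0, 1.0, 50),
                        np.linspace(1.0, 0.5, 51)[1:]])
print(discrete_tv(theta, k=1))
print(dynamic_regret_rate(n=len(theta), C_n=1.0, k=1))
```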
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data [17.657917523817243]
We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional optimization problem.
In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches.
We derive rates of convergence in expectation that are of order $\mathcal{O}(\log T/T)$ and $\mathcal{O}(1/T^{1-\iota})$ for any $\iota>0$.
arXiv Detail & Related papers (2024-05-29T19:21:55Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Improving Adaptive Online Learning Using Refined Discretization [44.646191058243645]
We study unconstrained Online Linear Optimization with Lipschitz losses.
Motivated by the pursuit of instance optimality, we propose a new algorithm.
Central to these results is a continuous time approach to online learning.
arXiv Detail & Related papers (2023-09-27T21:54:52Z) - KL-Entropy-Regularized RL with a Generative Model is Minimax Optimal [70.15267479220691]
We consider and analyze the sample complexity of model-free reinforcement learning with a generative model.
Our analysis shows that it is nearly minimax-optimal for finding an $\varepsilon$-optimal policy when $\varepsilon$ is sufficiently small.
arXiv Detail & Related papers (2022-05-27T19:39:24Z) - Single Trajectory Nonparametric Learning of Nonlinear Dynamics [8.438421942654292]
Given a single trajectory of a dynamical system, we analyze the performance of the nonparametric least squares estimator (LSE).
We leverage recently developed information-theoretic methods to establish the optimality of the LSE for nonparametric hypotheses classes.
We specialize our results to a number of scenarios of practical interest, such as Lipschitz dynamics, generalized linear models, and dynamics described by functions in certain classes of Reproducing Kernel Hilbert Spaces (RKHS).
arXiv Detail & Related papers (2022-02-16T19:38:54Z) - $\ell_1$-norm constrained multi-block sparse canonical correlation analysis via proximal gradient descent [0.0]
We propose a proximal gradient descent algorithm to solve the multi-block CCA problem.
We show that the resulting estimate is rate-optimal under suitable assumptions.
We also describe an easy-to-implement deflation procedure to estimate multiple eigenvectors sequentially.
arXiv Detail & Related papers (2022-01-14T03:35:01Z) - Online nonparametric regression with Sobolev kernels [99.12817345416846]
We derive the regret upper bounds on the classes of Sobolev spaces $W_p^\beta(\mathcal{X})$, $p\geq 2$, $\beta>\frac{d}{p}$.
The upper bounds are supported by the minimax regret analysis, which reveals that in the cases $\beta>\frac{d}{2}$ or $p=\infty$ these rates are (essentially) optimal.
arXiv Detail & Related papers (2021-02-06T15:05:14Z) - A polynomial-time algorithm for learning nonparametric causal graphs [18.739085486953698]
The analysis is model-free and does not assume linearity, additivity, independent noise, or faithfulness.
We impose a condition on the residual variances that is closely related to previous work on linear models with equal variances.
arXiv Detail & Related papers (2020-06-22T02:21:53Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z) - Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis [102.29671176698373]
We address the problem of policy evaluation in discounted Markov decision processes, and provide instance-dependent guarantees on the $\ell_\infty$-error under a generative model.
We establish both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms.
arXiv Detail & Related papers (2020-03-16T17:15:28Z)