Scale-free Unconstrained Online Learning for Curved Losses
- URL: http://arxiv.org/abs/2202.05630v1
- Date: Fri, 11 Feb 2022 14:10:35 GMT
- Title: Scale-free Unconstrained Online Learning for Curved Losses
- Authors: Jack J. Mayo, Hédi Hadiji and Tim van Erven
- Abstract summary: We investigate the possibility of adapting simultaneously to the norm $U$ of the comparator and the maximum norm $G$ of the gradients.
Surprisingly, recent results show that no price for adaptivity is needed in the specific case of $1$-Lipschitz losses.
- Score: 1.5147172044848798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A sequence of works in unconstrained online convex optimisation have
investigated the possibility of adapting simultaneously to the norm $U$ of the
comparator and the maximum norm $G$ of the gradients. In full generality,
matching upper and lower bounds are known which show that this comes at the
unavoidable cost of an additive $G U^3$, which is not needed when either $G$ or
$U$ is known in advance. Surprisingly, recent results by Kempka et al. (2019)
show that no such price for adaptivity is needed in the specific case of
$1$-Lipschitz losses like the hinge loss. We follow up on this observation by
showing that there is in fact never a price to pay for adaptivity if we
specialise to any of the other common supervised online learning losses: our
results cover log loss, (linear and non-parametric) logistic regression, square
loss prediction, and (linear and non-parametric) least-squares regression. We
also fill in several gaps in the literature by providing matching lower bounds
with an explicit dependence on $U$. In all cases we obtain scale-free
algorithms, which are suitably invariant under rescaling of the data. Our
general goal is to establish achievable rates without concern for computational
efficiency, but for linear logistic regression we also provide an adaptive
method that is as efficient as the recent non-adaptive algorithm by Agarwal et
al. (2021).
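To make the trade-off concrete, the bounds discussed in the abstract take roughly the following schematic form (a simplified sketch; logarithmic factors in the leading term are omitted and exact constants differ across works):
$$ R_T(u) \;\lesssim\; G U \sqrt{T} \quad \text{(either $G$ or $U$ known in advance)}, \qquad R_T(u) \;\lesssim\; G U \sqrt{T} + G U^3 \quad \text{(both unknown, general Lipschitz losses)}, $$
where $R_T(u)$ denotes the regret against a comparator $u$ with $\|u\| \le U$. The paper's claim is that for the curved losses listed above, the additive $G U^3$ term can be removed entirely while remaining adaptive to both $G$ and $U$.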
Related papers
- Convergence Rate Analysis of LION [54.28350823319057]
LION converges at a rate of $\mathcal{O}(\sqrt{d}\,K^{-1/4})$ in the number of iterations $K$, measured by a gradient-based Karush-Kuhn-Tucker (KKT) stationarity criterion.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
arXiv Detail & Related papers (2024-11-12T11:30:53Z) - Optimal Multiclass U-Calibration Error and Beyond [31.959887895880765]
We consider the problem of online multiclass U-calibration, where a forecaster aims to make sequential distributional predictions over $K$ classes with low U-calibration error.
We show that the optimal U-calibration error is $\Theta(\sqrt{KT})$.
arXiv Detail & Related papers (2024-05-28T20:33:18Z) - Improving Adaptive Online Learning Using Refined Discretization [44.646191058243645]
We study unconstrained Online Linear Optimization with Lipschitz losses.
Motivated by the pursuit of instance optimality, we propose a new algorithm.
Central to these results is a continuous time approach to online learning.
arXiv Detail & Related papers (2023-09-27T21:54:52Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively.
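Here $V_T$ denotes the gradient variation of the loss sequence; a common definition in this line of work (stated here for reference, the paper may use a variant) is
$$ V_T = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \big\| \nabla f_t(x) - \nabla f_{t-1}(x) \big\|^2 . $$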
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - One Step of Gradient Descent is Provably the Optimal In-Context Learner
with One Layer of Linear Self-Attention [31.522320487765878]
Recent works have empirically analyzed in-context learning.
One-layer transformers with linear self-attention learn to implement one step of gradient descent.
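As a minimal illustration of this claim (assuming the in-context task is least-squares regression on example pairs $(x_i, y_i)$, the setting typically studied in this line of work): one step of gradient descent with step size $\eta$ from the initialisation $w_0 = 0$ on $L(w) = \tfrac{1}{2}\sum_i (y_i - w^\top x_i)^2$ gives
$$ w_1 = w_0 - \eta \nabla L(w_0) = \eta \sum_i y_i x_i , \qquad \hat{y}_{\mathrm{query}} = x_{\mathrm{query}}^\top w_1 = \eta \sum_i y_i \, x_{\mathrm{query}}^\top x_i , $$
i.e. a weighted sum of inner products between the query and the context examples, which is exactly the kind of map a single linear self-attention layer can express.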
arXiv Detail & Related papers (2023-07-07T13:09:18Z) - Asymptotic Characterisation of Robust Empirical Risk Minimisation
Performance in the Presence of Outliers [18.455890316339595]
We study robust linear regression in high dimensions, when both the dimension $d$ and the number of data points $n$ diverge with a fixed ratio $\alpha = n/d$, and study a data model that includes outliers.
We provide exact asymptotics for the performance of empirical risk minimisation (ERM) using $\ell_2$-regularised $\ell_2$, $\ell_1$, and Huber losses.
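For reference, the Huber loss interpolating between the $\ell_2$ and $\ell_1$ losses mentioned above is (in its standard form, with threshold $\delta > 0$)
$$ \rho_\delta(z) = \begin{cases} \tfrac{1}{2} z^2 & \text{if } |z| \le \delta, \\ \delta\big(|z| - \tfrac{\delta}{2}\big) & \text{otherwise.} \end{cases} $$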
arXiv Detail & Related papers (2023-05-30T12:18:39Z) - Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement
Learning: Adaptivity and Computational Efficiency [90.40062452292091]
We present the first computationally efficient algorithm for linear bandits with heteroscedastic noise.
Our algorithm is adaptive to the unknown variance of the noise and achieves an $\tilde{O}\big(d \sqrt{\sum_{k=1}^{K} \sigma_k^2} + d\big)$ regret.
We also propose a variance-adaptive algorithm for linear mixture Markov decision processes (MDPs) in reinforcement learning.
arXiv Detail & Related papers (2023-02-21T00:17:24Z) - Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum
Minimization [52.25843977506935]
We propose an adaptive variance-reduction method, called AdaSpider, for $L$-smooth, non-convex functions with a finite-sum structure.
In doing so, we are able to compute an $\epsilon$-stationary point with $\tilde{O}\big(n + \sqrt{n}/\epsilon^2\big)$ calls to the stochastic oracle.
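Here an $\epsilon$-stationary point of a smooth non-convex objective $f$ is, in the usual sense, a point $x$ with
$$ \| \nabla f(x) \| \le \epsilon . $$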
arXiv Detail & Related papers (2022-11-03T14:41:46Z) - Adapting to Misspecification in Contextual Bandits [82.55565343668246]
We introduce a new family of oracle-efficient algorithms for $varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}\,T)$ regret bound for unknown misspecification level $\varepsilon$.
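Here $\varepsilon$-misspecification refers, in the standard formulation, to the best linear predictor approximating the true expected rewards only up to a uniform error $\varepsilon$:
$$ \sup_{x, a} \big| \mathbb{E}[r \mid x, a] - \langle \theta^\ast, \phi(x, a) \rangle \big| \le \varepsilon , $$
where $\phi(x, a) \in \mathbb{R}^d$ are the features (notation introduced here for illustration; the paper's formulation may differ).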
arXiv Detail & Related papers (2021-07-12T21:30:41Z) - An efficient projection neural network for $\ell_1$-regularized logistic
regression [10.517079029721257]
This paper presents a simple projection neural network for $\ell_1$-regularized logistic regression.
The proposed neural network does not require any extra auxiliary variable nor any smooth approximation.
We also investigate the convergence of the proposed neural network using Lyapunov theory and show that it converges to a solution of the problem from any initial value.
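For reference, the underlying optimisation problem is the standard $\ell_1$-regularized logistic regression objective (stated here in a common form; the paper's parametrisation may differ):
$$ \min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \log\!\big(1 + \exp(-y_i \, x_i^\top w)\big) + \lambda \|w\|_1 , \qquad y_i \in \{-1, +1\}, \; \lambda > 0 . $$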
arXiv Detail & Related papers (2021-05-12T06:13:44Z) - Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order structure of the primal solution, which is unknown a priori.
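For reference, the ordered weighted $L_1$ (OWL) penalty is typically defined, for a non-increasing weight vector $w_1 \ge w_2 \ge \dots \ge w_d \ge 0$, as
$$ \Omega_w(\beta) = \sum_{i=1}^{d} w_i \, |\beta|_{[i]} , $$
where $|\beta|_{[1]} \ge |\beta|_{[2]} \ge \dots \ge |\beta|_{[d]}$ are the absolute values of the coefficients sorted in decreasing order; OSCAR corresponds to the particular weight choice $w_i = \lambda_1 + \lambda_2 (d - i)$.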
arXiv Detail & Related papers (2020-06-29T23:35:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.