Worst-case Error Bounds for Online Learning of Smooth Functions
- URL: http://arxiv.org/abs/2502.16388v1
- Date: Sun, 23 Feb 2025 00:43:10 GMT
- Title: Worst-case Error Bounds for Online Learning of Smooth Functions
- Authors: Weian Xie
- Abstract summary: We investigate worst-case error for the online learning of real functions that have certain smoothness constraints. We define a noisy model for the online learning of smooth functions, where the learner may receive incorrect feedback up to $\eta \ge 1$ times, and we prove for all $p, q \ge 2$ and $\eta \ge 1$ that $\operatorname{opt}^{\text{nf}}_{p, \eta}(\mathcal{F}_q)$ is finite.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online learning is a model of machine learning where the learner is trained on sequential feedback. We investigate worst-case error for the online learning of real functions that have certain smoothness constraints. Suppose that $\mathcal{F}_q$ is the class of all absolutely continuous functions $f: [0, 1] \rightarrow \mathbb{R}$ such that $\|f'\|_q \le 1$, and $\operatorname{opt}_p(\mathcal{F}_q)$ is the best possible upper bound on the sum of the $p^{\text{th}}$ powers of absolute prediction errors for any number of trials guaranteed by any learner. We show that for any $\delta, \epsilon \in (0, 1)$, $\operatorname{opt}_{1+\delta} (\mathcal{F}_{1+\epsilon}) = O(\min(\delta, \epsilon)^{-1})$. Combined with the previous results of Kimber and Long (1995) and Geneson and Zhou (2023), we achieve a complete characterization of the values of $p, q \ge 1$ that result in $\operatorname{opt}_p(\mathcal{F}_q)$ being finite, a problem open for nearly 30 years. We study the learning scenarios of smooth functions that also belong to certain special families of functions, such as polynomials. We prove a conjecture by Geneson and Zhou (2023) that it is not any easier to learn a polynomial in $\mathcal{F}_q$ than it is to learn any general function in $\mathcal{F}_q$. We also define a noisy model for the online learning of smooth functions, where the learner may receive incorrect feedback up to $\eta \ge 1$ times, denoting the worst-case error bound as $\operatorname{opt}^{\text{nf}}_{p, \eta} (\mathcal{F}_q)$. We prove that $\operatorname{opt}^{\text{nf}}_{p, \eta} (\mathcal{F}_q)$ is finite if and only if $\operatorname{opt}_p(\mathcal{F}_q)$ is. Moreover, we prove for all $p, q \ge 2$ and $\eta \ge 1$ that $\operatorname{opt}^{\text{nf}}_{p, \eta} (\mathcal{F}_q) = \Theta (\eta)$.
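The quantity $\operatorname{opt}_p(\mathcal{F}_q)$ can be made explicit with a short display. The following is a minimal sketch of the trial model in the style of Kimber and Long (1995), under the assumption that on trial $t$ the learner receives a point $x_t \in [0, 1]$, predicts $\hat{y}_t$, and then observes the true value $f(x_t)$; the symbols $x_t$ and $\hat{y}_t$ are notation introduced here, not taken from the paper:
$$
\operatorname{opt}_p(\mathcal{F}_q) \;=\; \inf_{\text{learner}} \; \sup_{f \in \mathcal{F}_q} \; \sup_{x_1, x_2, \ldots \in [0,1]} \; \sum_{t \ge 1} \bigl|\hat{y}_t - f(x_t)\bigr|^p .
$$
In the noisy variant $\operatorname{opt}^{\text{nf}}_{p, \eta}(\mathcal{F}_q)$, the same game is played except that up to $\eta$ of the revealed values may be incorrect.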
Related papers
- PREM: Privately Answering Statistical Queries with Relative Error
We introduce $\mathsf{PREM}$ (Private Relative Error Multiplicative weight update), a new framework for generating synthetic data that satisfies a relative error guarantee for statistical queries under $(\varepsilon, \delta)$ differential privacy (DP). We complement our algorithm with nearly matching lower bounds.
arXiv Detail & Related papers (2025-02-20T18:32:02Z)
- Quantum Sabotage Complexity
We study $\mathsf{Q}(f_{\mathsf{sab}})$, the quantum query complexity of $f_{\mathsf{sab}}$.
We show that when $f$ is the Indexing function, $\mathsf{Q}(f_{\mathsf{sab}}) = \Theta(\sqrt{\mathsf{sab}})$, ruling out the possibility that $\mathsf{Q}(f_{\mathsf{sab}}) = \Theta(\sqrt{\ldots})$.
arXiv Detail & Related papers (2024-08-22T17:57:58Z)
- Quantum and classical query complexities of functions of matrices
We show that for any continuous function $f(x): [-1, 1] \rightarrow [-1, 1]$, the quantum query complexity of computing $\bra{i} f(A) \ket{j}$ to within $\pm \varepsilon/4$ is lower bounded by $\Omega(\widetilde{\deg}_\varepsilon(f))$.
arXiv Detail & Related papers (2023-11-13T00:45:41Z)
- Online Learning of Smooth Functions
We study the online learning of real-valued functions where the hidden function is known to have certain smoothness properties.
We find new bounds for $\operatorname{opt}_p(\mathcal{F}_q)$ that are sharp up to a constant factor.
In the multi-variable setup, we establish inequalities relating $\operatorname{opt}_p(\mathcal{F}_{q,d})$ to $\operatorname{opt}_p(\mathcal{F}_{q,1})$ and show that $\operatorname{opt}_p(\mathcal{F}_{q,d})\,\ldots$
arXiv Detail & Related papers (2023-01-04T04:05:58Z)
- Cryptographic Hardness of Learning Halfspaces with Massart Noise
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
We show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small.
arXiv Detail & Related papers (2022-07-28T17:50:53Z)
- Low-degree learning and the metric entropy of polynomials
We prove that any (deterministic or randomized) algorithm which learns $\mathscr{F}_{n,d}$ with $L_2$-accuracy $\varepsilon$ requires at least $\Omega\bigl((1 - \sqrt{\varepsilon})\, 2^d \log n\bigr)$ queries, and that $\log \mathsf{M}(\mathscr{F}_{n,d}, \|\cdot\|_{L_2}, \varepsilon)$ satisfies the two-sided estimate $c\,(1 - \varepsilon)\, 2^d \log n \,\le\, \log \mathsf{M}(\mathscr{F}_{n,d}, \|\cdot\|_{L_2}, \varepsilon) \,\le\, \ldots$
arXiv Detail & Related papers (2022-03-17T23:52:08Z)
- Learning low-degree functions from a logarithmic number of random queries
We prove that for any integer $n \in \mathbb{N}$, $d \in \{1, \ldots, n\}$, and any $\varepsilon, \delta \in (0, 1)$, a bounded function $f: \{-1, 1\}^n \to [-1, 1]$ of degree at most $d$ can be learned.
arXiv Detail & Related papers (2021-09-21T13:19:04Z)
- Sharper bounds for online learning of smooth functions of a single variable
We show that $\operatorname{opt}_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-\frac{1}{2}})$, where the constants in the bound do not depend on $q$.
arXiv Detail & Related papers (2021-05-30T23:06:21Z)
- Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm, sbcalg, that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)
- An Optimal Separation of Randomized and Quantum Query Complexity
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell$ sum to at most $c^{\ell} \sqrt{\binom{d}{\ell}}\, (1 + \log n)^{\ell - 1}$, where $n$ is the number of variables, $d$ is the tree depth, and $c > 0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z)
- Agnostic Q-learning with Function Approximation in Deterministic Systems: Tight Bounds on Approximation Error and Sample Complexity
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)