Testing distributional assumptions of learning algorithms
- URL: http://arxiv.org/abs/2204.07196v1
- Date: Thu, 14 Apr 2022 19:10:53 GMT
- Title: Testing distributional assumptions of learning algorithms
- Authors: Ronitt Rubinfeld and Arsen Vasilyan
- Abstract summary: We study the design of tester-learner pairs $(\mathcal{A},\mathcal{T})$.
We show that if the distribution on examples in the data passes the tester $\mathcal{T}$, then one can safely trust the output of the agnostic learner $\mathcal{A}$ on the data.
- Score: 5.204779946147061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are many important high dimensional function classes that have fast
agnostic learning algorithms when strong assumptions on the distribution of
examples can be made, such as Gaussianity or uniformity over the domain. But
how can one be sufficiently confident that the data indeed satisfies the
distributional assumption, so that one can trust in the output quality of the
agnostic learning algorithm? We propose a model by which to systematically
study the design of tester-learner pairs $(\mathcal{A},\mathcal{T})$, such that
if the distribution on examples in the data passes the tester $\mathcal{T}$
then one can safely trust the output of the agnostic learner $\mathcal{A}$ on
the data.
To demonstrate the power of the model, we apply it to the classical problem
of agnostically learning halfspaces under the standard Gaussian distribution
and present a tester-learner pair with a combined run-time of
$n^{\tilde{O}(1/\epsilon^4)}$. This qualitatively matches that of the best
known ordinary agnostic learning algorithms for this task. In contrast, finite
sample Gaussian distribution testers do not exist for the $L_1$ and EMD
distance measures. A key step in the analysis is a novel characterization of
concentration and anti-concentration properties of a distribution whose
low-degree moments approximately match those of a Gaussian. We also use tools
from polynomial approximation theory.
In contrast, we show strong lower bounds on the combined run-times of
tester-learner pairs for the problems of agnostically learning convex sets
under the Gaussian distribution and for monotone Boolean functions under the
uniform distribution over $\{0,1\}^n$. Through these lower bounds we exhibit
natural problems where there is a dramatic gap between standard agnostic
learning run-time and the run-time of the best tester-learner pair.
Related papers
- Agnostically Learning Multi-index Models with Queries [54.290489524576756]
We study the power of query access for the task of agnostic learning under the Gaussian distribution.
We show that query access gives significant runtime improvements over random examples for agnostically learning MIMs.
arXiv Detail & Related papers (2023-12-27T15:50:47Z) - Testable Learning with Distribution Shift [9.036777309376697]
We define a new model called testable learning with distribution shift.
We obtain provably efficient algorithms for certifying the performance of a classifier on a test distribution.
We give several positive results for learning concept classes such as halfspaces, intersections of halfspaces, and decision trees.
arXiv Detail & Related papers (2023-11-25T23:57:45Z) - An Efficient Tester-Learner for Halfspaces [13.13131953359806]
We give the first efficient algorithm for learning halfspaces in the testable learning model recently defined by Rubinfeld and Vasilyan (2023).
In this model, a learner certifies that the accuracy of its output hypothesis is near optimal whenever the training set passes a test.
arXiv Detail & Related papers (2023-02-28T18:51:55Z) - An Improved Analysis of Gradient Tracking for Decentralized Machine Learning [34.144764431505486]
We consider decentralized machine learning over a network where the training data is distributed across $n$ agents.
The agents' common goal is to find a model that minimizes the average of all local loss functions.
We improve the dependency on the network parameter $p$ in the noiseless case.
arXiv Detail & Related papers (2022-02-08T12:58:14Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon})$ samples, a number linear in the high dimension $k$ and nearly linear in the, typically low, rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - Optimal Testing of Discrete Distributions with High Probability [49.19942805582874]
We study the problem of testing discrete distributions with a focus on the high probability regime.
We provide the first algorithms for closeness and independence testing that are sample-optimal, within constant factors.
arXiv Detail & Related papers (2020-09-14T16:09:17Z) - Downsampling for Testing and Learning in Product Distributions [24.48103093661132]
We study distribution-free property testing and learning problems where the unknown probability distribution is a product distribution over $\mathbb{R}^d$.
For many important classes of functions, such as intersections of halfspaces, threshold functions, convex sets, and $k$-alternating functions, known algorithms either have complexity that depends on the support size of the distribution or work only for specific product distributions.
We introduce a general method, which we call downsampling, that resolves these issues.
arXiv Detail & Related papers (2020-07-15T02:46:44Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z) - Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarial-robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture within information-theoretically near-optimal error of $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof of clusters from a Gaussian mixture, which can be captured by the constant-degree Sum of Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.