Adversarially Robust Learning: A Generic Minimax Optimal Learner and
Characterization
- URL: http://arxiv.org/abs/2209.07369v1
- Date: Thu, 15 Sep 2022 15:32:42 GMT
- Title: Adversarially Robust Learning: A Generic Minimax Optimal Learner and
Characterization
- Authors: Omar Montasser, Steve Hanneke, Nathan Srebro
- Abstract summary: We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test-time.
In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro.
- Score: 39.51923275855131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a minimax optimal learner for the problem of learning predictors
robust to adversarial examples at test-time. Interestingly, we find that this
requires new algorithmic ideas and approaches to adversarially robust learning.
In particular, we show, in a strong negative sense, the suboptimality of the
robust learner proposed by Montasser, Hanneke, and Srebro (2019) and a broader
family of learners we identify as local learners. Our results are enabled by
adopting a global perspective, specifically, through a key technical
contribution: the global one-inclusion graph, which may be of independent
interest, that generalizes the classical one-inclusion graph due to Haussler,
Littlestone, and Warmuth (1994). Finally, as a byproduct, we identify a
dimension characterizing qualitatively and quantitatively what classes of
predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem
due to Montasser et al. (2019), and closes a (potentially) infinite gap between
the established upper and lower bounds on the sample complexity of
adversarially robust learning.
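For intuition about the one-inclusion graph machinery the abstract builds on, below is a minimal Python sketch of the classical (local) one-inclusion graph predictor of Haussler, Littlestone, and Warmuth (1994); the paper's contribution is a global generalization that this sketch does not implement. The function names and the degree-based edge orientation are illustrative assumptions (the minimax-optimal strategy orients edges to minimize the maximum out-degree), and the sketch assumes the observed labels are realizable by the class.

```python
def realizable_patterns(hypotheses, points):
    """Distinct labelings of `points` achievable by some h in `hypotheses`."""
    return {tuple(h(x) for x in points) for h in hypotheses}

def one_inclusion_predict(patterns, labels, test_idx):
    """
    patterns : set of label tuples over n+1 points (vertices of the graph)
    labels   : dict {point index: observed label} for the n labeled points
    test_idx : coordinate of the single unlabeled test point
    """
    # Keep only vertices consistent with the observed training labels.
    consistent = [p for p in patterns
                  if all(p[i] == y for i, y in labels.items())]
    if len(consistent) == 1:            # the completion is forced
        return consistent[0][test_idx]
    # Otherwise the two consistent vertices differ only at test_idx, i.e. they
    # form an edge of the one-inclusion graph. Orient it toward the
    # lower-degree endpoint (a simple heuristic; the optimal learner uses an
    # orientation minimizing the maximum out-degree) and predict its label.
    def degree(v):
        return sum(1 for u in patterns
                   if sum(a != b for a, b in zip(u, v)) == 1)
    return min(consistent, key=degree)[test_idx]

# Example: threshold classifiers evaluated on the points {0, 1, 2}.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(4)]
patterns = realizable_patterns(hypotheses, points=[0, 1, 2])
print(one_inclusion_predict(patterns, labels={0: 0, 1: 0}, test_idx=2))  # predicts 0
```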
Related papers
- Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method [53.170053108447455]
Ensemble learning is a method that leverages weak learners to produce a strong learner.
We design a smooth and convex objective function that leverages the concept of margin, making the strong learner more discriminative.
We then compare our algorithm with random forests of ten times the size and other classical methods across numerous datasets.
arXiv Detail & Related papers (2024-08-06T03:42:38Z) - Learnability Gaps of Strategic Classification [68.726857356532]
We focus on a fundamental question: the learnability gaps between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z) - Regularization and Optimal Multiclass Learning [10.168670899305232]
This work characterizes the role of regularization in perhaps the simplest setting where empirical risk minimization fails: multiclass learning with arbitrary label sets.
Using one-inclusion graphs (OIGs), we exhibit optimal learning algorithms that dovetail with tried-and-true algorithmic principles.
arXiv Detail & Related papers (2023-09-24T16:49:55Z) - Agnostic Multi-Robust Learning Using ERM [19.313739782029185]
A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of exponentially-many perturbations that an adversary might make to a test-time natural example.
In contrast, the attacker only needs to find one successful perturbation.
We introduce a novel multi-group setting and a corresponding multi-robust learning problem; a minimal code sketch of the learner/attacker asymmetry described above appears after this list.
arXiv Detail & Related papers (2023-03-15T21:30:14Z) - A Moment-Matching Approach to Testable Learning and a New
Characterization of Rademacher Complexity [15.746613321517282]
We give a powerful new approach for developing algorithms for testable learning using tools from moment matching and metric distances in probability.
Surprisingly, we show that the information-theoretic complexity of testable learning is tightly characterized by the Rademacher complexity of the concept class.
arXiv Detail & Related papers (2022-11-23T21:29:51Z) - Improved Robust Algorithms for Learning with Discriminative Feature
Feedback [21.58493386054356]
Discriminative Feature Feedback is a protocol for interactive learning based on feature explanations that are provided by a human teacher.
We provide new robust interactive learning algorithms for the Discriminative Feature Feedback model.
arXiv Detail & Related papers (2022-09-08T12:11:12Z) - Learning Rates for Nonconvex Pairwise Learning [7.33244617309908]
Pairwise learning is receiving increasing attention since it covers many important machine learning tasks defined over pairs of examples, such as metric learning, AUC maximization, and ranking.
We establish improved learning rates for nonconvex pairwise learning.
arXiv Detail & Related papers (2021-11-09T16:12:20Z) - Outlier-Robust Learning of Ising Models Under Dobrushin's Condition [57.89518300699042]
We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
Our main result is to provide the first computationally efficient robust learning algorithm for this problem with near-optimal error guarantees.
arXiv Detail & Related papers (2021-02-03T18:00:57Z) - Instance-Dependent Complexity of Contextual Bandits and Reinforcement
Learning: A Disagreement-Based Perspective [104.67295710363679]
In the classical multi-armed bandit problem, instance-dependent algorithms attain improved performance on "easy" problems with a gap between the best and second-best arm.
We introduce a family of complexity measures that are both sufficient and necessary to obtain instance-dependent regret bounds.
We then introduce new oracle-efficient algorithms which adapt to the gap whenever possible, while also attaining the minimax rate in the worst case.
arXiv Detail & Related papers (2020-10-07T01:33:06Z) - Provably Efficient Exploration for Reinforcement Learning Using
Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
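As referenced in the Agnostic Multi-Robust Learning entry above, the following is a minimal Python sketch of the learner/attacker asymmetry in adversarially robust learning: the learner's robust loss penalizes a mistake on any allowed perturbation, while the attacker succeeds by exhibiting just one. The perturbation set and names below are hypothetical illustrations, not taken from any of the listed papers.

```python
from typing import Callable, Iterable, Optional

def robust_loss(h: Callable, y, perturbations: Iterable) -> int:
    """Learner's 0/1 robust loss: 1 if *any* allowed perturbation is misclassified."""
    return int(any(h(z) != y for z in perturbations))

def find_attack(h: Callable, y, perturbations: Iterable) -> Optional[object]:
    """Attacker's view: succeed by finding just *one* misclassified perturbation."""
    return next((z for z in perturbations if h(z) != y), None)

# Example: a threshold classifier and a small, discretized perturbation set U(x).
h = lambda z: int(z >= 0.5)
y = 1
U_x = [0.4, 0.6, 0.8]           # hypothetical perturbation set around x = 0.6
print(robust_loss(h, y, U_x))   # 1: the perturbation 0.4 crosses the threshold
print(find_attack(h, y, U_x))   # 0.4
```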