Distribution Learnability and Robustness
- URL: http://arxiv.org/abs/2406.17814v1
- Date: Tue, 25 Jun 2024 05:09:54 GMT
- Title: Distribution Learnability and Robustness
- Authors: Shai Ben-David, Alex Bie, Gautam Kamath, Tosca Lechner
- Abstract summary: We show that realizable learnability of a class of probability distributions does not imply its agnostic learnability.
We go on to examine what type of data corruption can disrupt the learnability of a distribution class.
- Score: 13.45619583182489
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine the relationship between learnability and robust (or agnostic) learnability for the problem of distribution learning. We show that, contrary to other learning settings (e.g., PAC learning of function classes), realizable learnability of a class of probability distributions does not imply its agnostic learnability. We go on to examine what types of data corruption can disrupt the learnability of a distribution class and against what types of corruption such learnability is robust. We show that realizable learnability of a class of distributions implies its robust learnability with respect to additive corruption, but not with respect to subtractive corruption. We also explore related implications in the context of compression schemes and differentially private learnability.
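To fix terminology, the following is a minimal sketch of the standard definitions as they commonly appear in the distribution learning literature; the constants and the exact formulations used in the paper may differ.

```latex
% Realizable learnability of a class C: for every D in C, given enough
% i.i.d. samples from D, the learner outputs \hat{D} such that
\[
  \Pr\!\big[\, d_{\mathrm{TV}}(\hat{D}, D) \le \epsilon \,\big] \;\ge\; 1 - \delta .
\]
% Agnostic (robust) learnability: samples come from an arbitrary D, and
% the learner must compete with the best approximation from the class:
\[
  \Pr\!\big[\, d_{\mathrm{TV}}(\hat{D}, D) \le c \cdot \mathrm{opt}(D,\mathcal{C}) + \epsilon \,\big] \;\ge\; 1 - \delta,
  \quad \text{where } \mathrm{opt}(D,\mathcal{C}) = \inf_{D' \in \mathcal{C}} d_{\mathrm{TV}}(D, D') .
\]
% Additive corruption: an adversary mixes in extra probability mass, so
% samples are drawn from
\[
  \tilde{D} = (1-\eta)\, D + \eta\, Q, \qquad D \in \mathcal{C},\ Q \text{ arbitrary},
\]
% whereas subtractive corruption removes an eta-fraction of D's mass
% (equivalently, conditions D on an event of probability 1 - eta).
```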
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions? [5.706184197639971]
Self-supervised learning in computer vision aims to leverage the inherent structure and relationships within data to learn meaningful representations.
This work investigates the robustness of learned representations of self-supervised learning approaches focusing on distribution shifts and image corruptions.
arXiv Detail & Related papers (2023-07-31T13:07:56Z)
- Impossibility of Characterizing Distribution Learning -- a simple solution to a long-standing problem [11.39656079889941]
We show that there is no notion of dimension that characterizes the sample complexity of learning distribution classes.
In particular, we show that there is no notion of characterizing dimension (or characterization of learnability) for any of the distribution learning tasks considered.
arXiv Detail & Related papers (2023-04-18T03:23:39Z)
- Is Self-Supervised Learning More Robust Than Supervised Learning? [29.129681691651637]
Self-supervised contrastive learning is a powerful tool to learn visual representation without labels.
We conduct a series of robustness tests to quantify the behavioral differences between contrastive learning and supervised learning.
Under pre-training corruptions, we find contrastive learning vulnerable to patch shuffling and pixel intensity change, yet less sensitive to dataset-level distribution change.
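As a concrete illustration of the two pre-training corruptions named in this summary, here is a hypothetical sketch (not code from the paper); the patch size and shift magnitude are arbitrary choices.

```python
import numpy as np

def patch_shuffle(img: np.ndarray, patch: int = 8, seed: int = 0) -> np.ndarray:
    """Split the image into non-overlapping patches and permute them."""
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    img = img[: gh * patch, : gw * patch]            # drop ragged borders
    grid = img.reshape(gh, patch, gw, patch, c).swapaxes(1, 2)
    flat = grid.reshape(gh * gw, patch, patch, c)
    flat = flat[rng.permutation(gh * gw)]            # shuffle the patches
    grid = flat.reshape(gh, gw, patch, patch, c).swapaxes(1, 2)
    return grid.reshape(gh * patch, gw * patch, c)

def intensity_shift(img: np.ndarray, delta: float = 0.2) -> np.ndarray:
    """Add a constant to every pixel and clip back to [0, 1]."""
    return np.clip(img.astype(np.float64) + delta, 0.0, 1.0)

img = np.random.default_rng(1).random((32, 32, 3))   # toy image in [0, 1]
corrupted = intensity_shift(patch_shuffle(img), delta=0.2)
```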
arXiv Detail & Related papers (2022-06-10T17:58:00Z)
- Don't Explain Noise: Robust Counterfactuals for Randomized Ensembles [50.81061839052459]
We formalize the generation of robust counterfactual explanations as a probabilistic problem.
We show the link between the robustness of ensemble models and the robustness of base learners.
Our method achieves high robustness with only a small increase in the distance from counterfactual explanations to their initial observations.
arXiv Detail & Related papers (2022-05-27T17:28:54Z)
- Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness [30.301460075475344]
Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction of the data distribution.
Our proposed notion of barely robust learning requires robustness with respect to a "larger" perturbation set.
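As a rough formalization of this notion (a paraphrase under assumed notation, with perturbation set $\mathcal{U}$ and robustness level $\beta$; not the paper's verbatim statement):

```latex
% A predictor h is beta-barely-robust for a distribution D and a
% perturbation set U if it is simultaneously correct on all allowed
% perturbations for at least a beta-fraction of the data:
\[
  \Pr_{(x,y)\sim D}\big[\, \forall z \in \mathcal{U}(x):\ h(z) = y \,\big] \;\ge\; \beta,
  \qquad \beta \ll 1 .
\]
% The boosting result then asks for barely robust learning with respect
% to a "larger" perturbation set than the one targeted after boosting.
```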
arXiv Detail & Related papers (2022-02-11T22:07:36Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
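A toy sketch of this reduction (hypothetical, not the paper's algorithm): learn a same-class predictor from pairwise labels alone, then fix a single reference point to obtain a binary classifier that agrees with the hidden labels up to a global flip. The outer-product pair features are an assumption, chosen so that a linear model can represent the same-class function of a linear decision boundary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # hidden class labels

# Pairwise training data: the target says whether the two points share
# a class; features are the outer products x_a * x'_b, under which the
# same-class function of a linear boundary is itself linear.
i = rng.integers(0, 400, size=2000)
j = rng.integers(0, 400, size=2000)
pairs = np.einsum("ni,nj->nij", X[i], X[j]).reshape(2000, -1)
same = (y[i] == y[j]).astype(int)

g = LogisticRegression(max_iter=1000).fit(pairs, same)

# Fix one reference point x0: h(x) = "is x in x0's class?" traces out a
# decision boundary that agrees with y up to a possible label flip.
x0 = X[0]
feats = np.einsum("ni,j->nij", X, x0).reshape(400, -1)
h = g.predict(feats)
acc = max((h == y).mean(), ((1 - h) == y).mean())
print(f"accuracy up to label flip: {acc:.2f}")
```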
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
- On the Sample Complexity of Adversarial Multi-Source PAC Learning [46.24794665486056]
In a single-source setting, an adversary with the power to corrupt a fixed fraction of the training data can prevent PAC-learnability.
We show that, surprisingly, the same is not true in the multi-source setting, where the adversary can arbitrarily corrupt a fixed fraction of the data sources.
Our results also show that in a cooperative learning setting sharing data with other parties has provable benefits, even if some participants are malicious.
arXiv Detail & Related papers (2020-02-24T17:19:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.