On Learnability under General Stochastic Processes
- URL: http://arxiv.org/abs/2005.07605v3
- Date: Fri, 11 Mar 2022 14:57:01 GMT
- Title: On Learnability under General Stochastic Processes
- Authors: A. Philip Dawid and Ambuj Tewari
- Abstract summary: Statistical learning under general non-iid processes is less mature.
We provide two natural notions of learnability of a function class under a general process.
Our results hold for both binary classification and regression.
- Score: 20.22409095000365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical learning theory under independent and identically distributed
(iid) sampling and online learning theory for worst case individual sequences
are two of the best developed branches of learning theory. Statistical learning
under general non-iid stochastic processes is less mature. We provide two
natural notions of learnability of a function class under a general stochastic
process. We show that both notions are in fact equivalent to online
learnability. Our results hold for both binary classification and regression.
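To make this concrete, here is a minimal sketch of one natural learnability criterion under a stochastic process; the notation (loss $\ell$, learner's predictions $\hat f_t$, process $(X_t, Y_t)_{t \ge 1}$) is assumed for illustration and may differ from the paper's exact definitions:
$$\mathbb{E}\left[ \frac{1}{n} \sum_{t=1}^{n} \ell\big(\hat f_t(X_t), Y_t\big) \;-\; \inf_{f \in \mathcal{F}} \frac{1}{n} \sum_{t=1}^{n} \ell\big(f(X_t), Y_t\big) \right] \to 0 \quad \text{as } n \to \infty.$$
Under such a reading, a class $\mathcal{F}$ is learnable under the process if some learner drives this expected average regret to zero; the equivalence result then ties process-learnability to online learnability, which for binary classification is characterized by finite Littlestone dimension.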
Related papers
- Collaborative Learning with Different Labeling Functions [7.228285747845779]
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions.
We show that, when the data distributions satisfy a weaker realizability assumption, sample-efficient learning is still feasible.
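As a hedged sketch of the collaborative objective (notation illustrative, not taken from the paper): given distributions $D_1, \dots, D_n$, the goal is a classifier $h$ (or one per distribution) satisfying
$$\mathrm{err}_{D_i}(h) := \Pr_{(x,y) \sim D_i}\big[ h(x) \neq y \big] \le \epsilon \quad \text{for every } i \in \{1, \dots, n\},$$
ideally using fewer total samples than learning the $n$ tasks separately.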
arXiv Detail & Related papers (2024-02-16T04:32:22Z) - Beyond Expectations: Learning with Stochastic Dominance Made Practical [88.06211893690964]
Stochastic dominance models risk-averse preferences for decision making with uncertain outcomes.
Despite being theoretically appealing, applications of stochastic dominance in machine learning have been scarce.
We first generalize the dominance concept to enable feasible comparisons between any arbitrary pair of random variables.
We then develop a simple and efficient approach for finding the optimal solution in terms of dominance.
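For orientation, classical first-order stochastic dominance (standard notation, not this paper's generalization) says $X$ dominates $Y$ when
$$F_X(t) \le F_Y(t) \ \text{ for all } t \in \mathbb{R}, \quad \text{equivalently} \quad \mathbb{E}[u(X)] \ge \mathbb{E}[u(Y)] \ \text{ for all nondecreasing } u.$$
Because this yields only a partial order, arbitrary pairs of random variables may be incomparable, which is presumably the gap the generalization above is meant to close.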
arXiv Detail & Related papers (2024-02-05T03:21:23Z) - Supervised learning with probabilistic morphisms and kernel mean
embeddings [0.0]
I propose a generative model of supervised learning that unifies two approaches to the problem.
I address two measurability problems, which have been ignored in statistical learning theory.
I present a variant of Vapnik-Stefanyuk's regularization method for solving ill-posed problems.
arXiv Detail & Related papers (2023-05-10T17:54:21Z) - Learning versus Refutation in Noninteractive Local Differential Privacy [133.80204506727526]
We study two basic statistical tasks in non-interactive local differential privacy (LDP): learning and refutation.
Our main result is a complete characterization of the sample complexity of PAC learning for non-interactive LDP protocols.
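For context, non-interactive LDP means each user privatizes their own data once, with no further back-and-forth; the standard $\epsilon$-local randomizer condition (not specific to this paper) is
$$\frac{\Pr[R(x) = z]}{\Pr[R(x') = z]} \le e^{\epsilon} \quad \text{for all inputs } x, x' \text{ and all outputs } z.$$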
arXiv Detail & Related papers (2022-10-26T03:19:24Z) - Agree to Disagree: Diversity through Disagreement for Better
Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data but diversity (disagreement) on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z) - Realizable Learning is All You Need [21.34668631009594]
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory.
We give the first model-independent framework explaining the equivalence of realizable and agnostic learnability.
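As a reminder of the two settings being equated (standard PAC definitions, not this paper's notation): realizable learning assumes some $h^* \in \mathcal{H}$ with zero error, while agnostic learning drops that assumption and only requires competing with the best hypothesis in the class:
$$\mathrm{err}_D(\hat h) \le \inf_{h \in \mathcal{H}} \mathrm{err}_D(h) + \epsilon \quad \text{with probability at least } 1 - \delta.$$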
arXiv Detail & Related papers (2021-11-08T19:00:00Z) - From Undecidability of Non-Triviality and Finiteness to Undecidability
of Learnability [0.0]
We show that there is no general-purpose procedure for rigorously evaluating whether newly proposed models indeed successfully learn from data.
For PAC binary classification, uniform and universal online learning, and exact learning through teacher-learner interactions, learnability is in general undecidable.
There is no one-size-fits-all algorithm for deciding whether a machine learning model can be successful.
arXiv Detail & Related papers (2021-06-02T18:00:04Z) - Evading the Simplicity Bias: Training a Diverse Set of Models Discovers
Solutions with Superior OOD Generalization [93.8373619657239]
Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features.
This simplicity bias can explain their lack of robustness out of distribution (OOD).
We demonstrate that the simplicity bias can be mitigated and OOD generalization improved.
arXiv Detail & Related papers (2021-05-12T12:12:24Z) - Don't Just Blame Over-parametrization for Over-confidence: Theoretical
Analysis of Calibration in Binary Classification [58.03725169462616]
We show theoretically that over-parametrization is not the only reason for over-confidence.
We prove that logistic regression is inherently over-confident in the realizable, under-parametrized setting.
Perhaps surprisingly, we also show that over-confidence is not always the case.
arXiv Detail & Related papers (2021-02-15T21:38:09Z) - A Theory of Universal Learning [26.51949485387526]
We show that there are only three possible rates of universal learning.
We show that the learning curve of any given concept class decays at either an exponential, linear, or arbitrarily slow rate.
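Schematically, the trichotomy for the optimal universal learning curve reads (constants $c, C > 0$ depend on the class; this display is a summary, not the paper's formal statement):
$$\mathbb{E}\,\mathrm{err}(\hat h_n) \le e^{-cn}, \qquad \mathbb{E}\,\mathrm{err}(\hat h_n) \le \frac{C}{n}, \qquad \text{or the curve decays slower than any prescribed rate.}$$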
arXiv Detail & Related papers (2020-11-09T15:10:32Z) - On Computation and Generalization of Generative Adversarial Imitation
Learning [134.17122587138897]
Generative Adversarial Imitation Learning (GAIL) is a powerful and practical approach for learning sequential decision-making policies.
This paper investigates the theoretical properties of GAIL.
arXiv Detail & Related papers (2020-01-09T00:40:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.