Transductive Learning Is Compact
- URL: http://arxiv.org/abs/2402.10360v3
- Date: Wed, 30 Oct 2024 06:33:33 GMT
- Title: Transductive Learning Is Compact
- Authors: Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng
- Abstract summary: We show a compactness result holding broadly across supervised learning with a general class of loss functions.
For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail.
We conjecture that larger gaps are possible for the agnostic case.
- Score: 10.168670899305232
- Abstract: We demonstrate a compactness result holding broadly across supervised learning with a general class of loss functions: Any hypothesis class $H$ is learnable with transductive sample complexity $m$ precisely when all of its finite projections are learnable with sample complexity $m$. We prove that this exact form of compactness holds for realizable and agnostic learning with respect to any proper metric loss function (e.g., any norm on $\mathbb{R}^d$) and any continuous loss on a compact space (e.g., cross-entropy, squared loss). For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail, and provide matching upper and lower bounds of a factor of 2 on the extent to which such sample complexities can differ. We conjecture that larger gaps are possible for the agnostic case. Furthermore, invoking the equivalence between sample complexities in the PAC and transductive models (up to lower order factors, in the realizable case) permits us to directly port our results to the PAC model, revealing an almost-exact form of compactness holding broadly in PAC learning.
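The core statement can be written schematically; the following is a sketch of the abstract's equivalence, using $m(H)$ for the transductive sample complexity of $H$ and $H|_S$ for the projection of $H$ onto a finite subset $S$ of the domain $X$ (notation assumed, not taken from the paper):

```latex
% Sketch of the compactness statement (notation assumed):
%   m(H)  = transductive sample complexity of H
%   H|_S  = projection of H onto a finite domain subset S
m(H) \le m
\quad\Longleftrightarrow\quad
m\bigl(H|_S\bigr) \le m \quad \text{for every finite } S \subseteq X.
```

For realizable learning with improper metric losses, the abstract reports that the two sides of this equivalence can differ, but by a factor of at most 2.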
Related papers
- Is Transductive Learning Equivalent to PAC Learning? [0.9012198585960443]
We show that the PAC and transductive models are essentially equivalent for agnostic binary classification.
We leave as an intriguing open question whether our second result can be extended beyond binary classification to show the transductive and PAC models equivalent more broadly.
arXiv Detail & Related papers (2024-05-08T16:26:49Z)
- On Representation Complexity of Model-based and Model-free Reinforcement Learning [11.843778337443824]
We study the representation complexity of model-based and model-free reinforcement learning (RL) in the context of circuit complexity.
We prove theoretically that there exists a broad class of MDPs whose underlying transition and reward functions can be represented by constant-depth circuits of polynomial size.
By drawing attention to the approximation errors and building connections to complexity theory, our theory provides unique insights into why model-based algorithms usually enjoy better sample complexity than model-free algorithms from a novel representation complexity perspective.
arXiv Detail & Related papers (2023-10-03T00:01:58Z)
- On Learning Latent Models with Multi-Instance Weak Supervision [57.18649648182171]
We consider a weakly supervised learning scenario where the supervision signal is generated by a transition function $\sigma$ applied to the labels associated with multiple input instances.
Our problem arises in different fields, including latent structural learning and neuro-symbolic integration.
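As a concrete illustration of this setup (a minimal sketch, not code from the paper; the bag size, the choice $\sigma = \mathrm{OR}$, and all function names are assumptions), the learner observes bags of instances together with a single label produced by $\sigma$ from the hidden per-instance labels:

```python
import random

def generate_weak_sample(instances, latent_label, sigma):
    """Return a bag of instances with one weak label sigma(y_1, ..., y_k).

    The per-instance labels y_i stay hidden; only their image under the
    transition function sigma is revealed to the learner.
    """
    labels = [latent_label(x) for x in instances]
    return instances, sigma(labels)

# Hypothetical example: hidden concept is "x > 0.5"; sigma aggregates by OR.
latent = lambda x: int(x > 0.5)
sigma_or = lambda ys: int(any(ys))

bag = [random.random() for _ in range(3)]
x_bag, weak_y = generate_weak_sample(bag, latent, sigma_or)
print(x_bag, weak_y)  # the individual labels are never observed
```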
arXiv Detail & Related papers (2023-06-23T22:05:08Z)
- On-Demand Sampling: Learning Optimally from Multiple Distributions [63.20009081099896]
Social and real-world considerations have given rise to multi-distribution learning paradigms.
We establish the optimal sample complexity of these learning paradigms and give algorithms that meet this sample complexity.
Our algorithm design and analysis are enabled by our extensions of online learning techniques for solving zero-sum games.
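The zero-sum-game connection can be illustrated with a generic multiplicative-weights loop over the distributions (a sketch of the standard technique the summary names, not the paper's algorithm; the loss oracle, step size, and round count are assumptions):

```python
import math

def multiplicative_weights(num_dists, loss_of, rounds=100, eta=0.1):
    """Maintain one weight per distribution, upweighting the worst-off ones.

    loss_of(i) should return the learner's current loss on distribution i,
    assumed to lie in [0, 1]. Returns the final normalized weights, an
    approximate strategy for the adversarial (max) player of the game.
    """
    w = [1.0] * num_dists
    for _ in range(rounds):
        losses = [loss_of(i) for i in range(num_dists)]
        # Distributions on which the learner does poorly gain weight.
        w = [wi * math.exp(eta * li) for wi, li in zip(w, losses)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w

# Hypothetical example with fixed per-distribution losses:
print(multiplicative_weights(3, lambda i: [0.2, 0.5, 0.9][i]))
```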
arXiv Detail & Related papers (2022-10-22T19:07:26Z)
- Embed to Control Partially Observed Systems: Representation Learning with Provable Sample Efficiency [105.17746223041954]
Reinforcement learning in partially observed Markov decision processes (POMDPs) faces a central challenge: it often takes the full history to predict the future, which induces a sample complexity that scales exponentially with the horizon.
We propose a reinforcement learning algorithm named Embed to Control (ETC), which learns the representation at two levels while optimizing the policy.
arXiv Detail & Related papers (2022-05-26T16:34:46Z)
- Metric Entropy Duality and the Sample Complexity of Outcome Indistinguishability [7.727052811126007]
In outcome indistinguishability, the goal is to output a predictor that cannot be distinguished from the target predictor.
We show that the sample complexity of outcome indistinguishability is characterized by the metric entropy of $P$ w.r.t. the dual Minkowski norm.
This equivalence makes an intriguing connection to the long-standing metric entropy duality conjecture in convex geometry.
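For reference, a sketch of the standard definition behind this characterization (textbook material, not taken from the abstract): the metric entropy of $P$ at scale $\varepsilon$ is the log-size of its smallest $\varepsilon$-cover under the relevant norm,

```latex
% Covering number and metric entropy of P under a norm \|\cdot\| (standard definitions):
\mathcal{N}(P, \|\cdot\|, \varepsilon)
  = \min\bigl\{ |C| : \forall p \in P \ \exists c \in C,\ \|p - c\| \le \varepsilon \bigr\},
\qquad
H(P, \|\cdot\|, \varepsilon) = \log \mathcal{N}(P, \|\cdot\|, \varepsilon).
```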
arXiv Detail & Related papers (2022-03-09T06:02:31Z)
- A Characterization of Semi-Supervised Adversarially-Robust PAC Learnability [57.502573663108535]
We study the problem of learning a predictor that is adversarially robust to test-time attacks in the semi-supervised PAC model.
We show that there is a significant benefit in semi-supervised robust learning even in the worst-case distribution-free model.
arXiv Detail & Related papers (2022-02-11T03:01:45Z)
- The Sample Complexity of Distribution-Free Parity Learning in the Robust Shuffle Model [11.821892196198457]
We show that the sample complexity of learning $d$-bit parity functions is $\Omega(2^{d/2})$.
We also sketch a simple shuffle model protocol demonstrating that our results are tight up to $\mathrm{poly}(d)$ factors.
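To make the learning target concrete (an illustrative sketch, not from the paper): a $d$-bit parity function XORs the bits of its input indexed by a hidden subset $S$, giving $2^d$ candidate functions in the class.

```python
from itertools import chain, combinations

def parity(x, S):
    """d-bit parity: XOR of the bits of x indexed by the hidden subset S."""
    return sum(x[i] for i in S) % 2

# Enumerate all 2^d parity functions over d = 3 bits, one per subset S.
d = 3
x = (1, 0, 1)
subsets = chain.from_iterable(combinations(range(d), k) for k in range(d + 1))
for S in subsets:
    print(S, parity(x, S))
```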
arXiv Detail & Related papers (2021-03-29T15:26:02Z)
- Enhanced Principal Component Analysis under A Collaborative-Robust Framework [89.28334359066258]
We introduce a general collaborative-robust weight learning framework that combines weight learning and robust loss in a non-trivial way.
Under the proposed framework, only a portion of well-fitting samples is activated and given greater importance during training, while the remaining samples, whose errors are large, are not simply ignored.
In particular, the negative effects of inactivated samples are alleviated by the robust loss function.
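A minimal sketch of this activation idea (one reading of the summary, not the paper's method; the top-$k$ activation rule and the Huber-style robust loss are assumptions):

```python
import numpy as np

def collaborative_robust_weights(errors, k, delta=1.0):
    """Activate the k best-fitting samples; damp, rather than drop, the rest.

    Activated samples receive weight 1 and dominate training; the remaining
    samples still contribute through a Huber-style robust loss, whose linear
    tail keeps large errors from dominating the objective.
    """
    errors = np.asarray(errors, dtype=float)
    active = np.argsort(errors)[:k]              # indices of well-fitting samples
    weights = np.zeros_like(errors)
    weights[active] = 1.0
    robust = np.where(errors <= delta,
                      0.5 * errors**2,                  # quadratic near zero
                      delta * (errors - 0.5 * delta))   # linear tail for outliers
    return weights, robust

# Hypothetical errors for four samples; the two smallest are activated.
w, r = collaborative_robust_weights([0.1, 2.0, 0.3, 5.0], k=2)
print(w, r)
```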
arXiv Detail & Related papers (2021-03-22T15:17:37Z)
- The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise [67.27523616312428]
We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent PAC model.
We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem.
arXiv Detail & Related papers (2020-07-30T04:18:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.