Towards Efficient Contrastive PAC Learning
- URL: http://arxiv.org/abs/2502.15962v1
- Date: Fri, 21 Feb 2025 21:51:01 GMT
- Title: Towards Efficient Contrastive PAC Learning
- Authors: Jie Shen
- Abstract summary: We study contrastive learning under the PAC learning framework. In this paper, we consider contrastive learning of the fundamental concept of linear representations. We establish generalization guarantees based on Rademacher complexity and connect them to PAC guarantees under certain contrastive large-margin conditions.
- Score: 6.209600119671225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study contrastive learning under the PAC learning framework. While a series of recent works have shown statistical results for learning under contrastive loss, based either on the VC-dimension or on Rademacher complexity, their algorithms are either inherently inefficient or do not imply PAC guarantees. In this paper, we consider contrastive learning of the fundamental concept of linear representations. Surprisingly, even in such a basic setting, the existence of efficient PAC learners is largely open. We first show that the problem of contrastive PAC learning of linear representations is intractable to solve in general. We then show that it can be relaxed to a semi-definite program when the distance between contrastive samples is measured by the $\ell_2$-norm. We further establish generalization guarantees based on Rademacher complexity and connect them to PAC guarantees under certain contrastive large-margin conditions. To the best of our knowledge, this is the first efficient PAC learning algorithm for contrastive learning.
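The $\ell_2$ relaxation mentioned in the abstract can be illustrated with a short sketch. The program below is our own minimal illustration under assumed data and parameter names (`X`, `Xp`, `Xn`, `lam`), not the paper's algorithm: replacing a linear map $W$ by the PSD matrix $M = W^\top W$ makes large-margin constraints on squared $\ell_2$ distances between contrastive triplets linear in $M$, so the hinge-loss minimization becomes a semi-definite program solvable with an off-the-shelf solver.

```python
# A minimal sketch (assumed setup, not the paper's exact program) of an
# SDP relaxation in the spirit of the abstract: lift a linear representation
# W to M = W^T W (PSD), so that squared-l2 margin constraints on contrastive
# triplets become affine in M.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, n = 5, 40                              # feature dimension, number of triplets

# Synthetic contrastive triplets (anchor, positive, negative); in practice
# these would be drawn from the contrastive sampling distribution.
X  = rng.normal(size=(n, d))
Xp = X + 0.1 * rng.normal(size=(n, d))    # positives: small perturbations
Xn = rng.normal(size=(n, d))              # negatives: independent draws

M   = cp.Variable((d, d), PSD=True)       # lifted variable M = W^T W
xi  = cp.Variable(n, nonneg=True)         # hinge slacks for margin violations
lam = 0.1                                 # trace regularization weight (our choice)

constraints = []
for i in range(n):
    u = X[i] - Xp[i]                      # anchor-positive difference
    v = X[i] - Xn[i]                      # anchor-negative difference
    # Require the negative to be farther (in squared l2 distance under M)
    # than the positive by a margin of 1, up to slack xi_i; affine in M.
    constraints.append(v @ M @ v - u @ M @ u >= 1 - xi[i])

problem = cp.Problem(cp.Minimize(cp.sum(xi) / n + lam * cp.trace(M)),
                     constraints)
problem.solve()                           # any SDP-capable solver (e.g. SCS)

# A rank-k linear map W can be recovered from the top eigenvectors of M.
print("average hinge loss:", problem.value)
```

The margin of 1 and the trace regularization are choices made for this sketch; the point is only that the lifted problem is convex, which is what makes an efficient learner plausible in this setting.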
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback, where only positive examples are observed during training, is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - Multi-View Majority Vote Learning Algorithms: Direct Minimization of PAC-Bayesian Bounds [0.8039067099377079]
We extend PAC-Bayesian theory to multi-view learning, introducing novel generalization bounds based on Rényi divergence. These bounds provide an alternative to traditional Kullback-Leibler divergence-based counterparts, leveraging the flexibility of Rényi divergence. We also propose first- and second-order oracle PAC-Bayesian bounds and extend the C-bound to multi-view settings.
arXiv Detail & Related papers (2024-11-09T20:25:47Z) - Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation [4.239829789304117]
We use the PAC-Bayesian theory for the setting of learning-to-optimize.
We present the first framework to learn optimization algorithms with provable generalization guarantees.
Our learned algorithms provably outperform related ones derived from a (deterministic) worst-case analysis.
arXiv Detail & Related papers (2024-04-04T08:24:57Z) - PAC Verification of Statistical Algorithms [0.10878040851637999]
We develop the setting of PAC verification, where a hypothesis (machine learning model) that purportedly satisfies the agnostic PAC learning objective is verified using an interactive proof.
We present a protocol for PAC verification of unions of intervals over $\mathbb{R}$ that improves upon their proposed protocol for that task, and matches our lower bound's dependence on $d$.
Our final result is a protocol for the verification of statistical algorithms that satisfy a constraint on their queries.
arXiv Detail & Related papers (2022-11-28T23:57:27Z) - PAC-Bayesian Learning of Optimization Algorithms [6.624726878647541]
We apply the PAC-Bayes theory to the setting of learning-to-optimize.
We learn optimization algorithms with provable generalization guarantees (PAC-bounds) and explicit trade-off between a high probability of convergence and a high convergence speed.
Our results rely on PAC-Bayes bounds for general, unbounded loss-functions based on exponential families.
arXiv Detail & Related papers (2022-10-20T09:16:36Z) - Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation [92.3161051419884]
We study reinforcement learning with linear function approximation.
Existing algorithms only have high-probability regret and/or Probably Approximately Correct (PAC) sample complexity guarantees.
We propose a new algorithm called FLUTE, which enjoys uniform-PAC convergence to the optimal policy with high probability.
arXiv Detail & Related papers (2021-06-22T08:48:56Z) - Self-Bounding Majority Vote Learning Algorithms by the Direct Minimization of a Tight PAC-Bayesian C-Bound [0.0]
We derive self-bounding majority vote learning algorithms based on PAC-Bayesian guarantees on the C-Bound (the standard form of this bound is recalled after this list).
Our algorithms are scalable and lead to accurate predictors paired with non-vacuous guarantees.
arXiv Detail & Related papers (2021-04-28T08:23:18Z) - Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality [131.45028999325797]
We develop a doubly robust off-policy AC (DR-Off-PAC) for discounted MDP.
DR-Off-PAC adopts a single timescale structure, in which both actor and critics are updated simultaneously with constant stepsize.
We study the finite-time convergence rate and characterize the sample complexity for DR-Off-PAC to attain an $\epsilon$-accurate optimal policy.
arXiv Detail & Related papers (2021-02-23T18:56:13Z) - Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration [143.43658264904863]
We show how, under a more standard notion of low inherent Bellman error, typically employed in least-squares value iteration-style algorithms, one can obtain strong PAC guarantees on learning a near-optimal value function.
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near-optimal policy for any (linear) reward function.
arXiv Detail & Related papers (2020-08-18T04:34:21Z) - A Limitation of the PAC-Bayes Framework [32.24251308425503]
We present a limitation of the PAC-Bayes framework.
We demonstrate an easy learning task that is not amenable to a PAC-Bayes analysis.
arXiv Detail & Related papers (2020-06-24T06:36:00Z) - Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also constrained PAC learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z) - PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403]
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)
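As background for the self-bounding C-Bound entry above, the standard PAC-Bayesian C-bound from the majority-vote literature reads as follows; this is the textbook form, while the cited paper minimizes tighter empirical, self-bounding versions of it.

```latex
% Standard C-bound on the risk of the rho-weighted majority vote B_rho,
% in terms of the first two moments of the margin
% M_rho(x,y) = E_{h ~ rho}[ y * h(x) ]  (background form only).
\[
  R(B_\rho) \;\le\; 1 -
  \frac{\big(\mathbb{E}_{(x,y)\sim D}\,[M_\rho(x,y)]\big)^{2}}
       {\mathbb{E}_{(x,y)\sim D}\,[M_\rho(x,y)^{2}]}
  \qquad \text{whenever } \mathbb{E}_{(x,y)\sim D}[M_\rho(x,y)] > 0 .
\]
```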
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.