Is Transductive Learning Equivalent to PAC Learning?
- URL: http://arxiv.org/abs/2405.05190v1
- Date: Wed, 8 May 2024 16:26:49 GMT
- Title: Is Transductive Learning Equivalent to PAC Learning?
- Authors: Shaddin Dughmi, Yusuf Kalayci, Grayson York
- Abstract summary: We show that transductive and PAC learning are essentially equivalent for supervised learning with pseudometric losses in the realizable setting, and for binary classification in the agnostic setting.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most work in the area of learning theory has focused on designing effective Probably Approximately Correct (PAC) learners. Recently, other models of learning, such as transductive error, have seen more scrutiny. We move toward showing that these problems are equivalent by reducing agnostic learning with a PAC guarantee to agnostic learning with a transductive guarantee by adding a small number of samples to the dataset. We first rederive the result of Aden-Ali et al. (arXiv:2304.09167) reducing PAC learning to transductive learning in the realizable setting, using simpler techniques and in greater generality, as background for our main positive result. Our agnostic transductive-to-PAC conversion technique extends the aforementioned argument to the agnostic case, showing that an agnostic transductive learner can be efficiently converted to an agnostic PAC learner. Finally, we characterize the performance of the agnostic one-inclusion graph algorithm of Asilis et al. (arXiv:2309.13692) for binary classification, and show that plugging it into our reduction leads to an agnostic PAC learner that is essentially optimal. Our results imply that transductive and PAC learning are essentially equivalent for supervised learning with pseudometric losses in the realizable setting, and for binary classification in the agnostic setting. We conjecture this is true more generally for the agnostic setting.
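For orientation, the two guarantees being related can be stated as follows. This is the standard formulation; the notation ($A$ for the learner, $\ell$ for the loss, $S_{-i}$ for the sample $S$ with the $i$-th label removed) is ours rather than the paper's. Transductive error charges the learner for predicting a uniformly random held-out label, while PAC error is measured on a fresh example:

$$\mathrm{err}^{\mathrm{trans}}_A(n) = \sup_{S=\{(x_i,y_i)\}_{i=1}^n} \frac{1}{n}\sum_{i=1}^{n} \ell\big(A(S_{-i}, x_i),\, y_i\big), \qquad \mathrm{err}^{\mathrm{PAC}}_A(n) = \sup_{D} \mathbb{E}_{S\sim D^n}\,\mathbb{E}_{(x,y)\sim D}\,\ell\big(A(S, x),\, y\big),$$

with the suprema restricted to realizable samples and distributions in the realizable setting. The paper's reductions show that these two quantities differ only by constant factors and a small increase in the sample size $n$.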
Related papers
- Proper Learnability and the Role of Unlabeled Data
We show that there are problems whose proper learnability is logically undecidable, i.e., independent of the ZFC axioms. We then present impossibility results that obstruct any characterization of proper learnability in the realizable PAC model.
arXiv Detail & Related papers (2025-02-14T18:41:53Z)

- Error Exponent in Agnostic PAC Learning
The Probably Approximately Correct (PAC) framework is widely used to analyze learning problems and algorithms.
In this paper, we consider PAC learning through the error exponent, a well-established analysis method in information theory.
We find, under some assumptions, an improved distribution dependent error exponent for a wide range of problems, establishing the exponential behavior of the PAC error probability in learning.
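For context, an error exponent in the information-theoretic sense is the exponential decay rate of an error probability in the sample size. Applied to PAC learning as the summary suggests, it takes roughly the following form (notation ours, not the paper's):

$$E(\epsilon) = \lim_{n\to\infty} -\frac{1}{n}\log \Pr_{S\sim D^n}\big[\mathrm{err}_D(h_S) > \epsilon\big],$$

so that the failure probability behaves like $e^{-nE(\epsilon)}$ for large $n$.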
arXiv Detail & Related papers (2024-05-01T18:08:03Z)

- Transductive Learning Is Compact
We show a compactness result holding broadly across supervised learning with a general class of loss functions.
For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail.
We conjecture that larger gaps are possible for the agnostic case.
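Schematically, exact compactness of sample complexity says that the sample complexity of a class $H$ is controlled by its finite subclasses; this is our paraphrase of the summary above:

$$m_{H}(\epsilon) = \sup_{\substack{H' \subseteq H \\ |H'| < \infty}} m_{H'}(\epsilon),$$

and, per the entry, this exact form can fail for realizable learning with improper metric losses.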
arXiv Detail & Related papers (2024-02-15T23:10:45Z)

- On Learning Latent Models with Multi-Instance Weak Supervision
We consider a weakly supervised learning scenario where the supervision signal is generated by applying a transition function $\sigma$ to labels associated with multiple input instances.
This problem arises in different fields, including latent structural learning and neuro-symbolic integration.
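As a concrete illustration of such a transition function (our toy example, not the paper's setting): in classical multi-instance learning, $\sigma$ is a logical OR over the hidden instance labels, so the learner observes only one weak label per bag.

```python
# Toy illustration of multi-instance weak supervision (an assumed example,
# not the paper's setting): the learner never sees the per-instance labels,
# only the value of a transition function sigma applied to them. Here sigma
# is a logical OR, as in classical multi-instance learning.

def sigma(labels):
    """Transition function: a bag is positive iff any instance is positive."""
    return int(any(labels))

# Latent per-instance labels (hidden from the learner).
bags = [
    [0, 0, 1],  # observed weak label: 1
    [0, 0, 0],  # observed weak label: 0
    [1, 1, 0],  # observed weak label: 1
]

# The supervision signal the learner actually receives, one value per bag.
weak_labels = [sigma(bag) for bag in bags]
print(weak_labels)  # [1, 0, 1]
```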
arXiv Detail & Related papers (2023-06-23T22:05:08Z)

- Learnability, Sample Complexity, and Hypothesis Class Complexity for Regression Models
This work is inspired by the foundations of PAC learning and motivated by open issues in regression learning.
The proposed approach, denoted $\epsilon$-Confidence Approximately Correct ($\epsilon$-CoAC), utilizes the Kullback-Leibler divergence (relative entropy).
It enables the learner to compare hypothesis classes of different complexity orders and choose among them the optimum with the minimum $\epsilon$.
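For reference, the Kullback-Leibler divergence between distributions $p$ and $q$ is the standard quantity below; how $\epsilon$-CoAC uses it to rank hypothesis classes is detailed in the paper itself:

$$D_{\mathrm{KL}}(p \,\|\, q) = \sum_{x} p(x)\,\log\frac{p(x)}{q(x)}.$$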
arXiv Detail & Related papers (2023-03-28T15:59:12Z)

- Agnostic PAC Learning of k-juntas Using L2-Polynomial Regression
We present a new PAC learning algorithm based on the Fourier expansion with lower computational complexity.
We prove our results by connecting PAC learning with the 0-1 loss to the minimum mean square estimation (MMSE) problem.
We derive an elegant upper bound on the 0-1 loss in terms of the MMSE error and show that the sign of the MMSE estimator is a PAC learner for any concept class containing it.
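A minimal sketch of the regression-then-sign idea described above, assuming a Boolean domain and a noiseless junta target. This is our simplification for illustration, not the paper's algorithm:

```python
# Fit a low-degree polynomial to +/-1 labels by least squares (the MMSE fit
# in this linear-in-features setting), then classify with its sign. On the
# Boolean cube, low-degree monomials are exactly the low-degree Fourier
# characters. A sketch under assumed toy data, not the paper's algorithm.
import itertools
import numpy as np

def monomial_features(X, degree):
    """All products of at most `degree` coordinates of each +/-1 input row."""
    n = X.shape[1]
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in itertools.combinations(range(n), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(500, 6))
y = X[:, 0] * X[:, 2]  # a 2-junta: parity of coordinates 0 and 2

Phi = monomial_features(X, degree=2)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # least-squares fit
preds = np.sign(Phi @ w)                       # threshold at zero
print("training error:", np.mean(preds != y))  # 0.0 on this noiseless toy
```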
arXiv Detail & Related papers (2023-03-08T19:54:07Z)

- Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation
The transition matrix $T(x)$ is unidentifiable under instance-dependent noise (IDN).
We propose an assumption on the geometry of $T(x)$: the closer two instances are, the more similar their corresponding transition matrices should be.
Our method is superior to state-of-the-art approaches for label-noise learning under challenging IDN.
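One standard way to encode the geometric assumption above is a manifold-style smoothness penalty on the transition matrices. A schematic version in our notation, not necessarily the paper's exact objective:

$$\mathcal{R}(T) = \sum_{i,j} w_{ij}\,\big\|T(x_i) - T(x_j)\big\|_F^2,$$

where the weight $w_{ij}$ is large when $x_i$ and $x_j$ are close, so nearby instances are pushed toward similar transition matrices.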
arXiv Detail & Related papers (2022-06-06T04:12:01Z)

- Amortized Inference for Causal Structure Learning
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)

- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
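A toy version of online gradient descent for pairwise learning, sketched under our own assumptions rather than the paper's exact method: each arriving example is paired with a few recent ones, and a gradient step is taken on a pairwise ranking loss.

```python
# Schematic online gradient descent for pairwise learning (our toy sketch,
# not the paper's method): when an example arrives, it is paired with recent
# examples of the opposite label, and we step on a logistic ranking loss
# over score differences.
import numpy as np

def pairwise_grad(w, x_pos, x_neg):
    """Gradient in w of log(1 + exp(-w.(x_pos - x_neg)))."""
    d = x_pos - x_neg
    return -d / (1.0 + np.exp(w @ d))

rng = np.random.default_rng(1)
dim, eta = 5, 0.1
w = np.zeros(dim)
seen = []  # buffer of previously seen examples

for t in range(200):
    x = rng.normal(size=dim)
    y = 1 if x[0] > 0 else -1            # toy labels
    for x_prev, y_prev in seen[-5:]:     # pair with a few recent examples
        if y != y_prev:
            pos, neg = (x, x_prev) if y == 1 else (x_prev, x)
            w -= eta * pairwise_grad(w, pos, neg)
    seen.append((x, y))
print(w)  # the first coordinate should dominate
```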
arXiv Detail & Related papers (2021-11-23T18:10:48Z)

- Agnostic Proper Learning of Halfspaces under Gaussian Marginals
We study the problem of agnostically learning halfspaces under the Gaussian distribution.
Our main result is the first proper learning algorithm for this problem.
arXiv Detail & Related papers (2021-02-10T18:40:44Z)

- The Optimality of Polynomial Regression for Agnostic Learning under Gaussian Marginals
We develop a method for finding hard families of examples for a wide class of problems by using LP duality.
We show that $L_1$-regression is essentially best possible, and therefore that the computational difficulty of learning a concept class is closely related to the degree required to approximate any function from the class in the $L_1$-norm.
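Concretely, the $L_1$-polynomial-regression learner referenced here fits a degree-$d$ polynomial under absolute loss and classifies with its sign (standard formulation, notation ours):

$$\hat{p} \in \arg\min_{\deg p \le d}\ \mathbb{E}_{(x,y)\sim D}\,\big|p(x) - y\big|, \qquad \hat{h}(x) = \mathrm{sign}\big(\hat{p}(x)\big).$$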
arXiv Detail & Related papers (2021-02-08T18:06:32Z)

- The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise
We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent PAC model.
We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem.
arXiv Detail & Related papers (2020-07-30T04:18:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.