List and Certificate Complexities in Replicable Learning
- URL: http://arxiv.org/abs/2304.02240v1
- Date: Wed, 5 Apr 2023 06:05:27 GMT
- Title: List and Certificate Complexities in Replicable Learning
- Authors: Peter Dixon, A. Pavan, Jason Vander Woude, N. V. Vinodchandran
- Abstract summary: We consider two feasible notions of replicability called list replicability and certificate replicability.
We design algorithms for certain learning problems that are optimal in list and certificate complexity.
- Score: 0.7829352305480285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate replicable learning algorithms. Ideally, we would like to
design algorithms that output the same canonical model over multiple runs, even
when different runs observe a different set of samples from the unknown data
distribution. In general, such a strong notion of replicability is not
achievable. Thus we consider two feasible notions of replicability called list
replicability and certificate replicability. Intuitively, these notions capture
the degree of (non) replicability. We design algorithms for certain learning
problems that are optimal in list and certificate complexity. We establish
matching impossibility results.
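To make the rounding idea behind such replicable algorithms concrete, here is a minimal Python sketch of the classic randomized-rounding construction from the replicability literature: a coin-bias estimator whose output is a canonical grid point, so two runs that share the same random offset (the shared "certificate" randomness) agree with high probability. All names and parameters below are illustrative, not taken from the paper.

```python
import random

def replicable_bias_estimate(samples, grid_width=0.1, shared_seed=0):
    """Estimate a coin's bias so that independent runs usually agree.

    Runs sharing `shared_seed` use the same random grid offset; since
    each run's empirical mean concentrates near the true bias, both
    means typically round to the same grid point, and both runs return
    the identical canonical value.
    """
    rng = random.Random(shared_seed)         # shared randomness ("certificate")
    offset = rng.uniform(0.0, grid_width)    # random shift of the rounding grid
    mean = sum(samples) / len(samples)       # run-specific empirical estimate
    k = round((mean - offset) / grid_width)  # nearest shifted grid point
    return offset + k * grid_width

def coin_flips(n, seed, bias=0.63):
    rng = random.Random(seed)
    return [1 if rng.random() < bias else 0 for _ in range(n)]

# Two runs on disjoint samples from the same coin agree with high probability:
print(replicable_bias_estimate(coin_flips(10000, seed=1), shared_seed=42))
print(replicable_bias_estimate(coin_flips(10000, seed=2), shared_seed=42))
```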
Related papers
- On the Computational Landscape of Replicable Learning [40.274579987732416]
We study computational aspects of algorithmic replicability, a notion of stability introduced by Impagliazzo, Lei, Pitassi, and Sorrell.
Motivated by a recent line of work that established strong statistical connections between replicability and other notions of learnability, we aim to understand better the computational connections between replicability and these learning paradigms.
arXiv Detail & Related papers (2024-05-24T14:30:40Z)
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator reliably enriches paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure once, with symbolic parameters that are later replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z)
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm, that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- Replicable Reinforcement Learning [15.857503103543308]
We provide a provably replicable algorithm for parallel value iteration, and a provably replicable version of R-max in the episodic setting.
These are the first formal replicability results for control problems, which present different challenges for replication than batch learning settings.
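The summary does not spell out the mechanism; a common ingredient in replicable algorithms is snapping estimates to a randomly shifted grid drawn from shared randomness. The sketch below applies that rounding idea to value iteration as an illustration only; it is not the authors' algorithm, and every name and parameter is hypothetical.

```python
import random

def replicable_round(x, width, offset):
    """Snap x to the grid {offset + k*width}; runs that share `offset`
    agree whenever their estimates land in the same grid cell."""
    return offset + round((x - offset) / width) * width

def rounded_value_iteration(P_hat, R, gamma, width=0.05, shared_seed=0, iters=100):
    """Value iteration on an estimated model, with each Bellman backup
    snapped to a shared random grid.

    P_hat[s][a] is a dict {next_state: probability} estimated from data;
    R[s][a] is the reward. Two runs whose model estimates are close and
    that share `shared_seed` produce the same final value table.
    """
    rng = random.Random(shared_seed)   # shared across runs
    offset = rng.uniform(0.0, width)   # one shared grid offset
    V = {s: 0.0 for s in P_hat}
    for _ in range(iters):
        V = {s: replicable_round(
                 max(R[s][a] + gamma * sum(p * V[t] for t, p in P_hat[s][a].items())
                     for a in P_hat[s]),
                 width, offset)
             for s in P_hat}
    return V
```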
arXiv Detail & Related papers (2023-05-24T16:05:15Z)
- Replicability and stability in learning [16.936594801109557]
Impagliazzo, Lei, Pitassi and Sorrell (2022) recently initiated the study of replicability in machine learning.
We show how to boost any replicable algorithm so that it produces the same output with probability arbitrarily close to 1.
We prove that list replicability can be boosted so that it is achieved with probability arbitrarily close to 1.
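One standard way to amplify agreement is sketched below: run the base algorithm several times with independent shared-random seeds and return the plurality output. This is an illustrative sketch rather than the paper's construction; `base_algorithm`, `sample_batches`, and `shared_seeds` are hypothetical names, and outputs are assumed hashable.

```python
from collections import Counter

def amplify(base_algorithm, sample_batches, shared_seeds):
    """Return the plurality output over several runs of a replicable
    base algorithm, one run per (sample batch, shared seed) pair.

    If a single run of two executions agrees with probability noticeably
    above 1/2, the plurality answer disagrees between two executions only
    when many individual runs flip at once, which becomes exponentially
    unlikely as the number of runs grows.
    """
    outputs = [base_algorithm(batch, seed)
               for batch, seed in zip(sample_batches, shared_seeds)]
    return Counter(outputs).most_common(1)[0][0]
```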
arXiv Detail & Related papers (2023-04-07T17:52:26Z)
- End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking [52.05847268235338]
We show how machine learning systems can perform logical extrapolation without overthinking problems.
We propose a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten.
We also employ a progressive training routine that prevents the model from learning behaviors that are specific to iteration number and instead pushes it to learn behaviors that can be repeated indefinitely.
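A toy numpy sketch of the recall idea follows: re-concatenate the original problem instance to the hidden state at every recurrent step, so a deeper test-time unroll cannot erase the input. This stands in for the paper's architecture only at the level of the idea (the actual model is a deep network), and every name below is illustrative.

```python
import numpy as np

def recall_forward(x, weights, iters):
    """Iterate a recurrent block that sees [hidden_state, x] at every
    step, so the original problem instance can never be forgotten,
    no matter how many extra iterations are unrolled at test time.
    """
    h = np.zeros_like(x)
    for _ in range(iters):                       # same weights each step
        h = np.tanh(np.concatenate([h, x]) @ weights)
    return h

dim = 8
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(2 * dim, dim))   # maps [h, x] back to h
x = rng.normal(size=dim)
print(recall_forward(x, W, iters=5))             # trained-depth unroll
print(recall_forward(x, W, iters=50))            # deeper test-time unroll
```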
arXiv Detail & Related papers (2022-02-11T18:43:28Z)
- From Undecidability of Non-Triviality and Finiteness to Undecidability of Learnability [0.0]
We show that there is no general-purpose procedure for rigorously evaluating whether newly proposed models indeed successfully learn from data.
For PAC binary classification, uniform and universal online learning, and exact learning through teacher-learner interactions, learnability is in general undecidable.
There is no one-size-fits-all algorithm for deciding whether a machine learning model can be successful.
arXiv Detail & Related papers (2021-06-02T18:00:04Z)
- Finding Geometric Models by Clustering in the Consensus Space [61.65661010039768]
We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies.
We present a number of applications where the use of multiple geometric models improves accuracy.
These include pose estimation from multiple generalized homographies and trajectory estimation of fast-moving objects.
arXiv Detail & Related papers (2021-03-25T14:35:07Z)
- Theory and Algorithms for Shapelet-based Multiple-Instance Learning [5.08418565337126]
We propose a new formulation of Multiple-Instance Learning (MIL), in which a unit of data, called a bag, consists of a set of instances.
The goal is to find a good classifier of bags based on similarity with a "shapelet" (or pattern).
In our formulation, we use all possible, and thus infinitely many, shapelets, resulting in a richer class of classifiers.
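A minimal sketch of the bag-classifier shape this formulation implies: score a bag by its best-matching instance against a shapelet and threshold the score. The paper optimizes over an infinite shapelet class; this fixed-shapelet numpy example is illustrative only, and all names are made up.

```python
import numpy as np

def shapelet_bag_score(bag, shapelet):
    """Score a bag by its best-matching instance: the classifier fires
    when some instance in the bag is sufficiently similar to the
    shapelet. Similarity here is negative squared distance.
    """
    return max(-np.sum((inst - shapelet) ** 2) for inst in bag)

def classify_bag(bag, shapelet, threshold):
    return 1 if shapelet_bag_score(bag, shapelet) >= threshold else -1

bag = [np.array([0.1, 0.2]), np.array([0.9, 1.1])]   # a bag of two instances
pattern = np.array([1.0, 1.0])                        # a candidate shapelet
print(classify_bag(bag, pattern, threshold=-0.1))     # -> 1 (second instance matches)
```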
arXiv Detail & Related papers (2020-05-31T17:10:59Z)
- Consistency of a Recurrent Language Model With Respect to Incomplete Decoding [67.54760086239514]
We study the issue of recurrent language models generating infinite-length sequences when decoding is incomplete.
We propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model.
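A small sketch of the consistent top-k idea: keep the end-of-sequence token in the candidate set at every step, so decoding always has a nonzero chance of terminating. This is an assumption-laden illustration rather than the authors' code; `eos_id` and the toy logits are made up.

```python
import numpy as np

def consistent_top_k_sample(logits, k, eos_id, rng):
    """Sample from the k highest-scoring tokens, but always keep the
    end-of-sequence token among the candidates.

    Plain top-k can exclude EOS forever, so decoding may never halt;
    forcing EOS into the candidate set guarantees termination has
    nonzero probability at every step.
    """
    top = np.argsort(logits)[-k:]          # indices of the k best tokens
    if eos_id not in top:
        top = np.append(top[1:], eos_id)   # drop the weakest, keep EOS
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

rng = np.random.default_rng(0)
vocab_logits = np.array([2.0, 1.5, 0.3, -1.0])   # pretend token 3 is EOS
print(consistent_top_k_sample(vocab_logits, k=2, eos_id=3, rng=rng))
```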
arXiv Detail & Related papers (2020-02-06T19:56:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.