A PAC-Bayesian Perspective on Structured Prediction with Implicit Loss
Embeddings
- URL: http://arxiv.org/abs/2012.03780v2
- Date: Mon, 21 Dec 2020 17:20:30 GMT
- Title: A PAC-Bayesian Perspective on Structured Prediction with Implicit Loss
Embeddings
- Authors: Théophile Cantelobre, Benjamin Guedj, María Pérez-Ortiz,
and John Shawe-Taylor
- Abstract summary: PAC-Bayes has gained interest recently for its capacity to produce tight risk bounds for predictor distributions.
We present two generalization bounds, on the risk and excess risk, which yield insights into the behavior of ILE predictors.
Two learning algorithms are derived from these bounds.
- Score: 11.711761707845865
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many practical machine learning tasks can be framed as structured
prediction problems, where several output variables are predicted and
considered interdependent. Recent theoretical advances in structured
prediction have focused on obtaining fast-rate convergence guarantees,
especially in the Implicit Loss Embedding (ILE) framework. PAC-Bayes has
gained interest recently for its capacity to produce tight risk bounds for
predictor distributions. This work proposes a novel PAC-Bayes perspective on
the ILE structured prediction framework. We present two generalization
bounds, on the risk and excess risk, which yield insights into the behavior
of ILE predictors. Two learning algorithms are derived from these bounds. The
algorithms are implemented and their behavior analyzed, with source code
available at
\url{https://github.com/theophilec/PAC-Bayes-ILE-Structured-Prediction}.
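The paper's own risk and excess-risk bounds are not reproduced in this abstract, but a classical PAC-Bayes bound (McAllester-style, in Maurer's tightened form) conveys the general shape of such results. For a prior $\pi$ and posterior $\rho$ over predictors, empirical risk $\hat{R}$ on $n$ samples, and population risk $R$, with probability at least $1-\delta$ over the sample:

```latex
\mathbb{E}_{h \sim \rho}\big[R(h)\big]
\;\le\;
\mathbb{E}_{h \sim \rho}\big[\hat{R}(h)\big]
+ \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

The bound holds simultaneously for all posteriors $\rho$, which is what allows a learning algorithm to be derived by minimizing the right-hand side over $\rho$.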
Related papers
- Efficient pooling of predictions via kernel embeddings [0.24578723416255752]
Probabilistic predictions are probability distributions over the set of possible outcomes.
They are typically combined by linearly pooling the individual predictive distributions.
Weights assigned to each prediction can be estimated based on their past performance.
This can be achieved by finding the weights that optimise a proper scoring rule over some training data.
arXiv Detail & Related papers (2024-11-25T10:04:37Z)
- Conformal Prediction for Hierarchical Data [5.580128181112309]
We propose a first step towards combining Conformal Prediction and Forecast Reconciliation.
We show that the validity granted by Split Conformal Prediction (SCP) remains while improving the efficiency of the prediction sets.
arXiv Detail & Related papers (2024-11-20T17:26:26Z)
- PageRank Bandits for Link Prediction [72.61386754332776]
Link prediction is a critical problem in graph learning with broad applications such as recommender systems and knowledge graph completion.
This paper reformulates link prediction as a sequential decision-making process, in which link predictions are made one interaction at a time.
We propose a novel fusion algorithm, PRB (PageRank Bandits), which is the first to combine contextual bandits with PageRank for collaborative exploitation and exploration.
arXiv Detail & Related papers (2024-11-03T02:39:28Z)
- Paging with Succinct Predictions [25.959849403994202]
We study learning-augmented paging from the new perspective of requiring the least possible amount of predicted information.
We develop algorithms for each of two setups that satisfy all three desirable properties of learning-augmented algorithms.
arXiv Detail & Related papers (2022-10-06T09:26:34Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- User-friendly introduction to PAC-Bayes bounds [0.6599344783327052]
In statistical learning theory there is a set of tools designed to understand the generalization ability of procedures: PAC-Bayesian or PAC-Bayes bounds.
Very recently, PAC-Bayes bounds have received considerable attention: for example, there was a workshop on "PAC-Bayesian trends and insights", organized by B. Guedj, F. Bach and P. Germain.
arXiv Detail & Related papers (2021-10-21T15:50:05Z)
- Bayesian decision-making under misspecified priors with applications to meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
arXiv Detail & Related papers (2021-07-03T23:17:26Z)
- PAC$^m$-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime [75.19403612525811]
This work develops a multi-sample loss which can close the gap by spanning a trade-off between the two risks.
Empirical study demonstrates improvement to the predictive distribution.
arXiv Detail & Related papers (2020-10-19T16:08:34Z)
- Learning Output Embeddings in Structured Prediction [73.99064151691597]
A powerful and flexible approach to structured prediction consists in embedding the structured objects to be predicted into a feature space of possibly infinite dimension.
A prediction in the original space is computed by solving a pre-image problem.
In this work, we propose to jointly learn a finite approximation of the output embedding and the regression function into the new feature space.
arXiv Detail & Related papers (2020-07-29T09:32:53Z)
- A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings [113.15416137912399]
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
arXiv Detail & Related papers (2020-02-13T10:30:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all summaries) and is not responsible for any consequences arising from its use.