A General Framework for Consistent Structured Prediction with Implicit
Loss Embeddings
- URL: http://arxiv.org/abs/2002.05424v1
- Date: Thu, 13 Feb 2020 10:30:04 GMT
- Title: A General Framework for Consistent Structured Prediction with Implicit
Loss Embeddings
- Authors: Carlo Ciliberto, Lorenzo Rosasco, Alessandro Rudi
- Abstract summary: We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose and analyze a novel theoretical and algorithmic framework for
structured prediction. While so far the term has referred to discrete output
spaces, here we consider more general settings, such as manifolds or spaces of
probability measures. We define structured prediction as a problem where the
output space lacks a vectorial structure. We identify and study a large class
of loss functions that implicitly defines a suitable geometry on the problem.
The latter is the key to developing an algorithmic framework amenable to a sharp
statistical analysis and yielding efficient computations. When dealing with
output spaces with infinite cardinality, a suitable implicit formulation of the
estimator is shown to be crucial.
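As a rough illustration of the kind of estimator the abstract alludes to, the following is a minimal sketch, not the paper's implementation: it computes kernel-ridge weights over the training outputs and then decodes by minimizing the weighted loss over a finite candidate set. The Gaussian kernel, the regularization parameter, and the explicit candidate set are all assumptions made here for illustration.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_weights(X, lam=1e-3, gamma=1.0):
    # Precompute the kernel-ridge system (K + n*lam*I) on the inputs.
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    A = K + n * lam * np.eye(n)
    return X, A, gamma

def predict(model, x, Y, loss, candidates):
    # alpha(x) = (K + n*lam*I)^{-1} k_x: a weight per training output.
    X, A, gamma = model
    kx = gaussian_kernel(X, x[None, :], gamma)[:, 0]
    alpha = np.linalg.solve(A, kx)
    # Decode: argmin over candidate outputs of the alpha-weighted loss.
    scores = [sum(a * loss(z, yi) for a, yi in zip(alpha, Y))
              for z in candidates]
    return candidates[int(np.argmin(scores))]
```

Note that the decoding step never needs a vectorial structure on the output space: it only evaluates the loss between candidates and training outputs, which is the sense in which the loss "implicitly" supplies the geometry.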
Related papers
- On Probabilistic Embeddings in Optimal Dimension Reduction [1.2085509610251701]
Dimension reduction algorithms are a crucial part of many data science pipelines.
Despite their wide utilization, many non-linear dimension reduction algorithms are poorly understood from a theoretical perspective.
arXiv Detail & Related papers (2024-08-05T12:46:21Z) - Structured Prediction in Online Learning [66.36004256710824]
We study a theoretical and algorithmic framework for structured prediction in the online learning setting.
We show that our algorithm is a generalisation of optimal algorithms from the supervised learning setting.
We consider a second algorithm designed especially for non-stationary data distributions, including adversarial data.
arXiv Detail & Related papers (2024-06-18T07:45:02Z) - Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning [83.41487567765871]
Skipper is a model-based reinforcement learning framework.
It automatically decomposes the given task into smaller, more manageable subtasks.
It enables sparse decision-making and focused abstractions on the relevant parts of the environment.
arXiv Detail & Related papers (2023-09-30T02:25:18Z) - On Certified Generalization in Structured Prediction [1.0152838128195467]
In structured prediction, target objects have rich internal structure which does not factorize into independent components.
We present a novel PAC-Bayesian risk bound for structured prediction wherein the rate of generalization scales not only with the number of structured examples but also with their size.
arXiv Detail & Related papers (2023-06-15T13:15:26Z) - Neuro-Symbolic Entropy Regularization [78.16196949641079]
In structured prediction, the goal is to jointly predict many output variables that together encode a structured object.
One approach -- entropy regularization -- posits that decision boundaries should lie in low-probability regions.
We propose a loss, neuro-symbolic entropy regularization, that encourages the model to confidently predict a valid object.
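The entropy-regularization idea in this entry can be sketched in its plain form. This is a generic baseline, not the paper's method: the neuro-symbolic variant restricts the entropy term to the distribution over *valid* structured objects, which requires a logical constraint representation not shown here. The function names and the `beta` weight are illustrative.

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy of a categorical distribution p.
    return -np.sum(p * np.log(p + eps))

def regularized_loss(p, target, beta=0.1):
    # Cross-entropy on the target class plus an entropy penalty that
    # pushes the predictive distribution toward confident, low-entropy
    # outputs (i.e., decision boundaries in low-probability regions).
    ce = -np.log(p[target] + 1e-12)
    return ce + beta * entropy(p)
```

A confident correct prediction thus incurs a lower regularized loss than a uniform one, which is the behavior the entropy term is meant to encourage.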
arXiv Detail & Related papers (2022-01-25T06:23:10Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Learning Output Embeddings in Structured Prediction [73.99064151691597]
A powerful and flexible approach to structured prediction consists in embedding the structured objects to be predicted into a feature space of possibly infinite dimension.
A prediction in the original space is computed by solving a pre-image problem.
In this work, we propose to jointly learn a finite approximation of the output embedding and the regression function into the new feature space.
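The embed/regress/pre-image pipeline this entry builds on can be sketched with a fixed output embedding; this is a simplified baseline, not the paper's jointly learned finite approximation. The one-hot embedding, ridge regularizer, and candidate-based pre-image step are assumptions made here for illustration.

```python
import numpy as np

def ridge_fit(Phi_x, Psi_y, lam=1e-2):
    # Ridge regression from input features to output embeddings:
    # W = (Phi^T Phi + lam*I)^{-1} Phi^T Psi.
    d = Phi_x.shape[1]
    return np.linalg.solve(Phi_x.T @ Phi_x + lam * np.eye(d),
                           Phi_x.T @ Psi_y)

def preimage(W, phi_x, Psi_candidates, candidates):
    # Pre-image step: map the input into the output feature space,
    # then return the candidate whose embedding is closest.
    g = phi_x @ W
    d2 = ((Psi_candidates - g) ** 2).sum(axis=1)
    return candidates[int(np.argmin(d2))]
```

The paper's contribution is to replace the fixed embedding above with a finite-dimensional embedding learned jointly with the regression, rather than chosen a priori.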
arXiv Detail & Related papers (2020-07-29T09:32:53Z) - Computing Large-Scale Matrix and Tensor Decomposition with Structured
Factors: A Unified Nonconvex Optimization Perspective [33.19643734230432]
This article aims at offering a comprehensive tutorial for the computational aspects of structured matrix and tensor factorization.
We start with general optimization theory that covers a wide range of factorization problems with diverse constraints.
Then, we go 'under the hood' to showcase specific algorithm designs under these introduced principles.
arXiv Detail & Related papers (2020-06-15T07:19:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences arising from its use.