Learning Output Embeddings in Structured Prediction
- URL: http://arxiv.org/abs/2007.14703v3
- Date: Mon, 2 Nov 2020 11:39:29 GMT
- Title: Learning Output Embeddings in Structured Prediction
- Authors: Luc Brogat-Motte, Alessandro Rudi, Céline Brouard, Juho Rousu, Florence d'Alché-Buc
- Abstract summary: A powerful and flexible approach to structured prediction consists in embedding the structured objects to be predicted into a feature space of possibly infinite dimension.
A prediction in the original space is computed by solving a pre-image problem.
In this work, we propose to jointly learn a finite approximation of the output embedding and the regression function into the new feature space.
- Score: 73.99064151691597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A powerful and flexible approach to structured prediction consists in
embedding the structured objects to be predicted into a feature space of
possibly infinite dimension by means of output kernels, and then, solving a
regression problem in this output space. A prediction in the original space is
computed by solving a pre-image problem. In such an approach, the embedding,
linked to the target loss, is defined prior to the learning phase. In this
work, we propose to jointly learn a finite approximation of the output
embedding and the regression function into the new feature space. For that
purpose, we leverage a priori information on the outputs and also unexploited
unsupervised output data, which are both often available in structured
prediction problems. We prove that the resulting structured predictor is a
consistent estimator, and derive an excess risk bound. Moreover, the novel
structured prediction tool enjoys a significantly smaller computational
complexity than earlier output kernel methods. The approach, empirically tested
on various structured prediction problems, proves to be versatile and able to
handle large datasets.
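The abstract's pipeline can be made concrete with a short sketch. The following is an illustrative toy implementation, not the authors' code: outputs are assumed to be pre-vectorized, the Gaussian kernel on both sides is an assumed choice, the finite output embedding comes from a rank-p spectral decomposition of an output kernel matrix that includes unlabeled outputs, kernel ridge regression maps inputs into that embedding, and the pre-image step decodes by nearest candidate.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_structured_predictor(X, Y_sup, Y_unsup, p=10, lam=1e-3, gamma=1.0):
    """Toy sketch: learn a rank-p output embedding from supervised and
    unsupervised outputs, then kernel-ridge-regress inputs onto it."""
    Y_all = np.vstack([Y_sup, Y_unsup])           # exploit unlabeled outputs
    Ky = gaussian_kernel(Y_all, Y_all, gamma)     # output kernel matrix
    vals, vecs = np.linalg.eigh(Ky)               # ascending eigenvalues
    top = np.argsort(vals)[::-1][:p]              # keep the p largest
    U = vecs[:, top] / np.sqrt(np.maximum(vals[top], 1e-12))
    embed = lambda Y: gaussian_kernel(Y, Y_all, gamma) @ U   # finite embedding
    Kx = gaussian_kernel(X, X, gamma)
    # Multi-output kernel ridge regression into the p-dimensional embedding.
    alpha = np.linalg.solve(Kx + lam * np.eye(len(X)), embed(Y_sup))

    def predict(X_new, candidates):
        # Pre-image step: decode to the candidate nearest in embedding space.
        G = gaussian_kernel(X_new, X, gamma) @ alpha    # predicted embeddings
        C = embed(candidates)
        idx = ((G[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        return candidates[idx]

    return predict
```

Here the rank p caps the embedding dimension, which is where the computational savings over working with the full output kernel would come from; candidates for decoding would typically be drawn from the supervised and unsupervised output sets.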
Related papers
- Learning Differentiable Surrogate Losses for Structured Prediction [23.15754467559003]
We introduce a novel framework in which a structured loss function, parameterized by neural networks, is learned directly from output training data.
Because the surrogate space is finite-dimensional, the resulting differentiable loss not only enables the training of neural networks but also allows new output structures to be predicted.
arXiv Detail & Related papers (2024-11-18T16:07:47Z)
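Taking the summary above at face value, one plausible reading (an assumption on our part, not a description of the paper's actual architecture) is that a neural network embeds outputs and the squared distance in that learned space acts as the differentiable surrogate loss:

```python
import torch
import torch.nn as nn

# Hypothetical output-embedding network learned from output training data;
# 32 and 16 are arbitrary illustrative dimensions.
embed_out = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def surrogate_loss(pred_emb, y):
    # Differentiable loss: squared distance to the learned output embedding.
    # Predicting a new structure would search for y minimizing this distance.
    return ((pred_emb - embed_out(y)) ** 2).sum(-1).mean()
```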
- Structured Prediction in Online Learning [66.36004256710824]
We study a theoretical and algorithmic framework for structured prediction in the online learning setting.
We show that our algorithm is a generalisation of optimal algorithms from the supervised learning setting.
We consider a second algorithm designed especially for non-stationary data distributions, including adversarial data.
arXiv Detail & Related papers (2024-06-18T07:45:02Z)
- Deep Sketched Output Kernel Regression for Structured Prediction [21.93695380726788]
Kernel-induced losses provide a principled way to define structured output prediction tasks.
We tackle the question of how to train neural networks to solve structured output prediction tasks.
arXiv Detail & Related papers (2024-06-13T15:56:55Z)
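The general recipe the entry above points at can be sketched as follows, with all concrete choices (Gaussian output kernel, Nyström sketch, layer sizes) assumed for illustration: build a sketched feature map of the output kernel from m anchor outputs, then train a network to regress inputs onto those m-dimensional embeddings.

```python
import numpy as np
import torch.nn as nn

def nystrom_map(Y, anchors, gamma=1.0):
    """Sketched (Nystrom) feature map of a Gaussian output kernel;
    inner products of these features approximate the kernel."""
    k = lambda A, B: np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))
    Kmm = k(anchors, anchors) + 1e-8 * np.eye(len(anchors))  # jitter for stability
    vals, vecs = np.linalg.eigh(Kmm)
    return k(Y, anchors) @ (vecs / np.sqrt(vals))   # shape (n, m)

# A network regressing inputs onto the m-dimensional sketched embedding;
# training would minimize the squared loss induced by the output kernel.
m = 20
net = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, m))
```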
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Neuro-Symbolic Entropy Regularization [78.16196949641079]
In structured prediction, the goal is to jointly predict many output variables that together encode a structured object.
One approach -- entropy regularization -- posits that decision boundaries should lie in low-probability regions.
We propose a loss, neuro-symbolic entropy regularization, that encourages the model to confidently predict a valid object.
arXiv Detail & Related papers (2022-01-25T06:23:10Z)
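A toy rendering of the idea in the entry above (the brute-force enumeration and the validity mask are assumptions; the paper itself would need a tractable way to do this over large structured spaces): penalize the entropy of the model's distribution after restricting it to valid structures, so that confidence and validity are encouraged together.

```python
import torch

def neuro_symbolic_entropy(logits, valid_mask):
    """Toy sketch: entropy of the predicted distribution restricted to valid
    structures. logits and valid_mask both have shape (num_structures,);
    enumerating all structures only works for tiny output spaces."""
    masked = logits.masked_fill(~valid_mask, float("-inf"))
    p = torch.softmax(masked, dim=-1)            # renormalized over valid outputs
    logp = torch.log_softmax(masked, dim=-1)
    return -(p * logp).nan_to_num().sum()        # low entropy: confident and valid

# Combined objective (lam is a hypothetical weight):
# loss = cross_entropy(logits, target) + lam * neuro_symbolic_entropy(logits, mask)
```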
- Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
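The quantity maximized in the entry above can be sketched under a Gaussian stationarity assumption (an assumed simplification, not necessarily the paper's estimator): the mutual information between length-T past and future windows of the latent sequence reduces to a difference of log-determinants.

```python
import numpy as np

def gaussian_predictive_info(Z, T):
    """Sketch: Gaussian estimate of I(past; future) for windows of length T
    in a latent sequence Z of shape (time, dim)."""
    n, d = Z.shape
    # Stack consecutive windows of length 2T, flattened to vectors.
    W = np.stack([Z[t:t + 2 * T].ravel() for t in range(n - 2 * T + 1)])
    cov = np.cov(W, rowvar=False) + 1e-6 * np.eye(2 * T * d)
    half = T * d                                  # past = first half of dims
    logdet = lambda M: np.linalg.slogdet(M)[1]
    # I = H(past) + H(future) - H(joint) for Gaussian variables.
    return 0.5 * (logdet(cov[:half, :half])
                  + logdet(cov[half:, half:]) - logdet(cov))
```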
- A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings [113.15416137912399]
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
arXiv Detail & Related papers (2020-02-13T10:30:04Z)
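The implicit formulation in this last entry admits a compact sketch (kernel choice and regularization assumed for illustration): score every candidate output against the training outputs through the loss, with input-dependent weights obtained from kernel ridge regression.

```python
import numpy as np

def ile_predict(X_train, Y_train, x_new, candidates, loss, lam=1e-3, gamma=1.0):
    """Sketch of the implicit-loss-embedding estimator:
    yhat(x) = argmin over candidates y of sum_i alpha_i(x) * loss(y, y_i)."""
    k = lambda A, B: np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))
    n = len(X_train)
    # alpha(x) from kernel ridge regression on the inputs.
    alpha = np.linalg.solve(k(X_train, X_train) + lam * n * np.eye(n),
                            k(X_train, x_new[None]))[:, 0]
    scores = [sum(a * loss(c, y) for a, y in zip(alpha, Y_train))
              for c in candidates]
    return candidates[int(np.argmin(scores))]
```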