Tailoring: encoding inductive biases by optimizing unsupervised
objectives at prediction time
- URL: http://arxiv.org/abs/2009.10623v5
- Date: Mon, 6 Sep 2021 15:26:52 GMT
- Title: Tailoring: encoding inductive biases by optimizing unsupervised
objectives at prediction time
- Authors: Ferran Alet, Maria Bauza, Kenji Kawaguchi, Nurullah Giray Kuru, Tomas
Lozano-Perez, Leslie Pack Kaelbling
- Abstract summary: Adding auxiliary losses to the main objective function is a general way of encoding biases that can help networks learn better representations.
In this work we take inspiration from transductive learning and note that after receiving an input, we can fine-tune our networks on any unsupervised loss.
We formulate meta-tailoring, a nested optimization similar to that in meta-learning, and train our models to perform well on the task objective after adapting them using an unsupervised loss.
- Score: 34.03150701567508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: From CNNs to attention mechanisms, encoding inductive biases into neural
networks has been a fruitful source of improvement in machine learning. Adding
auxiliary losses to the main objective function is a general way of encoding
biases that can help networks learn better representations. However, since
auxiliary losses are minimized only on training data, they suffer from the same
generalization gap as regular task losses. Moreover, by adding a term to the
loss function, the model optimizes a different objective than the one we care
about. In this work we address both problems: first, we take inspiration from
transductive learning and note that after receiving an input but
before making a prediction, we can fine-tune our networks on any unsupervised
loss. We call this process tailoring, because we customize the model to
each input to ensure our prediction satisfies the inductive bias. Second, we
formulate meta-tailoring, a nested optimization similar to that in
meta-learning, and train our models to perform well on the task objective after
adapting them using an unsupervised loss. The advantages of tailoring and
meta-tailoring are discussed theoretically and demonstrated empirically on a
diverse set of examples.
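The following is a minimal PyTorch-style sketch, not the authors' code, of the two ideas described above: tailoring fine-tunes a copy of the network on an unsupervised loss for each test input before predicting, and meta-tailoring trains the shared parameters so that the adapted model performs well on the task loss. The functions `unsup_loss` and `task_loss`, the step counts, and the learning rates are illustrative assumptions, and the outer update uses a first-order approximation rather than the full nested gradient.

```python
import copy
import torch


def tailor_predict(model, x, unsup_loss, steps=3, lr=1e-3):
    """Tailoring: adapt a copy of the trained model to the test input x
    by minimizing an unsupervised loss, then predict with the adapted copy."""
    tailored = copy.deepcopy(model)
    opt = torch.optim.SGD(tailored.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        unsup_loss(tailored, x).backward()  # no label needed at this point
        opt.step()
    with torch.no_grad():
        return tailored(x)


def meta_tailoring_step(model, x, y, task_loss, unsup_loss, outer_opt,
                        inner_steps=1, inner_lr=1e-3):
    """Meta-tailoring (first-order sketch): adapt on the unsupervised loss,
    then update the shared parameters so the adapted model fits the task."""
    # Inner loop: tailor a copy of the current model on the unsupervised loss.
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        unsup_loss(adapted, x).backward()
        inner_opt.step()

    # Outer step: evaluate the task loss with the adapted parameters and
    # apply its gradients to the shared parameters (first-order approximation).
    inner_opt.zero_grad()   # clear leftover unsupervised gradients
    outer_opt.zero_grad()
    task_loss(adapted(x), y).backward()
    for shared, local in zip(model.parameters(), adapted.parameters()):
        if local.grad is not None:
            shared.grad = local.grad.detach().clone()
    outer_opt.step()
```

Here `outer_opt` is assumed to be an optimizer over `model.parameters()`; in the paper the outer objective is differentiated through the inner adaptation, which the first-order copy above only approximates.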
Related papers
- TaskMet: Task-Driven Metric Learning for Model Learning [29.0053868393653]
Deep learning models are often deployed in downstream tasks that the training procedure may not be aware of.
We propose to take the task loss signal one level deeper than the parameters of the model and use it to learn the parameters of the loss function the model is trained on.
This approach does not alter the optimal prediction model itself, but rather changes the model learning to emphasize the information important for the downstream task.
arXiv Detail & Related papers (2023-12-08T18:59:03Z) - Task-Robust Pre-Training for Worst-Case Downstream Adaptation [62.05108162160981]
Pre-training has achieved remarkable success when transferred to downstream tasks.
This paper considers pre-training a model that guarantees a uniformly good performance over the downstream tasks.
arXiv Detail & Related papers (2023-06-21T07:43:23Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic
Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - Self-Supervised Learning via Maximum Entropy Coding [57.56570417545023]
We propose Maximum Entropy Coding (MEC) as a principled objective that explicitly optimizes the structure of the representation.
MEC learns a more generalizable representation than previous methods based on specific pretext tasks.
It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probe, but also semi-supervised classification, object detection, instance segmentation, and object tracking.
arXiv Detail & Related papers (2022-10-20T17:58:30Z) - Towards Sample-efficient Overparameterized Meta-learning [37.676063120293044]
An overarching goal in machine learning is to build a generalizable model with few samples.
This paper aims to demystify overparameterization for meta-learning.
We show that learning the optimal representation coincides with the problem of designing a task-aware regularization.
arXiv Detail & Related papers (2022-01-16T21:57:17Z) - Noether Networks: Meta-Learning Useful Conserved Quantities [46.88551280525578]
We propose Noether Networks: a new type of architecture where a meta-learned conservation loss is optimized inside the prediction function.
We show, theoretically and experimentally, that Noether Networks improve prediction quality, providing a general framework for discovering inductive biases in sequential problems.
arXiv Detail & Related papers (2021-12-06T19:27:43Z) - Improved Fine-tuning by Leveraging Pre-training Data: Theory and
Practice [52.11183787786718]
Fine-tuning a pre-trained model on the target data is widely used in many deep learning applications.
Recent studies have empirically shown that training from scratch can achieve final performance no worse than this pre-training strategy.
We propose a novel selection strategy to select a subset from pre-training data to help improve the generalization on the target task.
arXiv Detail & Related papers (2021-11-24T06:18:32Z) - Mixing between the Cross Entropy and the Expectation Loss Terms [89.30385901335323]
Cross entropy loss tends to focus on hard-to-classify samples during training.
We show that adding the expectation loss to the optimization goal helps the network achieve better accuracy (a hedged sketch of such a mixture appears after this list).
Our experiments show that the new training protocol improves performance across a diverse set of classification domains.
arXiv Detail & Related papers (2021-09-12T23:14:06Z) - Dissecting Supervised Contrastive Learning [24.984074794337157]
Minimizing cross-entropy over the softmax scores of a linear map composed with a high-capacity encoder is arguably the most popular choice for training neural networks on supervised learning tasks.
We show that one can directly optimize the encoder instead, to obtain equally (or even more) discriminative representations via a supervised variant of a contrastive objective.
arXiv Detail & Related papers (2021-02-17T15:22:38Z) - More Is More -- Narrowing the Generalization Gap by Adding
Classification Heads [8.883733362171032]
We introduce an architecture enhancement for existing neural network models based on input transformations, termed 'TransNet'.
Our model can be employed during training only and then pruned for prediction, resulting in an architecture equivalent to the base model.
arXiv Detail & Related papers (2021-02-09T16:30:33Z) - Learning from Failure: Training Debiased Classifier from Biased
Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
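As referenced above for 'Mixing between the Cross Entropy and the Expectation Loss Terms', below is a rough, hedged sketch of what such a mixed objective could look like, assuming the expectation loss is the expected misclassification probability under the softmax; the exact formulation and the mixing weight `alpha` are assumptions, not taken from that paper.

```python
import torch
import torch.nn.functional as F


def mixed_ce_expectation_loss(logits, targets, alpha=0.5):
    """Weighted mixture of cross entropy and an expectation-style loss term."""
    # Standard cross entropy over the softmax scores.
    ce = F.cross_entropy(logits, targets)
    # Expectation term: expected probability of misclassifying each sample,
    # i.e. 1 - p(correct class) under the softmax (an assumed formulation).
    probs = F.softmax(logits, dim=-1)
    p_correct = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    expectation = (1.0 - p_correct).mean()
    return (1.0 - alpha) * ce + alpha * expectation
```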
This list is automatically generated from the titles and abstracts of the papers in this site.