An Information-Geometric Distance on the Space of Tasks
- URL: http://arxiv.org/abs/2011.00613v2
- Date: Thu, 25 Feb 2021 03:33:30 GMT
- Title: An Information-Geometric Distance on the Space of Tasks
- Authors: Yansong Gao and Pratik Chaudhari
- Abstract summary: This paper prescribes a distance between learning tasks modeled as joint distributions on data and labels.
We develop an algorithm to compute the distance which iteratively transports the marginal on the data of the source task to that of the target task.
We perform thorough empirical validation and analysis across diverse image classification datasets to show that the coupled transfer distance correlates strongly with the difficulty of fine-tuning.
- Score: 31.359578768463752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper prescribes a distance between learning tasks modeled as joint
distributions on data and labels. Using tools in information geometry, the
distance is defined to be the length of the shortest weight trajectory on a
Riemannian manifold as a classifier is fitted on an interpolated task. The
interpolated task evolves from the source to the target task using an optimal
transport formulation. This distance, which we call the "coupled transfer
distance" can be compared across different classifier architectures. We develop
an algorithm to compute the distance which iteratively transports the marginal
on the data of the source task to that of the target task while updating the
weights of the classifier to track this evolving data distribution. We develop
theory to show that our distance captures the intuitive idea that a good
transfer trajectory is the one that keeps the generalization gap small during
transfer, in particular at the end on the target task. We perform thorough
empirical validation and analysis across diverse image classification datasets
to show that the coupled transfer distance correlates strongly with the
difficulty of fine-tuning.
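To make the algorithmic idea in the abstract concrete, here is a minimal sketch, not the authors' implementation: it replaces the optimal-transport interpolation of the data marginal with a plain mixture of source and target samples, uses synthetic Gaussian tasks that share a label set, and approximates the Fisher-Rao length element by the symmetrized KL divergence between the classifier's predictive distributions before and after each update. Every name and hyper-parameter below (make_task, the number of interpolation steps, the SGD settings) is an illustrative assumption.

```python
# Hedged sketch of a coupled-transfer-style distance; simplifications noted above.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_task(shift, n=512, d=20, c=4):
    """Synthetic classification task: Gaussian blobs with class-dependent shifts."""
    y = torch.randint(0, c, (n,))
    x = torch.randn(n, d)
    x[:, :c] += shift * F.one_hot(y, c).float()
    return x, y

x_src, y_src = make_task(shift=2.0)     # source task
x_tgt, y_tgt = make_task(shift=-2.0)    # target task (same label set, for simplicity)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def fit(x, y, steps):
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

def predictive(x):
    with torch.no_grad():
        return F.softmax(model(x), dim=-1)

fit(x_src, y_src, steps=100)            # start from a classifier fitted on the source task

length, T = 0.0, 10                     # T interpolation steps from source to target
for k in range(1, T + 1):
    t = k / T
    # Mixture interpolation of the data marginal (a stand-in for the OT interpolation):
    # each index draws its target sample with probability t, its source sample otherwise.
    mask = torch.rand(len(x_src)) < t
    x_t = torch.where(mask[:, None], x_tgt, x_src)
    y_t = torch.where(mask, y_tgt, y_src)

    p_before = predictive(x_t)
    fit(x_t, y_t, steps=25)             # update the weights to track the interpolated task
    p_after = predictive(x_t)

    # Length element on the statistical manifold, approximated by the square root of
    # the symmetrized KL between consecutive predictive distributions on current data.
    skl = 0.5 * ((p_before * (p_before / p_after).log()).sum(-1)
                 + (p_after * (p_after / p_before).log()).sum(-1)).mean()
    length += math.sqrt(max(skl.item(), 0.0))

print(f"approximate coupled-transfer distance: {length:.3f}")
```

The paper's coupled transfer distance additionally couples the transport plan with the weight trajectory and measures length under the Fisher information metric, so the printed number only illustrates the bookkeeping, not the exact quantity defined above.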
Related papers
- DeTra: A Unified Model for Object Detection and Trajectory Forecasting [68.85128937305697]
Our approach formulates the union of the two tasks as a trajectory refinement problem.
To tackle this unified task, we design a refinement transformer that infers the presence, pose, and multi-modal future behaviors of objects.
In our experiments, we observe that our model outperforms the state-of-the-art on the Argoverse 2 Sensor and Open datasets.
arXiv Detail & Related papers (2024-06-06T18:12:04Z) - Geometrically Aligned Transfer Encoder for Inductive Transfer in Regression Tasks [5.038936775643437]
We propose a novel transfer technique based on differential geometry, namely the Geometrically Aligned Transfer Encoder (GATE).
We find a proper diffeomorphism between pairs of tasks to ensure that every arbitrary point maps to a locally flat coordinate in the overlapping region, allowing the transfer of knowledge from the source to the target data.
GATE outperforms conventional methods and exhibits stable behavior in both the latent space and extrapolation regions for various molecular graph datasets.
arXiv Detail & Related papers (2023-10-10T07:11:25Z) - Optimal transfer protocol by incremental layer defrosting [66.76153955485584]
Transfer learning is a powerful tool enabling model training with limited amounts of data.
The simplest transfer learning protocol is based on "freezing" the feature-extractor layers of a network pre-trained on a data-rich source task.
We show that this protocol is often sub-optimal and the largest performance gain may be achieved when smaller portions of the pre-trained network are kept frozen.
arXiv Detail & Related papers (2023-03-02T17:32:11Z) - Transferability Estimation Based On Principal Gradient Expectation [68.97403769157117]
A good measure of cross-task transferability should be consistent with the actual transfer results while remaining self-consistent.
Existing transferability metrics are estimated for a particular model by comparing the source and target tasks.
We propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks.
arXiv Detail & Related papers (2022-11-29T15:33:02Z) - Curriculum Reinforcement Learning using Optimal Transport via Gradual Domain Adaptation [46.103426976842336]
Curriculum Reinforcement Learning (CRL) aims to create a sequence of tasks, starting from easy ones and gradually learning towards difficult tasks.
In this work, we focus on the idea of framing CRL as interpolations between a source (auxiliary) and a target task distribution.
Inspired by the insights from gradual domain adaptation in semi-supervised learning, we create a natural curriculum by breaking down the potentially large task distributional shift in CRL into smaller shifts.
arXiv Detail & Related papers (2022-10-18T22:33:33Z) - InfoOT: Information Maximizing Optimal Transport [58.72713603244467]
InfoOT is an information-theoretic extension of optimal transport.
It maximizes the mutual information between domains while minimizing geometric distances.
This formulation yields a new projection method that is robust to outliers and generalizes to unseen samples.
arXiv Detail & Related papers (2022-10-06T18:55:41Z) - Wasserstein Task Embedding for Measuring Task Similarities [14.095478018850374]
Measuring similarities between different tasks is critical in a broad spectrum of machine learning problems.
We leverage optimal transport theory to define a novel task embedding for supervised classification.
We show that the proposed embedding leads to a significantly faster comparison of tasks compared to related approaches.
arXiv Detail & Related papers (2022-08-24T18:11:04Z) - Ranking Distance Calibration for Cross-Domain Few-Shot Learning [91.22458739205766]
Recent progress in few-shot learning promotes a more realistic cross-domain setting.
Due to the domain gap and disjoint label spaces between source and target datasets, their shared knowledge is extremely limited.
We employ a re-ranking process for calibrating a target distance matrix by discovering the reciprocal k-nearest neighbours within the task.
arXiv Detail & Related papers (2021-12-01T03:36:58Z) - Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution to the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z) - Geometric Dataset Distances via Optimal Transport [15.153110906331733]
We propose an alternative notion of distance between datasets that (i) is model-agnostic, (ii) does not involve training, (iii) can compare datasets even if their label sets are completely disjoint and (iv) has solid theoretical footing.
This distance relies on optimal transport, which provides it with rich geometry awareness, interpretable correspondences and well-understood properties.
Our results show that this novel distance provides a meaningful comparison of datasets and correlates well with transfer learning hardness across various experimental settings and datasets (see the sketch after this list).
arXiv Detail & Related papers (2020-02-07T17:51:26Z)
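The last entry above is closely related to the coupled transfer distance, and its idea is easy to prototype. The following is a hedged sketch of an OT-based distance between labeled datasets, not the OTDD implementation: the label-to-label Wasserstein term of the original method is replaced by squared distances between class means, entropic OT is solved with a short hand-rolled Sinkhorn loop, and the data, function names, and constants are illustrative assumptions.

```python
# Hedged sketch of an OT-based dataset distance; simplifications noted above.
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(center, n=200, d=8, c=3):
    """Synthetic labeled dataset: Gaussian blobs with class-dependent shifts."""
    y = rng.integers(0, c, n)
    x = rng.normal(size=(n, d)) + center * np.eye(c, d)[y]
    return x, y

def class_means(x, y, c):
    return np.stack([x[y == k].mean(axis=0) for k in range(c)])

def sinkhorn_cost(M, reg=0.05, iters=500):
    """Entropic OT cost between uniform marginals for cost matrix M."""
    n, m = M.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-(M / M.max()) / reg)    # rescale the cost to avoid underflow in exp
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]     # approximate transport plan
    return float((P * M).sum())

c = 3
xs, ys = make_dataset(center=2.0)       # "source" dataset
xt, yt = make_dataset(center=-2.0)      # "target" dataset

# Ground cost = squared Euclidean distance between features plus a crude label term:
# the squared distance between the class means that the two labels correspond to.
feat = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)
mu_s, mu_t = class_means(xs, ys, c), class_means(xt, yt, c)
lab = ((mu_s[:, None, :] - mu_t[None, :, :]) ** 2).sum(-1)[ys][:, yt]

distance = np.sqrt(sinkhorn_cost(feat + lab))
print(f"approximate OT dataset distance: {distance:.3f}")
```

Swapping the class-mean term for a proper Wasserstein distance between class-conditional distributions would bring this closer to the cited method.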