Learning representations that are closed-form Monge mapping optimal with
application to domain adaptation
- URL: http://arxiv.org/abs/2305.07500v2
- Date: Fri, 11 Aug 2023 10:49:52 GMT
- Title: Learning representations that are closed-form Monge mapping optimal with
application to domain adaptation
- Authors: Oliver Struckmeier, Ievgen Redko, Anton Mallasto, Karol Arndt, Markus
Heinonen, Ville Kyrki
- Abstract summary: Optimal transport (OT) is a powerful tool used to compare and align probability measures following the least effort principle.
Despite its widespread use in machine learning (ML), the OT problem still carries a significant computational burden.
We propose to tackle these challenges using representation learning.
- Score: 24.258758784011572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimal transport (OT) is a powerful geometric tool used to compare
and align probability measures following the least effort principle. Despite
its widespread use in machine learning (ML), the OT problem still carries a
significant computational burden, while at the same time suffering from the
curse of dimensionality for measures supported on general high-dimensional
spaces. In this paper, we propose to tackle these challenges using
representation learning. In particular, we seek to learn an embedding space
such that the samples of the two input measures become alignable in it with a
simple affine mapping that can be computed efficiently in closed form. We then
show that such an approach leads to results comparable to solving the original
OT problem when applied to the transfer learning task on which many OT
baselines were previously evaluated, in both homogeneous and heterogeneous
domain adaptation (DA) settings.
The code for our contribution is available at
\url{https://github.com/Oleffa/LaOT}.
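To make the closed-form claim concrete, below is a minimal NumPy sketch of the
affine Monge map between Gaussian approximations of two embedded samples, the
kind of alignment the abstract describes. The function name, the covariance
regularization, and the Gaussian fit are illustrative assumptions, not the
authors' implementation (see the linked repository for that).

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_monge_map(X_src, X_tgt, eps=1e-6):
    """Closed-form affine Monge map between Gaussian fits of two samples:
    T(x) = m_t + A (x - m_s), where
    A = C_s^{-1/2} (C_s^{1/2} C_t C_s^{1/2})^{1/2} C_s^{-1/2}."""
    m_s, m_t = X_src.mean(axis=0), X_tgt.mean(axis=0)
    d = X_src.shape[1]
    C_s = np.cov(X_src, rowvar=False) + eps * np.eye(d)  # source covariance
    C_t = np.cov(X_tgt, rowvar=False) + eps * np.eye(d)  # target covariance
    C_s_half = np.real(sqrtm(C_s))
    C_s_half_inv = np.linalg.inv(C_s_half)
    A = C_s_half_inv @ np.real(sqrtm(C_s_half @ C_t @ C_s_half)) @ C_s_half_inv
    return lambda X: m_t + (X - m_s) @ A  # A is symmetric, so A == A.T

# Usage: align source embeddings to the target embedding distribution.
rng = np.random.default_rng(0)
Z_src = rng.normal(size=(500, 8))
Z_tgt = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8)) + 2.0
T = gaussian_monge_map(Z_src, Z_tgt)
Z_aligned = T(Z_src)
```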
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport (a minimal Gromov-Wasserstein sketch follows this list).
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain (a minimal Mean Teacher sketch follows this list).
Our method consistently improves on the baseline model by 50%/47% and even matches the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Bi-level Alignment for Cross-Domain Crowd Counting [113.78303285148041]
Current methods rely on external data for training an auxiliary task or apply an expensive coarse-to-fine estimation.
We develop a new adversarial learning based method, which is simple and efficient to apply.
We evaluate our approach on five real-world crowd counting benchmarks, where we outperform existing approaches by a large margin.
arXiv Detail & Related papers (2022-05-12T02:23:25Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning (a minimal OGD sketch follows this list).
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Efficient Robust Optimal Transport with Application to Multi-Label
Classification [12.521494095948068]
We model the feature-feature relationship via a symmetric positive semi-definite Mahalanobis metric in the OT cost function.
We view the resulting optimization problem as a non-linear OT problem, which we solve using the Frank-Wolfe algorithm.
Empirical results on discriminative learning tasks, such as tag prediction and multi-class classification, illustrate the good performance of our approach (a minimal Mahalanobis-cost OT sketch follows this list).
arXiv Detail & Related papers (2020-10-22T16:43:52Z)
- Robust Optimal Transport with Applications in Generative Modeling and
Domain Adaptation [120.69747175899421]
Optimal Transport (OT) distances such as the Wasserstein distance have been used in several areas, including GANs and domain adaptation.
We propose a computationally-efficient dual form of the robust OT optimization that is amenable to modern deep learning applications.
Our approach can train state-of-the-art GAN models on noisy datasets corrupted with outlier distributions.
arXiv Detail & Related papers (2020-10-12T17:13:40Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Regularized Optimal Transport is Ground Cost Adversarial [34.81915836064636]
We show that regularization of the optimal transport problem can be interpreted as ground cost adversarial (a minimal entropic-regularization sketch follows this list).
This gives access to a robust dissimilarity measure on the ground space, which can in turn be used in other applications.
arXiv Detail & Related papers (2020-02-10T17:28:35Z)
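For the Gromov-Monge Gap entry above: a minimal sketch of quadratic
(Gromov-Wasserstein) optimal transport using the POT library. The random data
and uniform weights are placeholder assumptions; this illustrates the quadratic
OT primitive, not the paper's disentanglement objective.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))    # samples in one space
Y = rng.normal(size=(80, 10))   # samples in a different space

# Quadratic OT compares intra-space distance structures, so the two
# samples may live in spaces of different dimension.
C1 = ot.dist(X, X)              # pairwise squared distances within X
C2 = ot.dist(Y, Y)              # pairwise squared distances within Y
p, q = ot.unif(len(X)), ot.unif(len(Y))

# Coupling that best preserves pairwise structure across the two spaces
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
```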
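For the Mean Teacher entry: a minimal PyTorch sketch of the generic Mean
Teacher mechanism, an EMA teacher supervising a student through a consistency
loss on unlabeled target data. The tiny linear models and squared-error
consistency are illustrative assumptions; the paper applies the idea to
keypoint-based registration models.

```python
import torch
import torch.nn as nn

# Tiny placeholder networks; the paper uses registration models instead.
student = nn.Linear(16, 3)
teacher = nn.Linear(16, 3)
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

def ema_update(teacher, student, alpha=0.999):
    # Mean Teacher: teacher weights track an exponential moving average
    # of the student weights; only the student receives gradients.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

x_tgt = torch.randn(8, 16)        # unlabeled target-domain batch
pred_s = student(x_tgt)
with torch.no_grad():
    pred_t = teacher(x_tgt)
consistency = ((pred_s - pred_t) ** 2).mean()  # consistency loss
consistency.backward()            # gradients flow into the student only
ema_update(teacher, student)      # then the teacher tracks the student
```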
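For the pairwise learning entry: a minimal NumPy sketch of online gradient
descent on a pairwise hinge loss, a standard AUC surrogate in which the loss
depends on a pair of instances. The step-size schedule and synthetic data are
assumptions, not the paper's algorithms.

```python
import numpy as np

def ogd_pairwise(pairs, dim, eta=0.1):
    """Online gradient descent on the pairwise hinge loss
    l(w; x_pos, x_neg) = max(0, 1 - w.(x_pos - x_neg))."""
    w = np.zeros(dim)
    for t, (x_pos, x_neg) in enumerate(pairs, start=1):
        d = x_pos - x_neg
        if 1.0 - w @ d > 0.0:            # inside the margin: subgradient -d
            w += (eta / np.sqrt(t)) * d  # decaying step size
    return w

# Usage on synthetic pairs streaming in one at a time.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(200, 4))
neg = rng.normal(-1.0, 1.0, size=(200, 4))
w = ogd_pairwise(zip(pos, neg), dim=4)
```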
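For the efficient robust OT entry: a minimal sketch of optimal transport under
a Mahalanobis ground cost using POT. Here the metric matrix is fixed at random
for illustration, whereas the paper learns it jointly with the transport plan
via the Frank-Wolfe algorithm.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
Y = rng.normal(size=(70, 6))

# Symmetric PSD matrix defining a Mahalanobis ground metric; fixed at
# random here, whereas the paper learns it with Frank-Wolfe.
L = rng.normal(size=(6, 6))
M = L @ L.T

diff = X[:, None, :] - Y[None, :, :]            # (50, 70, 6) differences
C = np.einsum('ijk,kl,ijl->ij', diff, M, diff)  # costs d_ij^T M d_ij

a, b = ot.unif(len(X)), ot.unif(len(Y))
plan = ot.emd(a, b, C)    # exact OT under the Mahalanobis ground cost
```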
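For the regularized OT entry: a minimal sketch of entropy-regularized OT
solved with Sinkhorn iterations in POT, the setting whose ground-cost
adversarial interpretation the paper establishes. The data and regularization
strength are arbitrary assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
Y = rng.normal(size=(50, 3))

C = ot.dist(X, Y)                  # squared-Euclidean ground cost
a, b = ot.unif(len(X)), ot.unif(len(Y))

# Entropy-regularized OT; per the paper, this regularized problem can be
# read as OT under an adversarially perturbed ground cost.
plan = ot.sinkhorn(a, b, C, reg=0.1)
```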