Revisiting invariances and introducing priors in Gromov-Wasserstein
distances
- URL: http://arxiv.org/abs/2307.10093v1
- Date: Wed, 19 Jul 2023 16:00:29 GMT
- Title: Revisiting invariances and introducing priors in Gromov-Wasserstein
distances
- Authors: Pinar Demetci, Quang Huy Tran, Ievgen Redko, Ritambhara Singh
- Abstract summary: We propose a new optimal transport-based distance, called Augmented Gromov-Wasserstein.
It allows for some control over the level of rigidity to transformations.
It also incorporates feature alignments, enabling us to better leverage prior knowledge on the input data for improved performance.
- Score: 8.724900618917095
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Gromov-Wasserstein distance has found many applications in machine learning
due to its ability to compare measures across metric spaces and its invariance
to isometric transformations. However, in certain applications this invariance
can be too permissive and therefore undesirable. Moreover, the
Gromov-Wasserstein distance solely considers pairwise sample similarities in
input datasets, disregarding the raw feature representations. We propose a new
optimal transport-based distance, called Augmented Gromov-Wasserstein, that
allows for some control over the level of rigidity to transformations. It also
incorporates feature alignments, enabling us to better leverage prior knowledge
on the input data for improved performance. We present theoretical insights
into the proposed metric. We then demonstrate its usefulness for single-cell
multi-omic alignment tasks and a transfer learning scenario in machine
learning.
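For intuition on the invariance the abstract refers to: the GW distance compares only intra-space pairwise distance matrices, which any isometry (e.g. a rotation) leaves unchanged. A minimal pure-Python illustration of that fact, not the paper's method:

```python
import math

def pdist(points):
    # pairwise Euclidean distance matrix of a point set
    return [[math.dist(a, b) for b in points] for a in points]

def rotate(points, theta):
    # planar rotation: an isometry of R^2
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
Y = rotate(X, math.pi / 3)

DX, DY = pdist(X), pdist(Y)
# GW only sees these matrices, so the rotated copy is indistinguishable from X
assert all(math.isclose(DX[i][j], DY[i][j])
           for i in range(3) for j in range(3))
```

The Augmented GW distance proposed here adds a knob to relax exactly this insensitivity, plus a feature-alignment term.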
Related papers
- Scalable unsupervised alignment of general metric and non-metric structures [21.29255788365408]
Aligning data from different domains is a fundamental problem in machine learning with broad applications across very different areas.
We learn a related, scalable linear assignment problem (LAP) whose solution also minimizes the corresponding quadratic assignment problem (QAP).
We evaluate our approach on synthetic and real datasets from single-cell multiomics and neural latent spaces.
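For readers unfamiliar with the LAP mentioned above: it seeks a permutation minimizing a linear cost. A brute-force toy sketch, purely illustrative and unrelated to the paper's scalable solver:

```python
import itertools

def solve_lap(cost):
    # brute-force linear assignment: minimize total cost over all permutations
    n = len(cost)
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment = solve_lap(cost)  # row i is matched to column assignment[i]
```

Real solvers (e.g. the Hungarian algorithm) do this in polynomial time; the brute force here is only to make the objective concrete.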
arXiv Detail & Related papers (2024-06-19T12:54:03Z) - Graph-based Virtual Sensing from Sparse and Partial Multivariate
Observations [22.567497617912046]
We introduce a novel graph-based methodology to exploit such relationships and design a graph deep learning architecture, named GgNet, implementing the framework.
The proposed approach relies on propagating information over a nested graph structure that is used to learn dependencies between variables as well as locations.
GgNet is extensively evaluated under different virtual sensing scenarios, demonstrating higher reconstruction accuracy compared to the state-of-the-art.
arXiv Detail & Related papers (2024-02-19T23:22:30Z) - Gromov-Wasserstein-like Distances in the Gaussian Mixture Models Space [5.052293146674793]
The Gromov-Wasserstein (GW) distance is frequently used in machine learning to compare distributions across distinct metric spaces.
Recently, a novel Wasserstein distance specifically tailored for Gaussian mixture models and known as MW (mixture Wasserstein) has been introduced.
This paper aims to extend MW by introducing new Gromov-type distances.
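As background for Wasserstein distances between Gaussians, which MW builds on: in one dimension the 2-Wasserstein distance between two Gaussians has a closed form, sqrt((m1 - m2)^2 + (s1 - s2)^2). A small sketch of this standard identity (not the paper's new Gromov-type distances):

```python
import math

def w2_gaussian_1d(m1, s1, m2, s2):
    # closed-form 2-Wasserstein distance between N(m1, s1^2) and N(m2, s2^2)
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

d = w2_gaussian_1d(0.0, 1.0, 3.0, 1.0)  # pure translation by 3
```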
arXiv Detail & Related papers (2023-10-17T13:22:36Z) - GaitMorph: Transforming Gait by Optimally Transporting Discrete Codes [6.85316573653194]
We propose GaitMorph, a novel method to modify the walking variation for an input gait sequence.
Our method entails the training of a high-compression model for gait skeleton sequences that leverages unlabelled data.
We propose a method based on optimal transport theory to learn latent transport maps on the discrete codebook that morph gait sequences between variations.
arXiv Detail & Related papers (2023-07-27T09:09:28Z) - Particle-Based Score Estimation for State Space Model Learning in
Autonomous Driving [62.053071723903834]
Multi-object state estimation is a fundamental problem for robotic applications.
We consider learning maximum-likelihood parameters using particle methods.
We apply our method to real data collected from autonomous vehicles.
arXiv Detail & Related papers (2022-12-14T01:21:05Z) - Gromov-Wasserstein Autoencoders [36.656435006076975]
We propose a novel representation learning method, Gromov-Wasserstein Autoencoders (GWAE)
Instead of a likelihood-based objective, GWAE models have a trainable prior optimized by minimizing the Gromov-Wasserstein (GW) metric.
By restricting the family of the trainable prior, we can introduce meta-priors to control latent representations for downstream tasks.
arXiv Detail & Related papers (2022-09-15T02:34:39Z) - Learning Instance-Specific Augmentations by Capturing Local Invariances [62.70897571389785]
InstaAug is a method for automatically learning input-specific augmentations from data.
We empirically demonstrate that InstaAug learns meaningful input-dependent augmentations for a wide range of transformation classes.
arXiv Detail & Related papers (2022-05-31T18:38:06Z) - Hyperbolic Vision Transformers: Combining Improvements in Metric
Learning [116.13290702262248]
We propose a new hyperbolic-based model for metric learning.
At the core of our method is a vision transformer with output embeddings mapped to hyperbolic space.
We evaluate the proposed model with six different formulations on four datasets.
arXiv Detail & Related papers (2022-03-21T09:48:23Z) - DA-Transformer: Distance-aware Transformer [87.20061062572391]
In this paper, we propose DA-Transformer, a distance-aware Transformer that can exploit the real distance.
arXiv Detail & Related papers (2020-10-14T10:09:01Z) - Multi-scale Interactive Network for Salient Object Detection [91.43066633305662]
We propose the aggregate interaction modules to integrate the features from adjacent levels.
To obtain more efficient multi-scale features, the self-interaction modules are embedded in each decoder unit.
Experimental results on five benchmark datasets demonstrate that the proposed method without any post-processing performs favorably against 23 state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-17T15:41:37Z) - Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.