Label Propagation Through Optimal Transport
- URL: http://arxiv.org/abs/2110.01446v1
- Date: Fri, 1 Oct 2021 11:25:55 GMT
- Title: Label Propagation Through Optimal Transport
- Authors: Mourad El Hamri, Younès Bennani, Issam Falih
- Abstract summary: We tackle the transductive semi-supervised learning problem that aims to obtain label predictions for the given unlabeled data points.
Our proposed approach is based on optimal transport, a mathematical theory that has been successfully used to address various machine learning problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we tackle the transductive semi-supervised learning problem
that aims to obtain label predictions for the given unlabeled data points
according to Vapnik's principle. Our proposed approach is based on optimal
transport, a mathematical theory that has been successfully used to address
various machine learning problems, and is starting to attract renewed interest
in the semi-supervised learning community. The proposed approach, Optimal
Transport Propagation (OTP), performs label propagation incrementally through
the edges of a complete bipartite edge-weighted graph, whose affinity
matrix is constructed from the optimal transport plan between empirical
measures defined on labeled and unlabeled data. OTP ensures highly certain
predictions by controlling the propagation process with a certainty
score based on Shannon's entropy. We also provide a convergence analysis of our
algorithm. Experiments show the superiority of the proposed approach over
the state-of-the-art. We make our code publicly available.
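Below is a minimal, illustrative sketch of the idea described in the abstract, not the authors' released implementation: an entropy-regularized transport plan between the labeled and unlabeled empirical measures weights the edges of the bipartite graph, labels are propagated along those edges, and Shannon entropy yields a certainty score that decides which predictions to accept in a round. The helper name, the uniform measures, and the use of the POT library are my assumptions.

```python
import numpy as np
import ot  # POT (Python Optimal Transport) -- assumed dependency, not the authors' code


def otp_round(X_l, y_l, X_u, n_classes, reg=0.1, tau=0.5):
    """One simplified OTP-style propagation round (illustrative sketch only).

    X_l, y_l : labeled points (n_l, d) and their integer labels (n_l,)
    X_u      : unlabeled points (n_u, d)
    reg      : entropic regularization strength for Sinkhorn
    tau      : acceptance threshold on the certainty score
    Returns predicted labels for X_u and a mask of "certain" predictions.
    """
    # Uniform empirical measures on the labeled and unlabeled samples.
    a = np.full(len(X_l), 1.0 / len(X_l))
    b = np.full(len(X_u), 1.0 / len(X_u))

    # Squared-Euclidean cost matrix and entropy-regularized transport plan.
    M = ot.dist(X_l, X_u)
    P = ot.sinkhorn(a, b, M, reg)               # shape (n_l, n_u)

    # Column j of the plan weights the edges from every labeled point to
    # unlabeled point j in the bipartite graph; aggregate that mass per class.
    F = np.zeros((len(X_u), n_classes))
    for c in range(n_classes):
        F[:, c] = P[y_l == c, :].sum(axis=0)
    F /= F.sum(axis=1, keepdims=True)           # class-membership probabilities

    # Shannon entropy of each row gives a certainty score in [0, 1].
    H = -(F * np.log(F + 1e-12)).sum(axis=1)
    certainty = 1.0 - H / np.log(n_classes)

    return F.argmax(axis=1), certainty >= tau
```

In the full incremental process, the unlabeled points whose certainty exceeds the threshold would be moved into the labeled set with their predicted labels and the round repeated until every point is labeled.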
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Information-Theoretic Generalization Bounds for Transductive Learning and its Applications [16.408850979966623]
We develop generalization bounds for transductive learning algorithms in the context of information theory and PAC-Bayesian theory.
Our theoretical results are validated on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-11-08T09:48:42Z) - Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior:
From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and empirical case study under which conditions and to what extent these guarantees for meta-learning improve upon PAC-Bayesian per-task learning bounds.
arXiv Detail & Related papers (2022-11-14T08:51:04Z) - Sequential Information Design: Markov Persuasion Process and Its
Efficient Reinforcement Learning [156.5667417159582]
This paper proposes a novel model of sequential information design, namely the Markov persuasion processes (MPPs)
Planning in MPPs faces the unique challenge of finding a signaling policy that is simultaneously persuasive to the myopic receivers and induces the sender's optimal long-term cumulative utility.
We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles.
arXiv Detail & Related papers (2022-02-22T05:41:43Z) - GAN Estimation of Lipschitz Optimal Transport Maps [0.0]
This paper introduces the first statistically consistent estimator of the optimal transport map between two probability distributions, based on neural networks.
We demonstrate that, under regularity assumptions, the obtained generator converges uniformly to the optimal transport map as the sample size increases to infinity.
In contrast to previous work that addresses either statistical guarantees or practicality, we provide an expressive and feasible estimator which paves the way for optimal transport applications.
arXiv Detail & Related papers (2022-02-16T10:15:56Z) - Inductive Semi-supervised Learning Through Optimal Transport [0.0]
The proposed approach, called Optimal Transport Induction (OTI), efficiently extends an optimal transport based transductive algorithm (OTP) to inductive tasks.
A series of experiments are conducted on several datasets in order to compare the proposed approach with state-of-the-art methods.
arXiv Detail & Related papers (2021-12-14T09:52:01Z) - High-Dimensional Bayesian Optimisation with Variational Autoencoders and
Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z) - Incremental Semi-Supervised Learning Through Optimal Transport [0.0]
We propose a novel approach for transductive semi-supervised learning, using a complete bipartite edge-weighted graph.
The proposed approach uses the regularized optimal transport between empirical measures defined on labelled and unlabelled data points in order to obtain an affinity matrix from the optimal transport plan.
arXiv Detail & Related papers (2021-03-22T15:31:53Z) - Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, CycProp updates the node embeddings learned by the GNN module with the augmented information obtained from label propagation.
arXiv Detail & Related papers (2020-11-24T02:55:40Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z) - Statistical Optimal Transport posed as Learning Kernel Embedding [0.0]
This work takes the novel approach of posing statistical Optimal Transport (OT) as that of learning the transport plan's kernel mean embedding from sample based estimates of marginal embeddings.
A key result is that, under very mild conditions, $\epsilon$-optimal recovery of the transport plan as well as the Barycentric-projection based transport map is possible with a sample complexity that is completely dimension-free (a small numerical sketch of the barycentric projection follows this list).
arXiv Detail & Related papers (2020-02-08T14:58:53Z)
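The barycentric projection mentioned in the last entry above admits a one-line realization: given a transport plan P between source samples and target samples Y, each source point is mapped to the P-weighted average of the targets, T(x_i) = sum_j P_ij y_j / sum_j P_ij. The tiny numpy sketch below uses toy numbers of my own and is only meant to make the formula concrete.

```python
import numpy as np


def barycentric_map(P, Y):
    """Barycentric projection of a transport plan.

    P : (n, m) transport plan between n source samples and m target samples.
    Y : (m, d) target samples.
    Returns the (n, d) images T(x_i) = sum_j P[i, j] * Y[j] / sum_j P[i, j].
    """
    return (P @ Y) / P.sum(axis=1, keepdims=True)


# Toy example: a plan that sends each of 3 source points mostly to one of 2 targets.
P = np.array([[0.30, 0.03],
              [0.02, 0.31],
              [0.01, 0.33]])
Y = np.array([[0.0, 0.0],
              [1.0, 1.0]])
print(barycentric_map(P, Y))   # rows close to (0, 0), (1, 1), (1, 1)
```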