Incremental Semi-Supervised Learning Through Optimal Transport
- URL: http://arxiv.org/abs/2103.11937v1
- Date: Mon, 22 Mar 2021 15:31:53 GMT
- Title: Incremental Semi-Supervised Learning Through Optimal Transport
- Authors: Mourad El Hamri, Younès Bennani
- Abstract summary: We propose a novel approach to transductive semi-supervised learning, using a complete bipartite edge-weighted graph.
The proposed approach uses regularized optimal transport between empirical measures defined on labelled and unlabelled data points to obtain an affinity matrix from the optimal transport plan.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semi-supervised learning provides an effective paradigm for leveraging
unlabeled data to improve a model's performance. Among the many strategies
proposed, graph-based methods have shown excellent properties, in particular
because they allow transductive tasks to be solved directly, in accordance with
Vapnik's principle, and they can be extended efficiently to inductive tasks. In
this paper, we propose a novel approach to transductive semi-supervised
learning, using a complete bipartite edge-weighted graph. The proposed approach
uses regularized optimal transport between empirical measures defined on
labelled and unlabelled data points to obtain an affinity matrix from the
optimal transport plan. This matrix is then used to propagate labels through
the vertices of the graph in an incremental process that ensures the certainty
of the predictions by incorporating a certainty score based on Shannon's
entropy. We also analyze the convergence of our approach and derive an
efficient way to extend it to out-of-sample data. Experimental analysis
compares the proposed approach with other label propagation algorithms on 12
benchmark datasets, on which we surpass state-of-the-art results. We release
our code.
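The pipeline described in the abstract (entropy-regularized optimal transport between uniform empirical measures on labelled and unlabelled points, an affinity matrix read off the transport plan, and a Shannon-entropy certainty score on the propagated labels) can be sketched roughly as follows. This is a minimal illustration, not the authors' released implementation: the function names, the Sinkhorn solver, the squared-Euclidean cost, and the one-shot (non-incremental) propagation are all assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, reg=1.0, n_iters=200):
    """Entropy-regularized OT plan between histograms a and b with cost C.

    reg must be large enough relative to C to avoid underflow of K.
    """
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)  # scale to match column marginal b
        u = a / (K @ v)    # scale to match row marginal a
    return u[:, None] * K * v[None, :]

def propagate_labels(X_l, y_l, X_u, n_classes, reg=1.0):
    """One propagation step: OT plan as affinity, entropy as certainty."""
    # Uniform empirical measures on labelled / unlabelled points.
    a = np.full(len(X_l), 1.0 / len(X_l))
    b = np.full(len(X_u), 1.0 / len(X_u))
    # Squared-Euclidean cost matrix (assumed; any ground cost works).
    C = ((X_l[:, None, :] - X_u[None, :, :]) ** 2).sum(-1)
    T = sinkhorn(a, b, C, reg)               # transport plan = affinity matrix
    W = T / T.sum(axis=0, keepdims=True)     # normalize over labelled points
    Y = np.eye(n_classes)[y_l]               # one-hot labels
    P = W.T @ Y                              # class scores for unlabelled points
    # Shannon-entropy certainty: low entropy -> confident prediction.
    H = -(P * np.log(P + 1e-12)).sum(axis=1)
    certainty = 1.0 - H / np.log(n_classes)
    return P.argmax(axis=1), certainty
```

In the paper's incremental scheme, only the most certain predictions would be committed at each step and the measures recomputed; the sketch above performs a single pass for clarity.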
Related papers
- Graph-Based Semi-Supervised Segregated Lipschitz Learning [0.21847754147782888]
This paper presents an approach to semi-supervised learning for the classification of data using Lipschitz learning on graphs.
We develop a graph-based semi-supervised learning framework that leverages the properties of the infinity Laplacian to propagate labels in a dataset where only a few samples are labeled.
arXiv Detail & Related papers (2024-11-05T17:16:56Z)
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, because an instance may contain multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- PG-LBO: Enhancing High-Dimensional Bayesian Optimization with Pseudo-Label and Gaussian Process Guidance [31.585328335396607]
Current mainstream methods overlook the potential of utilizing a pool of unlabeled data to construct the latent space.
We propose a novel method to effectively utilize unlabeled data with the guidance of labeled data.
Our proposed method outperforms existing VAE-BO algorithms in various optimization scenarios.
arXiv Detail & Related papers (2023-12-28T11:57:58Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Inductive Semi-supervised Learning Through Optimal Transport [0.0]
The proposed approach, called Optimal Transport Induction (OTI), efficiently extends an optimal-transport-based transductive algorithm (OTP) to inductive tasks.
A series of experiments are conducted on several datasets in order to compare the proposed approach with state-of-the-art methods.
arXiv Detail & Related papers (2021-12-14T09:52:01Z)
- Label Propagation Through Optimal Transport [0.0]
We tackle the transductive semi-supervised learning problem that aims to obtain label predictions for the given unlabeled data points.
Our proposed approach is based on optimal transport, a mathematical theory that has been successfully used to address various machine learning problems.
arXiv Detail & Related papers (2021-10-01T11:25:55Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.