Graph Laplacian for Semi-Supervised Learning
- URL: http://arxiv.org/abs/2301.04956v2
- Date: Wed, 19 Apr 2023 12:51:31 GMT
- Title: Graph Laplacian for Semi-Supervised Learning
- Authors: Or Streicher and Guy Gilboa
- Abstract summary: We propose a new type of graph-Laplacian adapted for Semi-Supervised Learning (SSL) problems.
It is based on both density and contrastive measures and allows the encoding of the labeled data directly in the operator.
- Score: 8.477619837043214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semi-supervised learning is highly useful in common scenarios where labeled
data is scarce but unlabeled data is abundant. The graph (or nonlocal)
Laplacian is a fundamental smoothing operator for solving various learning
tasks. For unsupervised clustering, a spectral embedding is often used, based
on graph-Laplacian eigenvectors. For semi-supervised problems, the common
approach is to solve a constrained optimization problem, regularized by a
Dirichlet energy, based on the graph-Laplacian. However, as supervision
decreases, Dirichlet optimization becomes suboptimal. We therefore would like
to obtain a smooth transition between unsupervised clustering and
low-supervised graph-based classification. In this paper, we propose a new type
of graph-Laplacian which is adapted for Semi-Supervised Learning (SSL)
problems. It is based on both density and contrastive measures and allows the
encoding of the labeled data directly in the operator. Thus, we can successfully
perform semi-supervised learning using spectral clustering. The benefits
of our approach are illustrated for several SSL problems.
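For reference, the sketch below implements the classical Dirichlet-energy baseline discussed in the abstract, not the paper's proposed density/contrastive operator: a Gaussian-affinity graph Laplacian with harmonic interpolation of the labels. The kernel choice, bandwidth sigma, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplace_learning(X, labeled_idx, labels, sigma=1.0):
    """Harmonic (Dirichlet-energy) label propagation on a Gaussian graph."""
    # Dense Gaussian affinity graph; a kNN construction is also common.
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian

    n = X.shape[0]
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    Y = np.eye(labels.max() + 1)[labels]    # one-hot labels for labeled_idx

    # Minimize u^T L u subject to u = Y on labeled nodes; the harmonic
    # solution satisfies L_uu u_u = -L_ul Y_l on the unlabeled block.
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    L_ul = L[np.ix_(unlabeled, labeled_idx)]
    U = np.linalg.solve(L_uu, -L_ul @ Y)
    return unlabeled, U.argmax(axis=1)      # predicted classes for unlabeled
```

At very low label rates this harmonic solution flattens out away from the labeled points, which is the suboptimality of Dirichlet optimization that the abstract points to.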
Related papers
- Open-World Semi-Supervised Learning for Node Classification [53.07866559269709]
Open-world semi-supervised learning (Open-world SSL) for node classification is a practical but under-explored problem in the graph community.
We propose an IMbalance-Aware method named OpenIMA for Open-world semi-supervised node classification.
arXiv Detail & Related papers (2024-03-18T05:12:54Z) - Laplacian Canonization: A Minimalist Approach to Sign and Basis
Invariant Spectral Embedding [36.61907023057978]
Spectral embedding is a powerful graph computation technique that has received a lot of attention recently due to its effectiveness on Graph Transformers.
Previous methods developed costly approaches to learn new invariants and suffer from high complexity.
In this work, we explore a minimal approach that resolves the ambiguity issues by directly finding canonical directions for the eigenvectors.
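A minimal, hypothetical illustration of the sign side of this idea: flip each eigenvector to a canonical direction so the spectral embedding is reproducible across solvers. Handling basis ambiguity within repeated eigenvalues requires the paper's fuller method.

```python
import numpy as np

def canonicalize_signs(eigvecs):
    # Fix the +/- ambiguity by flipping each eigenvector so that its
    # largest-magnitude coordinate is positive. This is a simplified,
    # hypothetical variant of canonical-direction finding; it does not
    # resolve basis ambiguity inside repeated eigenvalues.
    V = eigvecs.copy()
    for j in range(V.shape[1]):
        k = np.argmax(np.abs(V[:, j]))
        if V[k, j] < 0:
            V[:, j] *= -1.0
    return V
```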
arXiv Detail & Related papers (2023-10-28T14:35:10Z) - Semi-Supervised Laplace Learning on Stiefel Manifolds [48.3427853588646]
We develop a Sequential Subspace framework for graph-based semi-supervised learning at low label rates.
We show that our methods perform well at both extremely low and high label rates.
arXiv Detail & Related papers (2023-07-31T20:19:36Z) - One-step Bipartite Graph Cut: A Normalized Formulation and Its
Application to Scalable Subspace Clustering [56.81492360414741]
We show how to enforce a one-step normalized cut for bipartite graphs, especially with linear-time complexity.
In this paper, we first characterize a novel one-step bipartite graph cut criterion with normalized constraints, and theoretically prove its equivalence to a trace problem.
We extend this cut criterion to a scalable subspace clustering approach, where adaptive anchor learning, bipartite graph learning, and one-step normalized bipartite graph partitioning are simultaneously modeled.
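For context, here is a sketch of the classical spectral relaxation behind such normalized-cut criteria (the plain two-way case, not the paper's bipartite anchor-based formulation):

```python
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    # Classical relaxation of the normalized cut: the trace-problem view
    # reduces to eigenvectors of the normalized Laplacian. Assumes a
    # symmetric, connected affinity matrix W.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L_sym = np.eye(len(d)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = eigh(L_sym)
    fiedler = d_inv_sqrt * vecs[:, 1]       # second-smallest eigenvector
    return fiedler > np.median(fiedler)     # balanced split into two sides
```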
arXiv Detail & Related papers (2023-05-12T11:27:20Z) - BASiS: Batch Aligned Spectral Embedding Space [7.176107039687231]
Spectral graph theory has been shown to provide powerful algorithms, backed by solid linear algebra theory.
We propose a different approach of directly learning the eigenspace.
We show that our learned spectral embedding is better in terms of NMI, ACC, Grassmann distance, orthogonality and classification accuracy, compared to SOTA.
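One plausible reading of batch-aligned eigenspace learning, sketched under the assumption that batches share anchor nodes: align each batch's spectral coordinates to a reference via orthogonal Procrustes. Function and argument names are illustrative.

```python
import numpy as np

def align_batch_embedding(E_batch, E_ref, anchor_idx):
    # Hedged sketch of batch alignment: solve an orthogonal Procrustes
    # problem on anchor nodes (rows with the same index in both matrices)
    # so per-batch spectral coordinates live in one consistent eigenspace,
    # with sign/rotation fixed across batches.
    A, B = E_batch[anchor_idx], E_ref[anchor_idx]
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                     # orthogonal map minimizing ||A R - B||_F
    return E_batch @ R
```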
arXiv Detail & Related papers (2022-11-30T13:07:59Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
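A minimal sketch of the agreement objective described above, assuming node embeddings Z_anchor and Z_learned computed from the two graphs (names are illustrative); it uses a standard InfoNCE-style loss rather than the paper's exact formulation:

```python
import numpy as np
from scipy.special import logsumexp

def anchor_agreement_loss(Z_anchor, Z_learned, tau=0.5):
    # The same node's embeddings under the anchor graph and the learned
    # graph form a positive pair; all other nodes act as negatives.
    A = Z_anchor / np.linalg.norm(Z_anchor, axis=1, keepdims=True)
    B = Z_learned / np.linalg.norm(Z_learned, axis=1, keepdims=True)
    sim = (A @ B.T) / tau                               # cosine similarities
    log_p = sim - logsumexp(sim, axis=1, keepdims=True)
    return -np.mean(np.diag(log_p))                     # lower = more agreement
```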
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Model-Change Active Learning in Graph-Based Semi-Supervised Learning [5.174023161939957]
"Model Change" active learning quantifies the resulting change by introducing the additional label(s)
We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution.
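A hedged sketch for the Gaussian-regression member of that family, where the posterior is exactly Gaussian (so the Laplace approximation is exact) and adding one label shifts the MAP estimate by a rank-one update; the update formula is a common form in this line of work, and all names and defaults here are assumptions:

```python
import numpy as np

def model_change_scores(L, labeled_idx, y, candidates, gamma=0.1, tau=0.01):
    # L: graph Laplacian; y: +/-1 labels on labeled_idx (zeros elsewhere).
    n = L.shape[0]
    P = np.zeros((n, n))
    P[labeled_idx, labeled_idx] = 1.0 / gamma**2      # label-fidelity precision
    C = np.linalg.inv(L + tau**2 * np.eye(n) + P)     # posterior covariance
    u = C @ (P @ y)                                   # posterior mean
    scores = {}
    for k in candidates:
        y_k = np.sign(u[k]) if u[k] != 0 else 1.0     # pseudo-label guess
        # Norm of the rank-one change in the MAP estimate if node k is labeled.
        scores[k] = abs(y_k - u[k]) / (gamma**2 + C[k, k]) * np.linalg.norm(C[:, k])
    return scores  # query the candidate with the largest score
```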
arXiv Detail & Related papers (2021-10-14T21:47:10Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to well-separated clusters in the representative graph.
We further derive a clustering algorithm for solving the associated clustering problem.
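A generic sketch of the aggregate-then-cluster pipeline, with a fixed weighted average standing in for the contrastively learned aggregation (all names and the averaging rule are assumptions):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def multilayer_spectral_clustering(layers, k, weights=None):
    # Aggregate the observed adjacency layers into one representative graph
    # (a fixed weighted average here; the paper instead learns the
    # aggregation with a contrastive fidelity term), then spectrally cluster.
    weights = np.full(len(layers), 1.0 / len(layers)) if weights is None else weights
    W = sum(w * A for w, A in zip(weights, layers))
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L_sym = np.eye(W.shape[0]) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    _, vecs = eigh(L_sym, subset_by_index=[0, k - 1])  # k smallest eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(vecs)
```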
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Analysis of label noise in graph-based semi-supervised learning [2.4366811507669124]
In machine learning, one must acquire labels to help supervise a model that will be able to generalize to unseen data.
It is often the case that most of our data is unlabeled.
Semi-supervised learning (SSL) alleviates that by making strong assumptions about the relation between the labels and the input data distribution.
arXiv Detail & Related papers (2020-09-27T22:13:20Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)