Semi-Supervised Laplacian Learning on Stiefel Manifolds
- URL: http://arxiv.org/abs/2308.00142v1
- Date: Mon, 31 Jul 2023 20:19:36 GMT
- Title: Semi-Supervised Laplacian Learning on Stiefel Manifolds
- Authors: Chester Holtz, Pengwen Chen, Alexander Cloninger, Chung-Kuan Cheng,
Gal Mishne
- Abstract summary: We reformulate graph-based semi-supervised learning as a nonconvex generalization of a Trust-Region Subproblem.
We address the criticality of selecting supervised samples at low-label rates.
Our code is available on github (footnote anonymized for submission).
- Score: 67.29074577550405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by the need to address the degeneracy of canonical Laplace learning
algorithms in low label rates, we propose to reformulate graph-based
semi-supervised learning as a nonconvex generalization of a \emph{Trust-Region
Subproblem} (TRS). This reformulation is motivated by the well-posedness of
Laplacian eigenvectors in the limit of infinite unlabeled data. To solve this
problem, we first show that a first-order condition implies the solution of a
manifold alignment problem and that solutions to the classical \emph{Orthogonal
Procrustes} problem can be used to efficiently find good classifiers that are
amenable to further refinement. Next, we address the criticality of selecting
supervised samples at low-label rates. We characterize informative samples with
a novel measure of centrality derived from the principal eigenvectors of a
certain submatrix of the graph Laplacian. We demonstrate that our framework
achieves lower classification error compared to recent state-of-the-art and
classical semi-supervised learning methods at extremely low, medium, and high
label rates. Our code is available on github\footnote{anonymized for
submission}.
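The Orthogonal Procrustes step mentioned in the abstract has a well-known closed-form solution via the SVD: given embeddings A (e.g. rows of Laplacian eigenvectors) and targets B, the orthogonal R minimizing ||AR − B||_F is UVᵀ from the SVD of AᵀB. A minimal sketch (illustrative variable names, not the authors' code):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Solve min_R ||A @ R - B||_F over orthogonal R (closed form via SVD)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Toy example: recover a known rotation exactly from noiseless data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))          # stand-in for an eigenvector embedding
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
B = A @ R_true
R_est = orthogonal_procrustes(A, B)
```

In the noiseless case above the rotation is recovered exactly; in the paper's setting such a solution would only serve as a good initial classifier for further refinement.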
Related papers
- Graph Laplacian for Semi-Supervised Learning [8.477619837043214]
We propose a new type of graph-Laplacian adapted for Semi-Supervised Learning (SSL) problems.
It is based on both density and contrastive measures and allows the encoding of the labeled data directly in the operator.
arXiv Detail & Related papers (2023-01-12T12:02:26Z)
- Optimizing Diffusion Rate and Label Reliability in a Graph-Based Semi-supervised Classifier [2.4366811507669124]
The Local and Global Consistency (LGC) algorithm is one of the most well-known graph-based semi-supervised (GSSL) classifiers.
We discuss how removing the self-influence of a labeled instance may be beneficial, and how it relates to leave-one-out error.
Within this framework, we propose methods to estimate label reliability and diffusion rate.
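The LGC algorithm mentioned above propagates labels by iterating F ← αSF + (1−α)Y, where S is the symmetrically normalized affinity matrix and Y holds the known labels. A minimal sketch on a toy chain graph (illustrative, not the paper's implementation):

```python
import numpy as np

def lgc_propagate(W, Y, alpha=0.9, iters=200):
    """Local and Global Consistency: iterate F <- alpha*S@F + (1-alpha)*Y."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt        # symmetric normalization D^-1/2 W D^-1/2
    F = Y.astype(float).copy()
    for _ in range(iters):                 # converges since alpha < 1
        F = alpha * S @ F + (1 - alpha) * Y
    return F

# Chain graph 0-1-2-3; node 0 labeled class 0, node 3 labeled class 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.zeros((4, 2))
Y[0, 0] = 1.0
Y[3, 1] = 1.0
F = lgc_propagate(W, Y)
labels = F.argmax(axis=1)   # interior nodes inherit the nearer label
```

The self-influence discussed in the summary corresponds to the diagonal of the propagation operator (1−α)(I−αS)⁻¹, which feeds a labeled point's own label back into its prediction.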
arXiv Detail & Related papers (2022-01-10T16:58:52Z)
- Model-Change Active Learning in Graph-Based Semi-Supervised Learning [7.208515071018781]
"Model-change" active learning quantifies the resulting change incurred in the classifier by introducing the additional label(s)
We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution.
arXiv Detail & Related papers (2021-10-14T21:47:10Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that a triangular convexity decomposition is guaranteed by a transform of the matrix corresponding to its upper part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Gradient Descent in RKHS with Importance Labeling [58.79085525115987]
We study the importance labeling problem, in which we are given many unlabeled data points.
We propose a new importance labeling scheme that can effectively select an informative subset of unlabeled data.
arXiv Detail & Related papers (2020-06-19T01:55:00Z)
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of infimum loss to deal with partial labelling.
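The infimum-loss idea above is that a partially labelled point is only charged the loss of its best-matching candidate label, i.e. the infimum of the loss over the label set. A minimal sketch with cross-entropy as the base loss (hypothetical helper names, not the paper's code):

```python
import numpy as np

def infimum_loss(scores, candidate_sets):
    """Partial-label loss: for each point, take the minimum cross-entropy
    over the labels in its candidate set (the infimum over weak labels)."""
    # Log-softmax for numerically stable cross-entropy.
    log_p = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    losses = [min(-log_p[i, c] for c in cands)
              for i, cands in enumerate(candidate_sets)]
    return float(np.mean(losses))

scores = np.array([[2.0, 0.1, -1.0],
                   [0.0, 3.0, 0.5]])
# Point 0 could be class 0 or 2; point 1 is known to be class 1.
loss = infimum_loss(scores, [[0, 2], [1]])
```

Shrinking a candidate set to a worse-scoring label can only increase this loss, which is what makes the infimum a faithful surrogate for the unknown true label.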
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel classifier framework that is flexible in the choice of model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.