Efficiently Learning the Graph for Semi-supervised Learning
- URL: http://arxiv.org/abs/2306.07098v1
- Date: Mon, 12 Jun 2023 13:22:06 GMT
- Title: Efficiently Learning the Graph for Semi-supervised Learning
- Authors: Dravyansh Sharma, Maxwell Jones
- Abstract summary: We show how to learn the best graphs from the sparse families efficiently using the conjugate gradient method.
Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions.
We implement our approach and demonstrate significant ($\sim$10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
- Score: 4.518012967046983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational efficiency is a major bottleneck in using classic graph-based
approaches for semi-supervised learning on datasets with a large number of
unlabeled examples. Known techniques to improve efficiency typically involve an
approximation of the graph regularization objective, but suffer two major
drawbacks: first, the graph is assumed to be known or constructed with
heuristic hyperparameter values; second, they do not provide a principled
approximation guarantee for learning over the full unlabeled dataset. Building
on recent work on learning graphs for semi-supervised learning from multiple
datasets for problems from the same domain, and leveraging techniques for fast
approximations for solving linear systems in the graph Laplacian matrix, we
propose algorithms that overcome both the above limitations.
We show a formal separation in the learning-theoretic complexity of sparse
and dense graph families. We further show how to approximately learn the best
graphs from the sparse families efficiently using the conjugate gradient
method.
Our approach can also be used to learn the graph efficiently online with
sub-linear regret, under mild smoothness assumptions. Our online learning
results are stated generally, and may be useful for approximate and efficient
parameter tuning in other problems. We implement our approach and demonstrate
significant ($\sim$10-100x) speedups over prior work on semi-supervised
learning with learned graphs on benchmark datasets.
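To make the abstract's setup concrete, the following is a minimal, hypothetical sketch of classic graph-based semi-supervised learning: labels on a few nodes are propagated to the rest by solving a linear system in the graph Laplacian, here with the conjugate gradient method (the solver the paper builds on). The toy graph, node labels, and weights are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

# Toy similarity graph over 6 nodes: two tight clusters {0,1,2} and
# {3,4,5}, joined by one weak edge (2 -- 3, weight 0.1).
rows = [0, 1, 0, 2, 3, 4, 3, 5, 2, 3]
cols = [1, 0, 2, 0, 4, 3, 5, 3, 3, 2]
wts  = [1, 1, 1, 1, 1, 1, 1, 1, 0.1, 0.1]
W = csr_matrix((wts, (rows, cols)), shape=(6, 6))
L = laplacian(W).tocsr()

labeled = np.array([0, 3])            # node 0 -> +1, node 3 -> -1
y_l = np.array([1.0, -1.0])
unlabeled = np.array([1, 2, 4, 5])

# Harmonic solution on the unlabeled block: solve L_uu f_u = -L_ul y_l
L_uu = L[unlabeled][:, unlabeled]
rhs = -(L[unlabeled][:, labeled] @ y_l)
f_u, info = cg(L_uu, rhs)             # conjugate gradient solve
pred = np.sign(f_u)                   # predicted labels for unlabeled nodes
```

The cost of such methods is dominated by the Laplacian linear solve, which is why the paper's fast approximate solvers and learned sparse graphs translate directly into end-to-end speedups.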
Related papers
- Improved Graph-based semi-supervised learning Schemes [0.0]
In this work, we improve the accuracy of several known algorithms to address the classification of large datasets when few labels are available.
Our framework lies in the realm of graph-based semi-supervised learning.
arXiv Detail & Related papers (2024-06-30T16:50:08Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z) - Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only one single step without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
arXiv Detail & Related papers (2022-06-15T18:20:01Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z) - Data driven algorithms for limited labeled data learning [35.193000364580975]
We focus on graph-based techniques, where the unlabeled examples are connected in a graph under the implicit assumption that similar nodes likely have similar labels.
We present a novel data driven approach for learning the graph and provide strong formal guarantees in both the distributional and online learning formalizations.
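The smoothness assumption above is usually instantiated by building a sparse k-nearest-neighbour graph over the data. The snippet below is a hypothetical sketch of that standard construction (the points, k, and bandwidth sigma are illustrative choices, not values from the paper): each point is connected to its k nearest neighbours with Gaussian edge weights, so nearby points, which are presumed to share labels, end up strongly connected.

```python
import numpy as np

def knn_graph(X, k=2, sigma=1.0):
    """Symmetric k-NN adjacency with Gaussian weights (dense, toy scale)."""
    n = len(X)
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self (distance 0)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return np.maximum(W, W.T)               # symmetrise the graph

# Two well-separated clusters of three points each.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
W = knn_graph(X, k=2)
```

With k=2 here, no cross-cluster edges survive, so label propagation on this graph stays within clusters; hyperparameters like k and sigma are exactly the graph parameters that data-driven approaches such as the one above aim to learn rather than hand-tune.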
arXiv Detail & Related papers (2021-03-18T22:19:19Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method yields a representative graph whose clusters reflect the observed layers, and the learned graph can then be used to solve clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Learning Product Graphs Underlying Smooth Graph Signals [15.023662220197242]
This paper devises a method to learn structured graphs from data that are given in the form of product graphs.
To this end, the graph learning problem is first posed as a linear program, which (on average) outperforms the state-of-the-art graph learning algorithms.
arXiv Detail & Related papers (2020-02-26T03:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.