Fast Semi-supervised Learning on Large Graphs: An Improved Green-function Method
- URL: http://arxiv.org/abs/2411.01792v1
- Date: Mon, 04 Nov 2024 04:27:18 GMT
- Title: Fast Semi-supervised Learning on Large Graphs: An Improved Green-function Method
- Authors: Feiping Nie, Yitao Song, Wei Chang, Rong Wang, Xuelong Li
- Abstract summary: In graph-based semi-supervised learning, the Green-function method is a classical method that works by computing the Green's function in the graph space.
We provide a detailed analysis of it and propose a novel method from the perspective of optimization.
Unlike the original method, our improved method can also apply two accelerating techniques, Gaussian elimination and anchored graphs.
- Score: 93.04936336359502
- Abstract: In graph-based semi-supervised learning, the Green-function method is a classical method that works by computing the Green's function in the graph space. However, when applied to large graphs, especially sparse ones, the method is unstable and performs unsatisfactorily. We provide a detailed analysis of it and propose a novel method from the perspective of optimization. On fully connected graphs, the proposed method is equivalent to the Green-function method and can be seen as another interpretation with physical meaning, while on non-fully connected graphs it helps to explain why the Green-function method breaks down on large sparse graphs. To resolve this issue, we propose a workable approach that improves our method. Unlike the original method, our improved method can also apply two accelerating techniques, Gaussian elimination and anchored graphs, to become more efficient on large graphs. Finally, extensive experiments confirm our conclusions and the efficiency, accuracy, and stability of our improved Green-function method.
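For reference, the classical Green-function method that the abstract analyzes propagates labels through the Green's function of the graph Laplacian, commonly taken as its pseudoinverse. Below is a minimal NumPy sketch under that reading; variable names and the toy graph are illustrative, and the paper's improved method with its Gaussian-elimination and anchored-graph accelerations is not reproduced.

```python
# Minimal sketch of the classical Green-function method for graph-based
# semi-supervised learning (not the paper's improved variant).
import numpy as np

def green_function_ssl(W, labeled_idx, labels, n_classes):
    """W: (n, n) symmetric affinity matrix; labels: class ids of the labeled nodes."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian L = D - W
    G = np.linalg.pinv(L)                 # Green's function taken as the pseudoinverse of L

    Y = np.zeros((n, n_classes))          # one-hot label matrix, zero rows for unlabeled nodes
    Y[labeled_idx, labels] = 1.0

    F = G @ Y                             # propagate label information through G
    return F.argmax(axis=1)               # predicted class for every node

# Toy usage: two triangles joined by a single bridge edge, one label per triangle.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
print(green_function_ssl(W, labeled_idx=[0, 5], labels=[0, 1], n_classes=2))  # [0 0 0 1 1 1]
```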
Related papers
- Efficient Graph Coloring with Neural Networks: A Physics-Inspired Approach for Large Graphs [0.0]
We present a novel algorithm that leverages graph neural networks to tackle the problem efficiently, particularly for large graphs.
We demonstrate the effectiveness of our method on a dataset of Erdos-Renyi graphs, showing its applicability also in hard-to-solve connectivity regions.
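Physics-inspired GNN approaches to coloring typically relax the problem into minimizing a Potts-model style energy over soft color assignments. The sketch below shows only that generic objective, not this paper's exact architecture or loss; the function name and toy inputs are assumptions for illustration.

```python
# Potts-model style coloring objective: rows of P are soft color distributions,
# and the loss is the expected number of monochromatic edges. A GNN producing P
# could be trained to drive this loss toward zero (generic idea only).
import numpy as np

def potts_coloring_loss(P, edges):
    # P: (n_nodes, n_colors), non-negative rows summing to one
    return sum(float(P[i] @ P[j]) for i, j in edges)

# Toy usage: a proper 3-coloring of a triangle gives zero loss.
P = np.eye(3)
print(potts_coloring_loss(P, edges=[(0, 1), (1, 2), (0, 2)]))  # 0.0
```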
arXiv Detail & Related papers (2024-08-02T18:02:51Z) - GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
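Sinkhorn-style attention generally means exponentiating attention scores and then alternately normalizing rows and columns, with a temperature controlling softness. The sketch below shows that generic operator only; it is not the exact GSINA formulation, and the names and inputs are illustrative.

```python
# Generic Sinkhorn normalization of an attention score matrix (assumed dense).
import numpy as np

def sinkhorn_attention(scores, tau=0.1, n_iters=20):
    A = np.exp(scores / tau)                  # lower tau -> sharper (sparser-looking) attention
    for _ in range(n_iters):
        A = A / A.sum(axis=1, keepdims=True)  # row normalization
        A = A / A.sum(axis=0, keepdims=True)  # column normalization
    return A                                  # approximately doubly stochastic

rng = np.random.default_rng(0)
A = sinkhorn_attention(rng.normal(size=(4, 4)))
print(A.sum(axis=0), A.sum(axis=1))           # both close to all-ones
```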
arXiv Detail & Related papers (2024-02-11T12:57:16Z) - A Gromov--Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening [19.09507367362567]
This work studies graph coarsening from a different perspective, developing a theory for preserving graph distances.
The geometric approach is useful when working with a collection of graphs, such as in graph classification and regression.
Minimizing this distance difference can be done using the popular weighted kernel $K$-means method.
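Weighted kernel $K$-means assigns each node to the cluster whose weighted centroid is nearest in the kernel-induced feature space. The sketch below is the generic procedure with a precomputed kernel matrix `K` and node weights `w`; the simple initialization and naming are illustrative and not the paper's coarsening pipeline.

```python
# Generic weighted kernel K-means (Lloyd-style iterations on a kernel matrix).
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iters=50):
    n = K.shape[0]
    labels = np.arange(n) % k                 # simple deterministic initialization (illustrative)
    for _ in range(n_iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            if not mask.any():
                continue                      # leave empty clusters at infinite distance
            wc, sc = w[mask], w[mask].sum()
            # squared distance to the weighted centroid of cluster c in feature space
            dist[:, c] = (np.diag(K)
                          - 2.0 * (K[:, mask] @ wc) / sc
                          + (wc @ K[np.ix_(mask, mask)] @ wc) / sc ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy usage with a linear kernel on two well-separated 1-D groups.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
print(weighted_kernel_kmeans(X @ X.T, w=np.ones(6), k=2))  # [0 0 0 1 1 1]
```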
arXiv Detail & Related papers (2023-06-15T04:47:26Z) - Efficiently Learning the Graph for Semi-supervised Learning [4.518012967046983]
We show how to learn the best graphs from the sparse families efficiently using the conjugate gradient method.
Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions.
We implement our approach and demonstrate significant ($\sim$10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
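The conjugate gradient method named here is the standard iterative solver for symmetric positive definite linear systems; the sketch below is that generic solver, not the paper's graph-learning formulation.

```python
# Generic conjugate gradient solver for A x = b with A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iters=1000):
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs_old = r @ r
    for _ in range(max_iters):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step size along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate update of the search direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))        # close to [0.0909, 0.6364]
```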
arXiv Detail & Related papers (2023-06-12T13:22:06Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Learning Heuristics for the Maximum Clique Enumeration Problem Using Low
Dimensional Representations [0.0]
We use a learning framework for a pruning process that reduces the input graph for the maximum clique enumeration problem.
We study the effect of different vertex representations on the runtime performance of this method.
We observe that using local graph features in the classification process produces more accurate results when combined with a feature elimination process.
arXiv Detail & Related papers (2022-10-30T22:04:32Z) - Differentiable Mathematical Programming for Object-Centric
Representation Learning [32.76702197616919]
We propose to use minimum $s$-$t$ graph cuts as a partitioning method, represented as a linear program.
To solve the graph cuts, our solution relies on an efficient, scalable, and differentiable quadratic programming approximation.
Our results show that our approach is scalable and outperforms existing methods on object discovery tasks with textured scenes and objects.
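For context, a minimum $s$-$t$ cut can be computed exactly through max-flow/min-cut duality; the sketch below uses SciPy's max-flow routine on a toy graph and only illustrates the partitioning primitive, not the paper's differentiable quadratic programming approximation.

```python
# Min s-t cut value via max flow (max-flow/min-cut duality) on a toy 4-node graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

capacities = csr_matrix(np.array([
    [0, 3, 2, 0],   # node 0 (source) -> nodes 1 and 2
    [0, 0, 1, 3],   # node 1 -> nodes 2 and 3
    [0, 0, 0, 2],   # node 2 -> node 3 (sink)
    [0, 0, 0, 0],
], dtype=np.int32))  # maximum_flow expects integer capacities

result = maximum_flow(capacities, 0, 3)
print(result.flow_value)  # 5, the capacity of the minimum 0-3 cut
```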
arXiv Detail & Related papers (2022-10-05T11:36:45Z) - ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z) - Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
However, data augmentation on graphs is far less intuitive, and it is much harder to provide high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
arXiv Detail & Related papers (2022-02-14T05:54:48Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.