A Structural-Clustering Based Active Learning for Graph Neural Networks
- URL: http://arxiv.org/abs/2312.04307v1
- Date: Thu, 7 Dec 2023 14:04:38 GMT
- Title: A Structural-Clustering Based Active Learning for Graph Neural Networks
- Authors: Ricky Maulana Fajri, Yulong Pei, Lu Yin, and Mykola Pechenizkiy
- Abstract summary: We propose the Structural-Clustering PageRank method for improved Active learning (SPA) specifically designed for graph-structured data.
SPA integrates community detection using the SCAN algorithm with the PageRank scoring method for efficient and informative sample selection.
- Score: 16.85038790429607
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In active learning for graph-structured data, Graph Neural Networks (GNNs)
have proven effective. However, a common challenge in these applications is
the underutilization of crucial structural information. To address this
problem, we propose the Structural-Clustering PageRank method for improved
Active learning (SPA) specifically designed for graph-structured data. SPA
integrates community detection using the SCAN algorithm with the PageRank
scoring method for efficient and informative sample selection. SPA prioritizes
nodes that are not only informative but also central in structure. Through
extensive experiments, SPA demonstrates higher accuracy and macro-F1 score than
existing methods across different annotation budgets, and achieves significant
reductions in query time. In addition, the proposed method adds only two
hyperparameters, $\epsilon$ and $\mu$, to the algorithm, which finely tune the
balance between structural learning and node selection. This simplicity is a
key advantage in active learning scenarios, where extensive hyperparameter
tuning is often impractical.
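As a concrete illustration of the selection pipeline described above, here is a minimal Python sketch that combines SCAN-style community detection with PageRank scoring under an annotation budget. This is a reconstruction from the abstract, not the authors' released code: the function names (`scan_communities`, `spa_select`), the default values for $\epsilon$ and $\mu$, and the round-robin pick across communities are all illustrative assumptions.
```python
# Illustrative sketch of SPA-style selection (assumptions noted above);
# requires networkx.
from itertools import count

import networkx as nx


def structural_similarity(G, u, v):
    """SCAN structural similarity over closed neighborhoods."""
    nu = set(G[u]) | {u}
    nv = set(G[v]) | {v}
    return len(nu & nv) / (len(nu) * len(nv)) ** 0.5


def scan_communities(G, eps=0.5, mu=2):
    """Minimal SCAN: clusters grow from core nodes (nodes whose
    eps-neighborhood has at least mu members); hubs/outliers stay unlabeled."""
    eps_nbrs = {
        u: {u} | {v for v in G[u] if structural_similarity(G, u, v) >= eps}
        for u in G
    }
    cores = {u for u in G if len(eps_nbrs[u]) >= mu}
    next_id, labels = count(), {}
    for core in cores:
        if core in labels:
            continue
        cid = next(next_id)
        stack = [core]
        while stack:
            u = stack.pop()
            if u in labels:
                continue
            labels[u] = cid
            if u in cores:  # only core nodes propagate the cluster
                stack.extend(v for v in eps_nbrs[u] if v not in labels)
    return labels


def spa_select(G, budget, eps=0.5, mu=2):
    """Pick up to `budget` nodes, taking the highest-PageRank nodes
    round-robin across SCAN communities."""
    labels = scan_communities(G, eps, mu)
    pagerank = nx.pagerank(G)
    by_comm = {}
    for u, c in labels.items():
        by_comm.setdefault(c, []).append(u)
    ranked = [sorted(ns, key=pagerank.get, reverse=True) for ns in by_comm.values()]
    selected = []
    for depth in range(max((len(ns) for ns in ranked), default=0)):
        for ns in ranked:
            if depth < len(ns) and len(selected) < budget:
                selected.append(ns[depth])
    return selected


# Example: select 5 nodes to annotate on a small benchmark graph.
print(spa_select(nx.karate_club_graph(), budget=5, eps=0.4, mu=2))
```
The round-robin pick across communities is one simple way to realize "central in structure" while spreading the budget over communities; the paper's exact selection rule may differ.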
Related papers
- Self-Supervised Contrastive Graph Clustering Network via Structural Information Fusion [15.293684479404092]
We propose a novel deep graph clustering method called CGCN.
Our approach introduces contrastive signals and deep structural information into the pre-training process.
Our method has been experimentally validated on multiple real-world graph datasets.
arXiv Detail & Related papers (2024-08-08T09:49:26Z)
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns the optimal source placement online in large-scale networks.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
- Joint Feature and Differentiable $k$-NN Graph Learning using Dirichlet Energy [103.74640329539389]
We propose a deep feature-selection method that simultaneously conducts feature selection and differentiable $k$-NN graph learning.
We employ Optimal Transport theory to address the non-differentiability of learning $k$-NN graphs in neural networks.
We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-21T08:15:55Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Simple Contrastive Graph Clustering [41.396185271303956]
We propose a Simple Contrastive Graph Clustering (SCGC) algorithm to improve the existing methods.
Our algorithm outperforms recent contrastive deep clustering competitors with at least a seven-fold average speedup.
arXiv Detail & Related papers (2022-05-11T06:45:19Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- A Differentiable Approach to Combinatorial Optimization using Dataless Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
arXiv Detail & Related papers (2022-03-15T19:21:31Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (a minimal sketch of such a contrastive-agreement loss appears after this list).
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Clustered Federated Learning via Generalized Total Variation
Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Ring Reservoir Neural Networks for Graphs [15.07984894938396]
Reservoir Computing models can play an important role in developing fruitful graph embeddings.
Our core proposal is based on shaping the organization of the hidden neurons to follow a ring topology.
Experimental results on graph classification tasks indicate that ring-reservoir architectures enable particularly effective network configurations.
arXiv Detail & Related papers (2020-05-11T17:51:40Z)
- Graph Neighborhood Attentive Pooling [0.5493410630077189]
Network representation learning (NRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and sparse graphs.
We propose a novel context-sensitive algorithm called GAP that learns to attend on different parts of a node's neighborhood using attentive pooling networks.
arXiv Detail & Related papers (2020-01-28T15:05:48Z)
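For the anchor-graph entry above ("Towards Unsupervised Deep Graph Structure Learning"), here is a minimal sketch of a contrastive agreement loss between node embeddings of an anchor graph and a learned graph. The function name `nt_xent_agreement`, the NT-Xent-style formulation, and the temperature `tau` are illustrative assumptions; the paper's actual loss and encoders may differ.
```python
import numpy as np


def nt_xent_agreement(z_anchor, z_learned, tau=0.5):
    """Contrastive agreement between two views: row i of each matrix embeds
    node i, and the positive pair is the same node across the two views."""
    # L2-normalize rows so dot products become cosine similarities.
    a = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    b = z_learned / np.linalg.norm(z_learned, axis=1, keepdims=True)
    logits = (a @ b.T) / tau                     # (n, n) cross-view similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on the diagonal


# Example: agreement is high (loss is low) when the two views nearly match.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(nt_xent_agreement(z, z + 0.05 * rng.normal(size=(8, 16))))
```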