Neural Stochastic Block Model & Scalable Community-Based Graph Learning
- URL: http://arxiv.org/abs/2005.07855v1
- Date: Sat, 16 May 2020 03:28:50 GMT
- Title: Neural Stochastic Block Model & Scalable Community-Based Graph Learning
- Authors: Zheng Chen, Xinli Yu, Yuan Ling, Xiaohua Hu
- Abstract summary: This paper proposes a scalable community-based neural framework for graph learning.
The framework learns the graph topology through the task of community detection and link prediction.
We look into two particular applications: graph alignment and anomalous correlation detection.
- Score: 8.00785050036369
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a novel scalable community-based neural framework for
graph learning. The framework learns the graph topology through the task of
community detection and link prediction by optimizing with our proposed joint
SBM loss function, which results from a non-trivial adaptation of the
likelihood function of the classic Stochastic Block Model (SBM). Compared with
SBM, our framework is flexible: it naturally allows soft labels and can digest
complex node attributes. The main goal is efficient evaluation of complex graph
data, so our design carefully accommodates large data and ensures that
evaluation requires only a single forward pass. For large graphs, how to
efficiently leverage the underlying structure for various graph learning tasks
remains an open problem, and has traditionally required heavy engineering. With
our community-based framework, this becomes less difficult: task models can
essentially plug in and play, and be trained jointly. We
currently look into two particular applications, graph alignment and anomalous
correlation detection, and discuss how to use our framework to tackle both
problems. Extensive experiments are conducted to demonstrate the effectiveness
of our approach. We also contribute tweaks of classic techniques that we find
helpful for performance and scalability: for example, GAT+, an improved design
of GAT (Graph Attention Network); the scaled-cosine similarity; and a unified
implementation of the convolution/attention-based and the random-walk-based
neural graph models.
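The abstract does not spell out the joint SBM loss, so the following is only a minimal sketch of the underlying idea: a Bernoulli SBM negative log-likelihood extended to soft community labels. The function name and the `tau`/`block_probs` parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_sbm_nll(adj, tau, block_probs, eps=1e-9):
    """Negative log-likelihood of a Bernoulli SBM with soft community labels.

    adj         : (n, n) binary adjacency matrix
    tau         : (n, k) soft community memberships (each row sums to 1)
    block_probs : (k, k) edge probabilities between communities
    """
    # Expected edge probability under soft assignments: p_ij = tau_i^T B tau_j
    p = tau @ block_probs @ tau.T
    p = np.clip(p, eps, 1.0 - eps)  # guard the logs
    ll = adj * np.log(p) + (1.0 - adj) * np.log(1.0 - p)
    np.fill_diagonal(ll, 0.0)       # exclude self-loops from the likelihood
    return -ll.sum()
```

With one-hot rows of `tau` this reduces to the classic SBM likelihood; soft rows keep the loss differentiable, so gradients can flow into a neural encoder that produces the memberships.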
Related papers
- Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC)
arXiv Detail & Related papers (2024-06-27T08:26:20Z)
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Learnable Graph Matching: A Practical Paradigm for Data Association [74.28753343714858]
We propose a general learnable graph matching method to address these issues.
Our method achieves state-of-the-art performance on several MOT datasets.
For image matching, our method outperforms state-of-the-art methods on a popular indoor dataset, ScanNet.
arXiv Detail & Related papers (2023-03-27T17:39:00Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- Deep Reinforcement Learning of Graph Matching [63.469961545293756]
Graph matching (GM) under node and pairwise constraints has been a building block in areas from optimization to computer vision.
We present RGM, a reinforcement learning solver for GM that seeks the node correspondence between pairwise graphs.
Our method differs from previous deep graph matching models, which focus on front-end feature extraction and affinity function learning.
arXiv Detail & Related papers (2020-12-16T13:48:48Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partitioning, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.