New Frontiers in Graph Autoencoders: Joint Community Detection and Link
Prediction
- URL: http://arxiv.org/abs/2211.08972v1
- Date: Wed, 16 Nov 2022 15:26:56 GMT
- Title: New Frontiers in Graph Autoencoders: Joint Community Detection and Link
Prediction
- Authors: Guillaume Salha-Galvan and Johannes F. Lutzeyer and George Dasoulas
and Romain Hennequin and Michalis Vazirgiannis
- Abstract summary: Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have emerged as powerful methods for link prediction (LP).
It is still unclear to what extent one can improve CD with GAE and VGAE, especially in the absence of node features.
We show that jointly addressing these two tasks with high accuracy is possible.
- Score: 27.570978996576503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have
emerged as powerful methods for link prediction (LP). Their performance is less
impressive on community detection (CD), where they are often outperformed by
simpler alternatives such as the Louvain method. It is still unclear to what
extent one can improve CD with GAE and VGAE, especially in the absence of node
features. It is moreover uncertain whether one could do so while simultaneously
preserving good performance on LP in a multi-task setting. In this workshop
paper, summarizing results from our journal publication (Salha-Galvan et al.
2022), we show that jointly addressing these two tasks with high accuracy is
possible. For this purpose, we introduce a community-preserving message passing
scheme, doping our GAE and VGAE encoders by considering both the initial graph
and Louvain-based prior communities when computing embedding spaces. Inspired
by modularity-based clustering, we further propose novel training and
optimization strategies specifically designed for joint LP and CD. We
demonstrate the empirical effectiveness of our approach, referred to as
Modularity-Aware GAE and VGAE, on various real-world graphs.
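The modularity-based clustering that inspires the training objective can be made concrete with a small, self-contained sketch. The function below computes Newman's modularity Q of a node partition from an adjacency matrix; the toy graph, partition, and function name are illustrative choices, not code from the paper.

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    A = np.asarray(A, dtype=float)
    m = A.sum() / 2.0                      # number of edges (A symmetric, unweighted)
    k = A.sum(axis=1)                      # node degrees
    same = np.equal.outer(labels, labels)  # delta(c_i, c_j): same community?
    return float(((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m))

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by the single edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))  # the two triangles: Q = 5/14
print(modularity(A, np.zeros(6, dtype=int)))        # everything in one community: Q = 0
```

A Louvain-style algorithm greedily moves nodes between communities to increase this quantity; Modularity-Aware GAE and VGAE instead fold a related modularity term into the autoencoder's training loss.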
Related papers
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Contributions to Representation Learning with Graph Autoencoders and Applications to Music Recommendation [1.2691047660244335]
Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have emerged as powerful families of unsupervised node embedding methods.
At the beginning of this Ph.D. project, GAE and VGAE models also suffered from key limitations that prevented their adoption in industry.
We present several contributions to improve these models, with the general aim of facilitating their use to address industrial-level problems involving graph representations.
arXiv Detail & Related papers (2022-05-29T13:14:53Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation [62.96628432641806]
Scene Graph Generation aims to first encode the visual contents within the given image and then parse them into a compact summary graph.
We first present a novel Stacked Hybrid-Attention network, which facilitates the intra-modal refinement as well as the inter-modal interaction.
We then devise an innovative Group Collaborative Learning strategy to optimize the decoder.
arXiv Detail & Related papers (2022-03-18T09:14:13Z)
- Modularity-Aware Graph Autoencoders for Joint Community Detection and Link Prediction [27.570978996576503]
Graph autoencoders (GAE) and variational graph autoencoders (VGAE) emerged as powerful methods for link prediction.
It is still unclear to what extent one can improve community detection with GAE and VGAE.
We show that jointly addressing these two tasks with high accuracy is possible.
arXiv Detail & Related papers (2022-02-02T11:07:11Z)
- Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) shows its practical value of reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA).
arXiv Detail & Related papers (2021-12-02T03:18:18Z)
- Deepened Graph Auto-Encoders Help Stabilize and Enhance Link Prediction [11.927046591097623]
Link prediction is a relatively under-studied graph learning task, with current state-of-the-art models based on one or two layers of shallow graph auto-encoder (GAE) architectures.
In this paper, we focus on addressing a limitation of current methods for link prediction, which can only use shallow GAEs and variational GAEs.
Our proposed methods innovatively incorporate standard auto-encoders (AEs) into the architectures of GAEs, where standard AEs are leveraged to learn essential, low-dimensional representations by seamlessly integrating the adjacency information and node features.
arXiv Detail & Related papers (2021-03-21T14:43:10Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
- Self-Constructing Graph Convolutional Networks for Semantic Labeling [23.623276007011373]
We propose a novel architecture called the Self-Constructing Graph (SCG), which makes use of learnable latent variables to generate embeddings.
SCG can automatically obtain optimized non-local context graphs from complex-shaped objects in aerial imagery.
We demonstrate the effectiveness and flexibility of the proposed SCG on the publicly available ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-03-15T21:55:24Z)
- FastGAE: Scalable Graph Autoencoders with Stochastic Subgraph Decoding [22.114681053198453]
Graph autoencoders (AE) and variational autoencoders (VAE) are powerful node embedding methods, but suffer from scalability issues.
FastGAE is a general framework to scale graph AE and VAE to large graphs with millions of nodes and edges.
arXiv Detail & Related papers (2020-02-05T18:27:39Z)
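FastGAE's stochastic subgraph decoding can be sketched in a few lines: at each training step, sample a subset of nodes (here with degree-proportional probability, one of the sampling strategies discussed in the paper) and reconstruct only the adjacency submatrix induced by the sample. This is an illustrative NumPy sketch, not the authors' implementation; the function name, toy graph, and sigmoid inner-product decoder are assumptions.

```python
import numpy as np

def sample_decode(A, Z, n_sample, rng):
    """Sample n_sample nodes with degree-proportional probability, then decode
    only the induced subgraph via a sigmoid inner-product decoder."""
    deg = A.sum(axis=1)
    idx = rng.choice(A.shape[0], size=n_sample, replace=False, p=deg / deg.sum())
    A_sub = A[np.ix_(idx, idx)]                         # ground-truth sub-adjacency
    A_hat = 1.0 / (1.0 + np.exp(-(Z[idx] @ Z[idx].T)))  # reconstruction probabilities
    return idx, A_sub, A_hat

# Toy 8-node graph standing in for a large one.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (5, 3), (5, 6), (6, 7), (7, 5)]
A = np.zeros((8, 8))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 2))  # toy node embeddings from an encoder
idx, A_sub, A_hat = sample_decode(A, Z, 4, rng)
# The reconstruction loss is then computed on the 4x4 pair (A_sub, A_hat)
# instead of the full n x n adjacency, which is what makes decoding scalable.
```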
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.