Modularity-Aware Graph Autoencoders for Joint Community Detection and
Link Prediction
- URL: http://arxiv.org/abs/2202.00961v1
- Date: Wed, 2 Feb 2022 11:07:11 GMT
- Title: Modularity-Aware Graph Autoencoders for Joint Community Detection and
Link Prediction
- Authors: Guillaume Salha-Galvan and Johannes F. Lutzeyer and George Dasoulas
and Romain Hennequin and Michalis Vazirgiannis
- Abstract summary: Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have emerged as powerful methods for link prediction.
It is still unclear to what extent one can improve community detection with GAE and VGAE.
We show that jointly addressing these two tasks with high accuracy is possible.
- Score: 27.570978996576503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have
emerged as powerful methods for link prediction. Their performance is less
impressive on community detection problems where, according to recent and
concurring experimental evaluations, they are often outperformed by simpler
alternatives such as the Louvain method. It is currently still unclear to what
extent one can improve community detection with GAE and VGAE, especially in the
absence of
node features. It is moreover uncertain whether one could do so while
simultaneously preserving good performances on link prediction. In this paper,
we show that jointly addressing these two tasks with high accuracy is possible.
For this purpose, we introduce and theoretically study a community-preserving
message passing scheme, doping our GAE and VGAE encoders by considering both
the initial graph structure and modularity-based prior communities when
computing embedding spaces. We also propose novel training and optimization
strategies, including the introduction of a modularity-inspired regularizer
complementing the existing reconstruction losses for joint link prediction and
community detection. We demonstrate the empirical effectiveness of our
approach, referred to as Modularity-Aware GAE and VGAE, through in-depth
experimental validation on various real-world graphs.
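The modularity-inspired regularizer mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the use of cosine similarity between embeddings as a soft community-agreement score is an assumption, as is the function name `modularity_regularizer`.

```python
import numpy as np

def modularity_regularizer(adj, z):
    """Hedged sketch of a modularity-inspired regularizer.

    adj: dense adjacency matrix, shape (n, n).
    z:   node embeddings, shape (n, d).
    Returns a scalar to be *maximized* alongside minimizing the
    reconstruction loss, so training favors embeddings whose pairwise
    similarity aligns with the modularity matrix B = A - d d^T / (2m).
    """
    degrees = adj.sum(axis=1)
    two_m = degrees.sum()  # 2 * number of edges
    # Modularity matrix: observed edges minus edges expected under the
    # degree-preserving configuration null model.
    b = adj - np.outer(degrees, degrees) / two_m
    # Soft community agreement: cosine similarity between embeddings
    # (an assumption; the paper's exact formulation may differ).
    unit = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = unit @ unit.T
    return (b * sim).sum() / two_m
```

A joint objective could then take the form `loss = reconstruction_loss - lam * modularity_regularizer(adj, z)`, with `lam` a hyperparameter balancing link reconstruction against community preservation.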
Related papers
- Reconsidering the Performance of GAE in Link Prediction [27.038895601935195]
We investigate the potential of Graph Autoencoders (GAE) for link prediction.
Our findings reveal that a well-optimized GAE can match the performance of more complex models while offering greater computational efficiency.
arXiv Detail & Related papers (2024-11-06T11:29:47Z)
- Revisiting, Benchmarking and Understanding Unsupervised Graph Domain Adaptation [31.106636947179005]
Unsupervised Graph Domain Adaptation involves the transfer of knowledge from a label-rich source graph to an unlabeled target graph.
We present the first comprehensive benchmark for unsupervised graph domain adaptation named GDABench.
We observe that the performance of current UGDA models varies significantly across different datasets and adaptation scenarios.
arXiv Detail & Related papers (2024-07-09T06:44:09Z)
- PREM: A Simple Yet Effective Approach for Node-Level Graph Anomaly Detection [65.24854366973794]
Node-level graph anomaly detection (GAD) plays a critical role in identifying anomalous nodes from graph-structured data in domains such as medicine, social networks, and e-commerce.
We introduce a simple method termed PREprocessing and Matching (PREM for short) to improve the efficiency of GAD.
Our approach streamlines GAD, reducing time and memory consumption while maintaining powerful anomaly detection capabilities.
arXiv Detail & Related papers (2023-10-18T02:59:57Z)
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- New Frontiers in Graph Autoencoders: Joint Community Detection and Link Prediction [27.570978996576503]
Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have emerged as powerful methods for link prediction (LP).
It is still unclear to what extent one can improve community detection (CD) with GAE and VGAE, especially in the absence of node features.
We show that jointly addressing these two tasks with high accuracy is possible.
arXiv Detail & Related papers (2022-11-16T15:26:56Z)
- Contributions to Representation Learning with Graph Autoencoders and Applications to Music Recommendation [1.2691047660244335]
Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have emerged as powerful families of unsupervised node embedding methods.
At the beginning of this Ph.D. project, GAE and VGAE models were also suffering from key limitations, preventing them from being adopted in the industry.
We present several contributions to improve these models, with the general aim of facilitating their use to address industrial-level problems involving graph representations.
arXiv Detail & Related papers (2022-05-29T13:14:53Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation [62.96628432641806]
Scene Graph Generation aims to first encode the visual contents within the given image and then parse them into a compact summary graph.
We first present a novel Stacked Hybrid-Attention network, which facilitates the intra-modal refinement as well as the inter-modal interaction.
We then devise an innovative Group Collaborative Learning strategy to optimize the decoder.
arXiv Detail & Related papers (2022-03-18T09:14:13Z)
- Deepened Graph Auto-Encoders Help Stabilize and Enhance Link Prediction [11.927046591097623]
Link prediction is a relatively under-studied graph learning task, with current state-of-the-art models based on one- or two-layer shallow graph auto-encoder (GAE) architectures.
In this paper, we focus on addressing a limitation of current methods for link prediction, which can only use shallow GAEs and variational GAEs.
Our proposed methods innovatively incorporate standard auto-encoders (AEs) into the architectures of GAEs, where standard AEs are leveraged to learn essential, low-dimensional representations by seamlessly integrating the adjacency information and node features.
arXiv Detail & Related papers (2021-03-21T14:43:10Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.