A Hybrid Membership Latent Distance Model for Unsigned and Signed
Integer Weighted Networks
- URL: http://arxiv.org/abs/2308.15293v1
- Date: Tue, 29 Aug 2023 13:30:48 GMT
- Title: A Hybrid Membership Latent Distance Model for Unsigned and Signed
Integer Weighted Networks
- Authors: Nikolaos Nakis, Abdulkadir Çelikkanat, Morten Mørup
- Abstract summary: Graph representation learning (GRL) has become a prominent tool for furthering the understanding of complex networks.
We propose the Hybrid Membership-Latent Distance Model (HM-LDM) by exploring how a Latent Distance Model (LDM) can be constrained to a latent simplex.
- Score: 0.17265013728931003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representation learning (GRL) has become a prominent tool for
furthering the understanding of complex networks, providing methods for network
embedding, link prediction, and node classification. In this paper, we propose
the Hybrid Membership-Latent Distance Model (HM-LDM) by exploring how a Latent
Distance Model (LDM) can be constrained to a latent simplex. By controlling the
edge lengths of the corners of the simplex, the volume of the latent space can
be systematically controlled. Thereby communities are revealed as the space
becomes more constrained, with hard memberships being recovered as the simplex
volume goes to zero. We further explore a recent likelihood formulation for
signed networks utilizing the Skellam distribution to account for signed
weighted networks and extend the HM-LDM to the signed Hybrid Membership-Latent
Distance Model (sHM-LDM). Importantly, the induced likelihood function
explicitly attracts nodes with positive links and deters nodes from having
negative interactions. We demonstrate the utility of HM-LDM and sHM-LDM on
several real networks. We find that the procedures successfully identify
prominent distinct structures, as well as how nodes relate to the extracted
aspects, and deliver favorable link-prediction performance compared to
prominent baselines. Furthermore, the learned soft memberships
enable easily interpretable network visualizations highlighting distinct
patterns.
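To make the two modeling ingredients concrete, here is a minimal NumPy/SciPy sketch: latent positions constrained to a scaled simplex with a distance-decaying Poisson rate (the HM-LDM flavour), and a Skellam likelihood over signed integer weights (the sHM-LDM flavour). The exact functional forms, sign conventions, and all variable names below are illustrative assumptions, not the authors' reference implementation.

```python
# A sketch under the stated assumptions; rates and parameterization are
# illustrative, not the paper's exact specification.
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_v

rng = np.random.default_rng(0)
n, K, delta = 6, 3, 2.0  # nodes, simplex corners (communities), simplex scale

# Simplex-constrained embeddings: a row-wise softmax keeps each latent
# position non-negative with entries summing to delta, i.e. on a scaled
# simplex whose volume is controlled through delta.
theta = rng.normal(size=(n, K))
Z = delta * np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)

def pairwise_dist(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Unsigned HM-LDM flavour: a Poisson rate decaying with latent distance.
beta = 1.0
lam = np.exp(beta - pairwise_dist(Z))

# Signed sHM-LDM flavour: integer weights y_ij = N_plus - N_minus modeled
# with a Skellam distribution. One rate decays with distance (attracting
# positively linked nodes) and one grows with it (pushing negatively
# linked nodes apart); the sign convention is an assumption here.
d = pairwise_dist(Z)
lam_pos = np.exp(1.0 - d)
lam_neg = np.exp(1.0 + d)

def skellam_logpmf(y, l1, l2):
    # log P(y) = -(l1 + l2) + (y / 2) * log(l1 / l2) + log I_|y|(2 sqrt(l1 l2))
    z = 2.0 * np.sqrt(l1 * l2)
    # ive(v, z) = iv(v, z) * exp(-z), so add z back after taking the log.
    return -(l1 + l2) + 0.5 * y * np.log(l1 / l2) + np.log(ive(np.abs(y), z)) + z

y = np.array([[0, 2], [-1, 0]])  # toy signed integer weights
print(skellam_logpmf(y, lam_pos[:2, :2], lam_neg[:2, :2]))
```

Varying delta changes how constrained the latent space is, which is the knob the abstract refers to when describing how communities emerge and memberships harden.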
Related papers
- MFPNet: Multi-scale Feature Propagation Network For Lightweight Semantic Segmentation [5.58363644107113]
We propose a novel lightweight segmentation architecture, called the Multi-scale Feature Propagation Network (MFPNet).
We design a robust Encoder-Decoder structure featuring symmetrical residual blocks that consist of flexible bottleneck residual modules (BRMs).
Benefiting from their capacity to model latent long-range contextual relationships, we leverage Graph Convolutional Networks (GCNs) to facilitate multi-scale feature propagation between the BRM blocks.
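As a rough illustration of GCN-based feature propagation (the BRM blocks and MFPNet's exact graph construction are beyond this sketch; the symmetric normalization below is the standard Kipf-Welling form, assumed here rather than taken from the paper):

```python
# Minimal sketch of one GCN propagation step, H' = ReLU(A_hat @ H @ W),
# using the standard symmetrically normalized adjacency. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 8, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)            # undirected graph
A_tilde = A + np.eye(n)           # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(1)))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

H = rng.normal(size=(n, d_in))    # node (block) features
W = rng.normal(size=(d_in, d_out))
H_next = np.maximum(A_hat @ H @ W, 0.0)  # propagate and transform
print(H_next.shape)
```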
arXiv Detail & Related papers (2023-09-10T02:02:29Z)
- Self-Supervised Node Representation Learning via Node-to-Neighbourhood Alignment [10.879056662671802]
Self-supervised node representation learning aims to learn node representations from unlabelled graphs that rival their supervised counterparts.
In this work, we present a simple yet effective approach to self-supervised node representation learning that aligns the hidden representations of nodes with those of their neighbourhoods.
We learn node representations that achieve promising node classification performance on graph-structured datasets ranging from small to large scale.
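A minimal sketch of the node-to-neighbourhood idea, assuming alignment is scored as cosine similarity against a mean-pooled neighbourhood representation (the pooling choice and loss form are assumptions, not the paper's exact objective):

```python
# Align each node's representation with the mean of its neighbours' hidden
# representations; the negative mean cosine similarity acts as the loss.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 16
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)
H = rng.normal(size=(n, d))                 # hidden node representations

deg = np.maximum(A.sum(1, keepdims=True), 1.0)
N = (A @ H) / deg                           # mean-pooled neighbourhood reps

def unit(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

cos = (unit(H) * unit(N)).sum(1)            # per-node alignment score
loss = -cos.mean()                          # maximise node-neighbourhood agreement
print(loss)
```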
arXiv Detail & Related papers (2023-02-09T13:21:18Z)
- Neighborhood Convolutional Network: A New Paradigm of Graph Neural Networks for Node Classification [12.062421384484812]
The decoupled Graph Convolutional Network (GCN) separates neighborhood aggregation from feature transformation in each convolutional layer.
In this paper, we propose a new paradigm of GCN, termed the Neighborhood Convolutional Network (NCN).
In this way, the model inherits the merit of the decoupled GCN for aggregating neighborhood information while developing much more powerful feature-learning modules.
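A sketch of the decoupling being referred to: neighborhood aggregation is precomputed as powers of the normalized adjacency and only the feature transformation is learned (SGC-style; NCN's actual feature modules are richer than the single linear map assumed here):

```python
# Decoupled GCN: precompute K-hop aggregated features A_hat^K @ X once,
# then learn an arbitrary feature-transformation module on top.
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 6, 8, 2
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
A_tilde = A + np.eye(n)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(1)))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

X = rng.normal(size=(n, d))
X_agg = np.linalg.matrix_power(A_hat, K) @ X   # aggregation, no parameters
W = rng.normal(size=(d, 3))                    # stand-in feature module
logits = X_agg @ W                             # classification head
print(logits.shape)
```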
arXiv Detail & Related papers (2022-11-15T02:02:51Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed the Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent features by enlarging the margin of the decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
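One common reading of "correlation reduction" is a Barlow-Twins-style objective that pushes the cross-view correlation matrix toward the identity; the sketch below shows that generic objective as an illustrative assumption, not ICRN's exact loss:

```python
# Correlation reduction between two augmented views: drive the
# cross-correlation matrix of standardized embeddings toward identity.
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 8
Z1 = rng.normal(size=(n, d))                    # view-1 embeddings
Z2 = Z1 + 0.1 * rng.normal(size=(n, d))         # view-2 (perturbed) embeddings

def standardize(Z):
    return (Z - Z.mean(0)) / Z.std(0)

C = standardize(Z1).T @ standardize(Z2) / n     # d x d cross-correlation
on_diag = ((np.diag(C) - 1.0) ** 2).sum()       # invariance term
off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum()  # redundancy term
loss = on_diag + 0.005 * off_diag
print(loss)
```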
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- A Hierarchical Block Distance Model for Ultra Low-Dimensional Graph Representations [0.0]
This paper proposes a novel scalable graph representation learning method named the Hierarchical Block Distance Model (HBDM).
HBDM accounts for homophily and transitivity by accurately approximating the latent distance model (LDM) throughout the hierarchy.
We evaluate the performance of the HBDM on massive networks consisting of millions of nodes.
arXiv Detail & Related papers (2022-04-12T15:23:12Z)
- DeHIN: A Decentralized Framework for Embedding Large-scale Heterogeneous Information Networks [64.62314068155997]
We present the Decentralized Embedding Framework for Heterogeneous Information Networks (DeHIN) in this paper.
DeHIN presents a context preserving partition mechanism that innovatively formulates a large HIN as a hypergraph.
Our framework then adopts a decentralized strategy to efficiently partition HINs by adopting a tree-like pipeline.
arXiv Detail & Related papers (2022-01-08T04:08:36Z)
- BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNNs) have recently emerged as a new direction in graph learning.
We present Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z)
- Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
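A generic sketch of differentiable quantisation (soft codeword assignment via a softmax over negative distances); d-SNEQ's codebook structure and rank loss are not reproduced, and all names below are illustrative:

```python
# Soft quantisation: each embedding is represented as a softmax-weighted
# combination of codebook vectors, keeping the assignment differentiable.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, tau = 10, 8, 4, 0.5       # nodes, dim, codebook size, temperature
Z = rng.normal(size=(n, d))        # trained embeddings
C = rng.normal(size=(m, d))        # codebook

d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # squared distances
logits = -d2 / tau
P = np.exp(logits - logits.max(1, keepdims=True))
P /= P.sum(1, keepdims=True)       # soft assignments, rows sum to 1
Z_q = P @ C                        # quantised (compressed) embeddings
codes = P.argmax(1)                # hard codes for storage at inference
print(Z_q.shape, codes)
```

Storing the small codebook plus per-node integer codes, rather than dense float vectors, is what yields the compression and retrieval speedups described above.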
arXiv Detail & Related papers (2021-08-20T11:53:05Z)
- MUSE: Multi-faceted Attention for Signed Network Embedding [4.442695760653947]
Signed network embedding is an approach to learn low-dimensional representations of nodes in signed networks with both positive and negative links.
We propose MUSE, a MUlti-faceted attention-based Signed network Embedding framework to tackle this problem.
arXiv Detail & Related papers (2021-04-29T16:09:35Z)
- Interpretable Signed Link Prediction with Signed Infomax Hyperbolic Graph [54.03786611989613]
Signed link prediction in social networks aims to reveal the underlying relationships (i.e., links) among users (i.e., nodes).
We develop a unified framework, termed the Signed Infomax Hyperbolic Graph (SIHG).
In order to model high-order user relations and complex hierarchies, the node embeddings are projected into and measured in a hyperbolic space with lower distortion.
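Distances "measured in a hyperbolic space" are typically computed with the Poincaré-ball metric; the sketch below evaluates it for two embeddings (the ball model is an assumption here, and SIHG's exact projection is not reproduced):

```python
# Poincare-ball distance: d(u, v) = arccosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2))).
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    uu = 1.0 - np.dot(u, u)
    vv = 1.0 - np.dot(v, v)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1.0 + 2.0 * duv / max(uu * vv, eps))

u = np.array([0.1, 0.2])           # points must lie inside the unit ball
v = np.array([-0.3, 0.4])
print(poincare_dist(u, v))         # grows rapidly near the ball boundary
```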
arXiv Detail & Related papers (2020-11-25T05:09:03Z)