Fast inference of latent space dynamics in huge relational event
networks
- URL: http://arxiv.org/abs/2303.17460v1
- Date: Wed, 29 Mar 2023 15:18:56 GMT
- Title: Fast inference of latent space dynamics in huge relational event
networks
- Authors: Igor Artico and Ernst Wit
- Abstract summary: We propose a likelihood-based algorithm that can deal with huge event networks.
In this work we propose a hierarchical strategy for inferring network community dynamics embedded into an interpretable latent space.
To make the framework feasible for large networks we borrow from machine learning optimization methodology.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relational events are a type of social interaction, sometimes
referred to as dynamic networks. Their dynamics typically depend on emerging
patterns, so-called endogenous variables, or on external forces, referred to as
exogenous variables. Comprehensive information on the actors in the network,
especially for huge networks, is rare, however. A latent space approach in
network analysis has been a popular way to account for unmeasured covariates
that drive network configurations. Bayesian and EM-type algorithms have
been proposed for inferring the latent space, but both the sheer size of many
social network applications and the dynamic nature of the process, and
therefore of the latent space, make computations prohibitively expensive. In this
work we propose a likelihood-based algorithm that can deal with huge relational
event networks. We propose a hierarchical strategy for inferring network
community dynamics embedded into an interpretable latent space. Node dynamics
are described by smooth spline processes. To make the framework feasible for
large networks we borrow from machine learning optimization methodology.
Model-based clustering is carried out via a convex clustering penalization,
encouraging shared trajectories for ease of interpretation. We propose a
model-based approach for separating macro- and microstructures and perform a
hierarchical analysis within successive hierarchies. The method can fit
millions of nodes on a public Colab GPU in a few minutes. The code and a
tutorial are available in a GitHub repository.
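The convex clustering penalization described in the abstract can be illustrated with a minimal sketch. The objective below (a squared data-fit term plus a sum of pairwise Euclidean fusion penalties on the cluster centroids) is the standard convex clustering formulation; the plain subgradient-descent solver, the penalty weight `lam`, and the toy two-group data are illustrative assumptions, not the paper's actual optimizer.

```python
import numpy as np

def convex_clustering(X, lam, n_iter=2000, lr=0.01):
    """Minimise 0.5 * sum_i ||x_i - u_i||^2 + lam * sum_{i<j} ||u_i - u_j||_2
    by plain subgradient descent. The fusion penalty pulls centroids u_i
    together, so nearby points share a trajectory (illustrative sketch only)."""
    n, _ = X.shape
    U = X.copy()
    for _ in range(n_iter):
        grad = U - X  # gradient of the data-fit term
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                diff = U[i] - U[j]
                nrm = np.linalg.norm(diff)
                if nrm > 1e-12:  # subgradient of the fusion penalty
                    grad[i] += lam * diff / nrm
        U -= lr * grad
    return U

rng = np.random.default_rng(0)
# two tight groups of noisy 2-D "node positions"
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
U = convex_clustering(X, lam=0.05)
```

As `lam` grows, the centroids within each group fuse toward a shared value while the two groups remain separated, which is the "shared trajectories for ease of interpretation" effect the abstract refers to.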
Related papers
- Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation [61.89682310797067]
We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
Our experiments demonstrate the effectiveness of DHMP, achieving 22.7% improvement on average compared to recent fixed-hierarchy message passing networks.
arXiv Detail & Related papers (2024-10-03T15:18:00Z) - Bayesian Detection of Mesoscale Structures in Pathway Data on Graphs [0.0]
Mesoscale structures are an integral part of the abstraction and analysis of complex systems.
They can represent communities in social or citation networks, roles in corporate interactions, or core-periphery structures in transportation networks.
We derive a Bayesian approach that simultaneously models the optimal partitioning of nodes in groups and the optimal higher-order network dynamics.
arXiv Detail & Related papers (2023-01-16T12:45:33Z) - On Optimizing the Communication of Model Parallelism [74.15423270435949]
We study a novel and important communication pattern in large-scale model-parallel deep learning (DL).
In cross-mesh resharding, a sharded tensor needs to be sent from a source device mesh to a destination device mesh.
We propose two contributions to address cross-mesh resharding: an efficient broadcast-based communication system, and an "overlapping-friendly" pipeline schedule.
arXiv Detail & Related papers (2022-11-10T03:56:48Z) - Learning with latent group sparsity via heat flow dynamics on networks [5.076419064097734]
Group or cluster structure on explanatory variables in machine learning problems is a very general phenomenon.
We contribute an approach to learning under such group structure, that does not require prior information on the group identities.
We demonstrate a procedure to construct such a network based on the available data.
arXiv Detail & Related papers (2022-01-20T17:45:57Z) - Network Clustering for Latent State and Changepoint Detection [0.0]
We propose a convex approach for the task of network clustering.
We provide an efficient algorithm for convex network clustering and demonstrate its effectiveness on synthetic examples.
arXiv Detail & Related papers (2021-11-01T21:51:45Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - Online Estimation and Community Detection of Network Point Processes for
Event Streams [12.211623200731788]
A common goal in network modeling is to uncover the latent community structure present among nodes.
We propose a fast online variational inference algorithm for estimating the latent structure underlying dynamic event arrivals on a network.
We demonstrate that online inference can obtain comparable performance, in terms of community recovery, to non-online variants.
arXiv Detail & Related papers (2020-09-03T15:39:55Z) - A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network
Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large-scale heterogeneous network representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct the heterogeneous neighborhood to handle the unbalanced distributions.
We conduct systematical evaluations for the proposed framework on two challenging datasets: Amazon and Alibaba.
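A metapath-based random walk, as mentioned in the MSM summary above, constrains each step to follow a repeating node-type pattern. The sketch below is an illustrative reading of that idea; the dictionary-based graph encoding and the toy user-item example are assumptions, not the paper's actual sampler.

```python
import random

def metapath_walk(start, metapath, neighbors, length):
    """Random walk constrained to a node-type pattern (a metapath).
    `neighbors[(node, t)]` lists neighbours of `node` whose type is `t`.
    Illustrative sketch only; the MSM paper's sampler may differ."""
    walk = [start]
    for step in range(length):
        wanted = metapath[(step + 1) % len(metapath)]  # type required next
        candidates = neighbors.get((walk[-1], wanted), [])
        if not candidates:                             # dead end: stop early
            break
        walk.append(random.choice(candidates))
    return walk

# toy user-item graph following the metapath User -> Item -> User -> ...
neighbors = {
    ("U1", "I"): ["I1"],
    ("I1", "U"): ["U2"],
    ("U2", "I"): ["I1"],
}
walk = metapath_walk("U1", ["U", "I"], neighbors, length=3)
```

Restricting the walk to a type pattern is what lets such models build heterogeneous neighborhoods instead of mixing all node types indiscriminately.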
arXiv Detail & Related papers (2020-07-19T22:50:20Z) - Modeling Dynamic Heterogeneous Network for Link Prediction using
Hierarchical Attention with Temporal RNN [16.362525151483084]
We propose a novel dynamic heterogeneous network embedding method, termed as DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns.
We benchmark our method on four real-world datasets for the task of link prediction.
arXiv Detail & Related papers (2020-04-01T17:16:47Z) - Learning Dynamic Routing for Semantic Segmentation [86.56049245100084]
This paper studies a conceptually new method to alleviate the scale variance in semantic representation, named dynamic routing.
The proposed framework generates data-dependent routes, adapting to the scale distribution of each image.
To this end, a differentiable gating function, called soft conditional gate, is proposed to select scale transform paths on the fly.
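The "soft conditional gate" idea above (a differentiable, data-dependent blend between candidate scale-transform paths) can be sketched minimally as follows. The sigmoid gate, the linear gating weights `w`, and the two stand-in scaling branches are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_conditional_gate(x, w, paths):
    """Blend two candidate transform paths with a differentiable,
    input-dependent gate (illustrative reading of the idea, not the paper's code)."""
    g = sigmoid(x @ w)  # scalar gate in (0, 1), depends on the input
    return g * paths[0](x) + (1.0 - g) * paths[1](x)

x = np.array([1.0, -2.0, 0.5])
w = np.array([0.3, 0.1, -0.2])
up = lambda v: 2.0 * v    # stand-ins for up-/down-scaling branches
down = lambda v: 0.5 * v
# with these toy weights x @ w == 0, so the gate is exactly 0.5
y = soft_conditional_gate(x, w, (up, down))
```

Because the gate is a smooth function of the input, the route choice trains end-to-end with gradient descent rather than requiring a discrete path search.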
arXiv Detail & Related papers (2020-03-23T17:22:14Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.