Predicting Influential Higher-Order Patterns in Temporal Network Data
- URL: http://arxiv.org/abs/2107.12100v1
- Date: Mon, 26 Jul 2021 10:44:46 GMT
- Title: Predicting Influential Higher-Order Patterns in Temporal Network Data
- Authors: Christoph Gote and Vincenzo Perri and Ingo Scholtes
- Abstract summary: We propose eight centrality measures based on MOGen, a multi-order generative model that accounts for all paths up to a maximum distance but disregards paths at higher distances.
We show that MOGen consistently outperforms both the network model and path-based prediction.
- Score: 2.5782420501870287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Networks are frequently used to model complex systems comprised of
interacting elements. While links capture the topology of direct interactions,
the true complexity of many systems originates from higher-order patterns in
paths by which nodes can indirectly influence each other. Path data,
representing ordered sequences of consecutive direct interactions, can be used
to model these patterns. However, to avoid overfitting, such models should only
consider those higher-order patterns for which the data provide sufficient
statistical evidence. On the other hand, we hypothesise that network models,
which capture only direct interactions, underfit higher-order patterns present
in data. Consequently, both approaches are likely to misidentify influential
nodes in complex networks. We address this issue by proposing eight
centrality measures based on MOGen, a multi-order generative model that
accounts for all paths up to a maximum distance but disregards paths at higher
distances. We compare MOGen-based centralities to equivalent measures for
network models and path data in a prediction experiment where we aim to
identify influential nodes in out-of-sample data. Our results show strong
evidence supporting our hypothesis. MOGen consistently outperforms both the
network model and path-based prediction. We further show that the performance
difference between MOGen and the path-based approach disappears if we have
sufficient observations, confirming that the error is due to overfitting.
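To make the idea concrete, here is a minimal sketch of a multi-order path model and a simple path-based node ranking, assuming toy path data and a maximum order of 2; it illustrates the general approach only and is not the authors' MOGen implementation or one of their eight centrality measures.

```python
from collections import Counter

# Toy path data: each tuple is an observed path (ordered sequence of nodes).
# In the paper's setting these would be extracted from temporal network data.
paths = [("a", "b", "c"), ("a", "b", "d"), ("c", "b", "d"), ("a", "b", "c", "e")]

MAX_ORDER = 2  # maximum memory/distance the multi-order model accounts for

# Multi-order transition counts: for order k, a "higher-order node" is the
# tuple of the k most recent nodes, and we count transitions to the next node.
transitions = {k: Counter() for k in range(1, MAX_ORDER + 1)}
for path in paths:
    for k in range(1, MAX_ORDER + 1):
        for i in range(len(path) - k):
            history = path[i:i + k]   # higher-order node (length-k memory)
            nxt = path[i + k]         # node reached next
            transitions[k][(history, nxt)] += 1

# A simple path-based "betweenness" proxy: how often a node appears as an
# intermediate node on observed paths (illustrative only).
betweenness = Counter()
for path in paths:
    for node in path[1:-1]:
        betweenness[node] += 1

print("order-2 transitions:", dict(transitions[2]))
print("path betweenness:", betweenness.most_common())
```

In the paper, centralities computed on such a multi-order model are used to rank nodes, and the ranking is evaluated against the influential nodes observed in out-of-sample path data.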
Related papers
- SPHINX: Structural Prediction using Hypergraph Inference Network [19.853413818941608]
We introduce Structural Prediction using Hypergraph Inference Network (SPHINX), a model that learns to infer a latent hypergraph structure in an unsupervised way.
We show that recent advances in k-subset sampling provide a suitable tool for producing discrete hypergraph structures.
The resulting model can generate the higher-order structure necessary for any modern hypergraph neural network.
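The k-subset sampling step can be pictured with a Gumbel-top-k draw that selects a discrete set of k nodes per latent hyperedge from learned scores; this is a generic sketch under assumed shapes, not SPHINX's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hyperedge(scores: np.ndarray, k: int) -> np.ndarray:
    """Draw a k-subset of nodes via the Gumbel-top-k trick.

    Treating the scores as logits, adding Gumbel noise and taking the top-k
    indices yields a random k-subset that follows the node scores.
    """
    gumbel = -np.log(-np.log(rng.random(scores.shape)))
    return np.argsort(scores + gumbel)[-k:]

# Toy example: 8 nodes, 3 latent hyperedges of size 3; the scores would
# normally be produced by a neural network from node features.
node_scores = rng.normal(size=(3, 8))
incidence = np.zeros((3, 8), dtype=int)
for e, scores in enumerate(node_scores):
    incidence[e, sample_hyperedge(scores, k=3)] = 1

print(incidence)  # discrete hypergraph incidence matrix (hyperedges x nodes)
```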
arXiv Detail & Related papers (2024-10-04T07:49:57Z)
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- Hierarchical Joint Graph Learning and Multivariate Time Series Forecasting [0.16492989697868887]
We introduce a method of representing multivariate signals as nodes in a graph with edges indicating interdependency between them.
We leverage graph neural networks (GNN) and attention mechanisms to efficiently learn the underlying relationships within the time series data.
The effectiveness of our proposed model is evaluated across various real-world benchmark datasets designed for long-term forecasting tasks.
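A minimal sketch of the graph-construction step described above: interdependency edges from correlations plus a single attention-weighted aggregation. The threshold, shapes, and aggregation rule are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multivariate signal: 5 variables observed over 200 time steps.
series = rng.normal(size=(5, 200)).cumsum(axis=1)

# Edges indicate interdependency between variables; here we use the absolute
# Pearson correlation and keep edges above a threshold.
corr = np.corrcoef(series)
adjacency = (np.abs(corr) > 0.3).astype(float)
np.fill_diagonal(adjacency, 0.0)

# One attention-weighted aggregation step: each node attends to its
# neighbours with weights derived from the correlation strengths.
attn = np.where(adjacency > 0, np.exp(np.abs(corr)), 0.0)
attn /= np.clip(attn.sum(axis=1, keepdims=True), 1e-12, None)

node_features = series[:, -10:]    # last 10 observations per variable
aggregated = attn @ node_features  # neighbour-aggregated features
print(aggregated.shape)            # (5, 10)
```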
arXiv Detail & Related papers (2023-11-21T14:24:21Z)
- Inferring effective couplings with Restricted Boltzmann Machines [3.150368120416908]
Generative models attempt to encode correlations observed in the data at the level of the Boltzmann weight associated with an energy function in the form of a neural network.
We propose a solution by implementing a direct mapping between the Restricted Boltzmann Machine and an effective Ising spin Hamiltonian.
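One simple way to picture such a mapping is the small-weight expansion for an RBM with +/-1 units: marginalising out the hidden layer yields effective fields and pairwise Ising couplings. This is a generic second-order approximation for illustration, not necessarily the direct mapping derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n_visible, n_hidden = 6, 4
W = 0.1 * rng.normal(size=(n_visible, n_hidden))  # RBM weights (assumed small)
c = 0.1 * rng.normal(size=n_hidden)               # hidden biases
b = 0.1 * rng.normal(size=n_visible)              # visible biases

# Marginalising the +/-1 hidden units gives
#   H_eff(s) = -sum_i b_i s_i - sum_a log 2cosh(c_a + sum_i W_ia s_i).
# Expanding log cosh to second order in W yields effective fields and
# pairwise couplings of an Ising Hamiltonian:
h_eff = b + W @ np.tanh(c)                   # effective fields
J_eff = (W * (1.0 - np.tanh(c) ** 2)) @ W.T  # effective pair couplings
np.fill_diagonal(J_eff, 0.0)                 # self-terms are constants

print("effective fields:", np.round(h_eff, 3))
print("effective couplings:\n", np.round(J_eff, 3))
```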
arXiv Detail & Related papers (2023-09-05T14:55:09Z)
- Neural Temporal Point Process for Forecasting Higher Order and Directional Interactions [10.803714426078642]
We propose a deep neural network-based model, Directed HyperNode Temporal Point Process, for directed hyperedge event forecasting.
Our proposed technique reduces the search space by initially forecasting the nodes at which events will be observed.
Based on these, it generates candidate hyperedges, which are then used by a hyperedge predictor to identify the ground truth.
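A schematic of the two-stage idea described above: forecast which nodes are active, then enumerate candidate directed hyperedges over them for a predictor to score. The node scores, threshold, and scoring function here are placeholders, not the model's components.

```python
from itertools import combinations

# Stage 1 (assumed output of a temporal point process): per-node scores for
# being involved in the next event.
node_scores = {"a": 0.9, "b": 0.7, "c": 0.2, "d": 0.8, "e": 0.1}
active = [n for n, s in node_scores.items() if s >= 0.5]  # forecast nodes

# Stage 2: generate candidate directed hyperedges (source set -> target set)
# over the forecast nodes; a hyperedge predictor would then score these.
candidates = []
for k_src in range(1, len(active)):
    for src in combinations(active, k_src):
        rest = [n for n in active if n not in src]
        for k_tgt in range(1, len(rest) + 1):
            for tgt in combinations(rest, k_tgt):
                candidates.append((set(src), set(tgt)))

# Placeholder scoring: rank candidates by the summed node scores.
ranked = sorted(candidates,
                key=lambda e: sum(node_scores[n] for n in e[0] | e[1]),
                reverse=True)
print(len(candidates), "candidates; top:", ranked[0])
```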
arXiv Detail & Related papers (2023-01-28T14:32:14Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- How Well Do Sparse ImageNet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large-scale heterogeneous network representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct the heterogeneous neighborhood to handle the unbalanced distributions.
We conduct systematic evaluations of the proposed framework on two challenging datasets: Amazon and Alibaba.
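For illustration, here is a metapath-guided random walk on a toy heterogeneous graph. The node types, metapath, and graph are made up, and the MSM model additionally combines walks from multiple metapaths, so this is only a sketch of the walk-generation idea.

```python
import random

random.seed(0)

# Toy heterogeneous graph: users (U), items (I), categories (C).
node_type = {"u1": "U", "u2": "U", "i1": "I", "i2": "I", "c1": "C"}
edges = {"u1": ["i1", "i2"], "u2": ["i2"],
         "i1": ["u1", "c1"], "i2": ["u1", "u2", "c1"],
         "c1": ["i1", "i2"]}

def metapath_walk(start, metapath, length):
    """Random walk that only follows neighbours matching the metapath.

    The metapath type sequence (e.g. U-I-C-I) is cycled as the walk proceeds.
    """
    walk = [start]
    for step in range(1, length):
        wanted = metapath[step % len(metapath)]
        options = [n for n in edges[walk[-1]] if node_type[n] == wanted]
        if not options:
            break
        walk.append(random.choice(options))
    return walk

print(metapath_walk("u1", metapath=["U", "I", "C", "I"], length=8))
```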
arXiv Detail & Related papers (2020-07-19T22:50:20Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- PushNet: Efficient and Adaptive Neural Message Passing [1.9121961872220468]
Message passing neural networks have recently evolved into a state-of-the-art approach to representation learning on graphs.
Existing methods perform synchronous message passing along all edges in multiple subsequent rounds.
We consider a novel asynchronous message passing approach where information is pushed only along the most relevant edges until convergence.
arXiv Detail & Related papers (2020-03-04T18:15:30Z)
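The push-based idea behind the last entry can be sketched with a propagation loop that only processes nodes whose residual mass exceeds a tolerance, so information is pushed along the most relevant edges until convergence. This is a generic sketch in the spirit of push-based personalized PageRank, not PushNet's learned message-passing operator.

```python
from collections import deque

# Toy undirected graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}

def push_propagate(source, alpha=0.15, tol=1e-4):
    """Asynchronously push probability mass from `source` until convergence.

    Only nodes whose residual exceeds `tol` (scaled by degree) are processed,
    so computation concentrates on the most relevant parts of the graph.
    """
    estimate = {v: 0.0 for v in graph}
    residual = {v: 0.0 for v in graph}
    residual[source] = 1.0
    queue = deque([source])
    while queue:
        v = queue.popleft()
        r = residual[v]
        if r <= tol * len(graph[v]):
            continue
        estimate[v] += alpha * r      # absorb a fraction of the mass locally
        residual[v] = 0.0
        share = (1.0 - alpha) * r / len(graph[v])
        for u in graph[v]:            # push the rest to the neighbours
            residual[u] += share
            if residual[u] > tol * len(graph[u]):
                queue.append(u)
    return estimate

print(push_propagate(source=0))
```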
This list is automatically generated from the titles and abstracts of the papers on this site.