Predicting the structure of dynamic graphs
- URL: http://arxiv.org/abs/2401.04280v2
- Date: Thu, 25 Jul 2024 01:31:45 GMT
- Title: Predicting the structure of dynamic graphs
- Authors: Sevvandi Kandanaarachchi, Ziqi Xu, Stefan Westerlund,
- Abstract summary: We forecast the structure of a graph at future time steps incorporating unseen, new nodes and edges.
We use time series forecasting methods to predict the node degree at future time points and combine these forecasts with flux balance analysis.
We evaluate this approach using synthetic and real-world datasets and demonstrate its utility and applicability.
- Score: 3.035039100561926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many aspects of graphs have been studied in depth. However, forecasting the structure of a graph at future time steps incorporating unseen, new nodes and edges has not gained much attention. In this paper, we present such an approach. Using a time series of graphs, we forecast graphs at future time steps. We use time series forecasting methods to predict the node degree at future time points and combine these forecasts with flux balance analysis -- a linear programming method used in biochemistry -- to obtain the structure of future graphs. We evaluate this approach using synthetic and real-world datasets and demonstrate its utility and applicability.
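The pipeline the abstract describes (per-node degree forecasting followed by a flux-balance-style linear program that assigns edges subject to the forecast degrees) can be illustrated with a minimal sketch. The naive trend-based forecast, the candidate-edge set, the rounding threshold, and the use of scipy.optimize.linprog are illustrative assumptions; the paper's actual formulation may differ.

```python
# Minimal sketch: forecast node degrees, then pick edges with an FBA-style LP.
# The forecasting rule and LP setup are assumptions, not the paper's exact method.
import itertools
import numpy as np
from scipy.optimize import linprog

def forecast_degrees(degree_history):
    """Forecast each node's next degree as last value plus mean recent change."""
    hist = np.asarray(degree_history, dtype=float)   # shape (time, n_nodes)
    trend = np.diff(hist, axis=0).mean(axis=0) if len(hist) > 1 else 0.0
    return np.clip(hist[-1] + trend, 0, None)

def predict_edges(n_nodes, forecast_deg):
    """FBA-style LP: maximise total edge mass while each node's incident
    edges stay within its forecast degree."""
    candidates = list(itertools.combinations(range(n_nodes), 2))
    incidence = np.zeros((n_nodes, len(candidates)))
    for j, (u, v) in enumerate(candidates):
        incidence[u, j] = incidence[v, j] = 1.0
    res = linprog(c=-np.ones(len(candidates)),        # maximise sum of edge variables
                  A_ub=incidence, b_ub=forecast_deg,
                  bounds=[(0, 1)] * len(candidates))
    return [e for e, x in zip(candidates, res.x) if x > 0.5]  # round to edges

degree_history = [[2, 1, 1, 0], [2, 2, 1, 1], [3, 2, 2, 1]]   # toy degree series
print(predict_edges(4, forecast_degrees(degree_history)))
```

In this reading, candidate edges play the role that reactions play in flux balance analysis and the forecast degrees act as the LP's capacity bounds; unseen, new nodes could be handled by adding rows (and their forecast degrees) to the constraint matrix.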
Related papers
- Parametric Graph Representations in the Era of Foundation Models: A Survey and Position [69.48708136448694]
Graphs have been widely used in the past decades of big data and AI to model comprehensive relational data.
Identifying meaningful graph laws can significantly enhance the effectiveness of various applications.
arXiv Detail & Related papers (2024-10-16T00:01:31Z) - Benchmarking Graph Conformal Prediction: Empirical Analysis, Scalability, and Theoretical Insights [6.801587574420671]
Conformal prediction has become increasingly popular for quantifying the uncertainty associated with machine learning models.
Recent work in graph uncertainty quantification has built upon this approach for conformal graph prediction.
We analyze the design choices made in the literature and discuss the tradeoffs associated with existing methods.
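For readers unfamiliar with the underlying machinery, a minimal split-conformal sketch for classification scores is shown below; the toy data, the 1 - score(true class) nonconformity measure, and the variable names are illustrative assumptions, not the benchmarked graph-specific methods.

```python
# Toy split-conformal prediction for classification scores (not the graph-aware
# variants studied in the benchmark): calibrate a threshold on held-out nodes,
# then return prediction sets covering the true label with probability ~1-alpha.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, alpha = 500, 4, 0.1
cal_scores = rng.dirichlet(np.ones(n_classes), size=n_cal)   # softmax-like scores
cal_labels = rng.integers(0, n_classes, size=n_cal)          # true labels

# Nonconformity = 1 - score of the true class; take a finite-sample quantile.
nonconformity = 1.0 - cal_scores[np.arange(n_cal), cal_labels]
q = np.quantile(nonconformity, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

test_scores = rng.dirichlet(np.ones(n_classes))              # one test node
prediction_set = np.where(1.0 - test_scores <= q)[0]
print(prediction_set)   # labels kept at the target coverage level
```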
arXiv Detail & Related papers (2024-09-26T23:13:51Z) - Sparsity exploitation via discovering graphical models in multi-variate time-series forecasting [1.2762298148425795]
We propose a decoupled training method, which includes a graph generating module and a GNN forecasting module.
First, we use Graphical Lasso (or GraphLASSO) to directly exploit the sparsity pattern from data to build graph structures.
Second, we fit these graph structures and the input data into a Graph Convolutional Recurrent Network (GCRN) to train a forecasting model.
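A minimal sketch of the first step, under the assumption that scikit-learn's GraphicalLasso, a synthetic series, and a hard threshold stand in for the paper's actual pipeline:

```python
# Step 1 in miniature: estimate a sparse precision (inverse covariance) matrix
# with GraphicalLasso and threshold it into an adjacency matrix. The random
# data, alpha, and 0.05 threshold are placeholders, not values from the paper.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
series = rng.normal(size=(200, 5))            # 200 time steps, 5 variables/nodes

model = GraphicalLasso(alpha=0.1).fit(series)
precision = model.precision_                  # sparse conditional dependencies
adjacency = (np.abs(precision) > 0.05).astype(int)
np.fill_diagonal(adjacency, 0)                # drop self-loops
print(adjacency)                              # graph passed to the GCRN forecaster
```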
arXiv Detail & Related papers (2023-06-29T16:48:00Z) - Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs [10.034072706245544]
We propose a Time-aware Graph Structure Learning (TGSL) approach via sequence prediction on temporal graphs.
In particular, it predicts a time-aware context embedding and uses Gumbel-Top-K sampling to select the candidate edges closest to this context embedding.
Experiments on temporal link prediction benchmarks demonstrate that TGSL yields significant gains for the popular TGNs such as TGAT and GraphMixer.
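The Gumbel-Top-K step can be sketched in a few lines: candidate edges are scored against the predicted context embedding and k edges are sampled without replacement in proportion to their scores. The toy embeddings and names below are assumptions for illustration, not TGSL's implementation.

```python
# Gumbel-Top-K in miniature: adding Gumbel noise to scores and taking the
# top k is equivalent to sampling k items without replacement from the
# softmax distribution over the scores.
import numpy as np

def gumbel_top_k(scores, k, rng):
    gumbel_noise = -np.log(-np.log(rng.uniform(size=scores.shape)))
    return np.argsort(scores + gumbel_noise)[::-1][:k]

rng = np.random.default_rng(0)
context = rng.normal(size=16)                 # predicted time-aware context embedding
candidate_edges = rng.normal(size=(100, 16))  # embeddings of candidate edges
scores = candidate_edges @ context            # similarity to the context
print(gumbel_top_k(scores, k=5, rng=rng))     # indices of the selected edges
```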
arXiv Detail & Related papers (2023-06-13T11:34:36Z) - Graph-Level Embedding for Time-Evolving Graphs [24.194795771873046]
Graph representation learning (also known as network embedding) has been extensively researched with varying levels of granularity.
We present a novel method for temporal graph-level embedding that addresses this gap.
arXiv Detail & Related papers (2023-06-01T01:50:37Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Graph Condensation via Receptive Field Distribution Matching [61.71711656856704]
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z) - Sparse Graph Learning from Spatiotemporal Time Series [16.427698929775023]
We propose a graph learning framework that learns the relational dependencies as distributions over graphs.
We show that the proposed solution can be used as a stand-alone graph identification procedure as well as a graph learning component of an end-to-end forecasting architecture.
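As a toy illustration of "distributions over graphs", each potential edge can be given a probability and concrete graphs drawn by sampling; the random logits below stand in for parameters the framework would learn end to end.

```python
# Toy "distribution over graphs": Bernoulli edge probabilities derived from
# logits, and one sampled adjacency matrix that a downstream forecaster could
# consume. The logits are random placeholders, not learned values.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 5
edge_logits = rng.normal(size=(n_nodes, n_nodes))     # learned in practice
edge_probs = 1.0 / (1.0 + np.exp(-edge_logits))       # Bernoulli parameters
np.fill_diagonal(edge_probs, 0.0)                     # no self-loops

adjacency = rng.binomial(1, edge_probs)               # one sample from the distribution
print(adjacency)
```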
arXiv Detail & Related papers (2022-05-26T17:02:43Z) - Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities [128.55790219377315]
Graph neural networks have emerged as a leading architecture for many graph-level tasks.
Graph pooling is indispensable for obtaining a holistic graph-level representation of the whole graph.
arXiv Detail & Related papers (2022-04-15T04:02:06Z) - GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.