A Diffusive Data Augmentation Framework for Reconstruction of Complex Network Evolutionary History
- URL: http://arxiv.org/abs/2501.06485v1
- Date: Sat, 11 Jan 2025 08:39:33 GMT
- Title: A Diffusive Data Augmentation Framework for Reconstruction of Complex Network Evolutionary History
- Authors: En Xu, Can Rong, Jingtao Ding, Yong Li
- Abstract summary: The generation time of edges provides insights into the historical evolution of various networked complex systems.
Existing methods are capable of predicting the generation times of remaining edges given a partial temporal network but often perform poorly in cross-network prediction tasks.
In this work, we adopt a comparative paradigm-based framework that fuses multiple networks for training, enabling cross-network learning of the relationship between network structure and edge generation times.
- Score: 8.545760548231584
- Abstract: The evolutionary processes of complex systems contain critical information regarding their functional characteristics. The generation time of edges provides insights into the historical evolution of various networked complex systems, such as protein-protein interaction networks, ecosystems, and social networks. Recovering these evolutionary processes holds significant scientific value, for example in interpreting the evolution of protein-protein interaction networks. However, existing methods can predict the generation times of the remaining edges given a partial temporal network, yet they often perform poorly in cross-network prediction tasks and frequently fail to recover edge generation times in static networks that lack timestamps. In this work, we adopt a comparative paradigm-based framework that fuses multiple networks for training, enabling cross-network learning of the relationship between network structure and edge generation times. Compared to separate training, this approach yields an average accuracy improvement of 16.98%. Furthermore, given the difficulty of collecting temporal networks, we propose a novel diffusion-model-based generation method to produce a large number of temporal networks. Jointly training on real temporal networks combined with generated ones yields an additional average accuracy improvement of 5.46%.
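As a concrete illustration of the comparative paradigm, the sketch below trains a pairwise classifier to decide which of two edges appeared first from simple structural features, then recovers a global edge ordering by counting pairwise wins. The features, classifier, and win-counting heuristic are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the comparative (pairwise) paradigm: instead of regressing
# absolute timestamps, learn which of two edges appeared first from structure.
import itertools
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def edge_features(G, e):
    """Simple structural features for an edge (u, v); illustrative only."""
    u, v = e
    common = len(list(nx.common_neighbors(G, u, v)))
    return np.array([G.degree(u), G.degree(v), common])

def pairwise_dataset(G, edge_times):
    """Label 1 iff the first edge of the pair predates the second; both
    orientations of each pair are included so both classes appear."""
    X, y = [], []
    for a, b in itertools.combinations(edge_times, 2):
        fa, fb = edge_features(G, a), edge_features(G, b)
        X.extend([fa - fb, fb - fa])
        y.extend([int(edge_times[a] < edge_times[b]),
                  int(edge_times[b] < edge_times[a])])
    return np.array(X), np.array(y)

# Multiple networks with known edge times (including generated temporal
# networks) can be fused into one such training set for cross-network learning.
G = nx.karate_club_graph()
edge_times = {e: i for i, e in enumerate(G.edges())}  # stand-in timestamps
X, y = pairwise_dataset(G, edge_times)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Recover an ordering on a network: score each edge by its pairwise "wins".
wins = {e: 0 for e in G.edges()}
for a, b in itertools.combinations(G.edges(), 2):
    p = clf.predict_proba([edge_features(G, a) - edge_features(G, b)])[0, 1]
    wins[a if p > 0.5 else b] += 1
recovered_order = sorted(wins, key=wins.get, reverse=True)
```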
Related papers
- Contrastive Representation Learning for Dynamic Link Prediction in Temporal Networks [1.9389881806157312]
We introduce a self-supervised method for learning representations of temporal networks.
We propose a recurrent message-passing neural network architecture for modeling the information flow over time-respecting paths of temporal networks.
The proposed method is tested on Enron, COLAB, and Facebook datasets.
arXiv Detail & Related papers (2024-08-22T22:50:46Z)
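A minimal sketch of a contrastive objective of the kind this paper describes, in the common InfoNCE form; the recurrent message-passing encoder over time-respecting paths is abstracted into precomputed embeddings, and the paper's exact loss may differ.

```python
# InfoNCE-style contrastive loss over two temporal views of the same nodes.
import torch
import torch.nn.functional as F

def info_nce(z_anchor, z_positive, temperature=0.1):
    """z_anchor, z_positive: (batch, dim) node embeddings from two views;
    each anchor's positive sits in the same row, other rows act as negatives."""
    z_a = F.normalize(z_anchor, dim=1)
    z_p = F.normalize(z_positive, dim=1)
    logits = z_a @ z_p.t() / temperature   # (batch, batch) cosine similarities
    labels = torch.arange(z_a.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, labels)
```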
- Symbolic Regression of Dynamic Network Models [0.0]
We introduce a novel formulation of a network generator and a parameter-free fitness function to evaluate the generated network.
We extend this approach by modifying generator semantics to create and retrieve rules for time-varying networks.
The framework was then used on three empirical datasets: subway networks of major cities, regions of street networks, and semantic co-occurrence networks of Artificial Intelligence literature.
arXiv Detail & Related papers (2023-12-15T00:34:45Z)
- Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z)
- Generating fine-grained surrogate temporal networks [12.7211231166069]
We propose a novel and simple method for generating surrogate temporal networks.
Our method decomposes the input network into star-like structures evolving in time.
Then those structures are used as building blocks to generate a surrogate temporal network.
arXiv Detail & Related papers (2022-05-18T09:38:22Z)
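The star-decomposition idea can be sketched as follows: split each timestamp's edges into hub-centered stars, then resample those stars as building blocks for a surrogate temporal edge list. Hub selection and sampling below are illustrative assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

def decompose_into_stars(temporal_edges):
    """temporal_edges: iterable of (t, u, v). Returns {(t, hub): set of leaves}."""
    stars = defaultdict(set)
    for t, u, v in temporal_edges:
        hub = min(u, v)  # simplistic hub choice, for illustration only
        stars[(t, hub)].add(max(u, v))
    return stars

def generate_surrogate(stars, horizon):
    """Reassemble randomly sampled stars over a new time horizon."""
    blocks = list(stars.items())
    surrogate = []
    for t in range(horizon):
        (_, hub), leaves = random.choice(blocks)
        surrogate.extend((t, hub, leaf) for leaf in leaves)
    return surrogate

edges = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (2, 3, 4)]
surrogate = generate_surrogate(decompose_into_stars(edges), horizon=3)
```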
- Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes with retrieval of similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z)
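A toy sketch of the fast/slow balance: a slow exponential moving average retains long-horizon knowledge while a short buffer adapts to recent changes. FSNet's actual per-layer adapters and associative memory are not reproduced here.

```python
import numpy as np

class FastSlowForecaster:
    """Toy online forecaster blending a slow and a fast learner."""
    def __init__(self, slow_decay=0.99, fast_window=5, mix=0.5):
        self.slow = None            # exponential moving average (slow knowledge)
        self.slow_decay = slow_decay
        self.recent = []            # short buffer (fast adaptation)
        self.fast_window = fast_window
        self.mix = mix

    def update(self, x):
        """Feed one observation; call at least once before predict()."""
        self.slow = x if self.slow is None else \
            self.slow_decay * self.slow + (1 - self.slow_decay) * x
        self.recent = (self.recent + [x])[-self.fast_window:]

    def predict(self):
        return self.mix * np.mean(self.recent) + (1 - self.mix) * self.slow
```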
- Temporal Network Embedding via Tensor Factorization [13.490625417640658]
The embeddings of temporal networks should encode both graph-structured information and the temporally evolving pattern.
Existing approaches to learning temporally evolving network representations fail to capture the temporal interdependence.
We propose Toffee, a novel approach for temporal network representation learning based on tensor decomposition.
arXiv Detail & Related papers (2021-08-22T20:50:38Z)
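The tensor-factorization setup can be sketched by viewing the temporal network as a (node, node, time) tensor and applying CP decomposition, so nodes and time slices get low-rank factors; Toffee's specific decomposition and training objective may differ.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

n_nodes, n_times, rank = 20, 5, 4
rng = np.random.default_rng(0)

# Toy temporal adjacency tensor: A[u, v, t] = 1 if edge (u, v) exists at time t.
A = (rng.random((n_nodes, n_nodes, n_times)) < 0.1).astype(float)

weights, (U, V, T) = parafac(tl.tensor(A), rank=rank)
# U, V: (n_nodes, rank) node factors; T: (n_times, rank) temporal factors.
```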
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
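A minimal sketch of contrastive instance-pair sampling: pair a target node with its own ego network (positive) or another node's (negative); a discriminator trained on such pairs can then score anomalies by how poorly a node matches its own neighborhood. The pairing scheme is a simplified assumption, and the paper's GNN discriminator is omitted.

```python
import random
import networkx as nx

def sample_instance_pair(G, node, positive=True):
    """Return (node, subgraph): the node's own 1-hop ego network (positive)
    or a random other node's ego network (negative)."""
    center = node if positive else random.choice([n for n in G if n != node])
    return node, nx.ego_graph(G, center, radius=1)

# Anomaly scoring idea: a node whose trained discriminator cannot tell its own
# neighborhood from others' neighborhoods is flagged as anomalous.
```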
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- EPNE: Evolutionary Pattern Preserving Network Embedding [26.06068388979255]
We propose EPNE, a temporal network embedding model preserving evolutionary patterns of the local structure of nodes.
With adequate modeling of temporal information, our model outperforms other competitive methods in various prediction tasks.
arXiv Detail & Related papers (2020-09-24T06:31:14Z)
- On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set size and the model size significantly improves robustness to distributional shift.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
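A heavily simplified sketch of layer-local training in the spirit of such backprop alternatives: each layer receives a locally computed target and updates from its own error, avoiding a deep end-to-end chain rule. This toy two-layer version is not the paper's recursive alignment procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * 0.1   # input -> hidden
W2 = rng.normal(size=(8, 2)) * 0.1   # hidden -> output
lr = 0.05

def train_step(x, y):
    global W1, W2
    h = np.tanh(x @ W1)                  # forward pass
    out = h @ W2
    e_out = out - y                      # error local to the output layer
    W2 -= lr * np.outer(h, e_out)        # output layer learns from its own error
    # Hidden layer gets a locally projected target instead of a deep gradient.
    h_target = h - e_out @ W2.T
    local_err = (h - h_target) * (1 - h ** 2)
    W1 -= lr * np.outer(x, local_err)

# One toy update: map a 4-d input toward a 2-d target.
train_step(rng.normal(size=4), np.array([1.0, -1.0]))
```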
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.