Time-invariant degree growth in preferential attachment network models
- URL: http://arxiv.org/abs/2001.08132v1
- Date: Wed, 22 Jan 2020 16:31:11 GMT
- Title: Time-invariant degree growth in preferential attachment network models
- Authors: Jun Sun, Matúš Medo, Steffen Staab
- Abstract summary: We study the degree dynamics in a class of network models where preferential attachment is combined with node fitness and aging.
We propose an analytical framework based on this time-invariance and show that it is self-consistent for only two special network growth forms: uniform and exponential growth.
- Score: 8.929656934088989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Preferential attachment drives the evolution of many complex networks. Its
analytical studies mostly consider the simplest case of a network that grows
uniformly in time despite the accelerating growth of many real networks.
Motivated by the observation that the average degree growth of nodes is
time-invariant in empirical network data, we study the degree dynamics in the
relevant class of network models where preferential attachment is combined with
heterogeneous node fitness and aging. We propose a novel analytical framework
based on the time-invariance of the studied systems and show that it is
self-consistent only for two special network growth forms: the uniform and
exponential network growth. Conversely, the breaking of such time-invariance
explains the winner-takes-all effect in some model settings, revealing the
connection between the Bose-Einstein condensation in the Bianconi-Barabási
model and similar gelation in superlinear preferential attachment. Aging is
necessary to reproduce realistic node degree growth curves and can prevent the
winner-takes-all effect under weak conditions. Our results are verified by
extensive numerical simulations.
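To make the studied model class concrete, below is a minimal simulation sketch in Python. It grows a uniformly growing network (one new node per time step) in which an arriving node attaches each of its m links to existing node i with probability proportional to degree_i × fitness_i × an aging factor. The Uniform(0, 1) fitness distribution, the power-law aging kernel (age + 1)^(-theta), and all parameter values are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_network(n=3000, m=2, theta=1.5):
    """Grow a uniformly growing network: at each time step one node
    arrives and attaches m links to existing nodes, node i being
    chosen with probability proportional to

        degree_i * fitness_i * (age_i + 1) ** (-theta)

    The fitness distribution and aging kernel here are assumptions
    made for illustration, not the paper's specific choices.
    """
    degree = np.zeros(n)
    fitness = rng.uniform(size=n)        # heterogeneous node fitness
    birth = np.arange(n, dtype=float)    # arrival time of each node

    degree[:m + 1] = m                   # fully connected seed clique
    for t in range(m + 1, n):
        age = t - birth[:t]
        weight = degree[:t] * fitness[:t] * (age + 1.0) ** (-theta)
        targets = rng.choice(t, size=m, replace=False,
                             p=weight / weight.sum())
        degree[targets] += 1             # existing nodes gain links
        degree[t] = m                    # the newcomer starts with m
    return degree, fitness, birth

degree, fitness, birth = grow_network()
print(f"largest hub holds {degree.max() / degree.sum():.1%} of total degree")
```

One way to probe the time-invariance discussed above is to compare the degree growth curves of node cohorts born at different times as functions of node age; rerunning with theta = 0 (aging switched off) gives a rough feel for how aging tames the oldest hubs, consistent with the abstract's statement that aging can prevent the winner-takes-all effect.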
Related papers
- Multi-Agent Q-Learning Dynamics in Random Networks: Convergence due to Exploration and Sparsity [5.925608009772727]
We study Q-learning dynamics in network polymatrix games where the network structure is drawn from random graph models.
In each setting, we establish sufficient conditions under which the agents' joint strategies converge to a unique equilibrium.
We validate our theoretical findings through numerical simulations and demonstrate that convergence can be reliably achieved in many-agent systems.
arXiv Detail & Related papers (2025-03-13T09:16:51Z)
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, explanation consistency, to adaptively reweight the training samples during model learning.
The framework then promotes model learning by paying closer attention to those training samples whose explanations differ the most.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Learning the mechanisms of network growth [42.1340910148224]
We propose a novel model-selection method for dynamic networks.
Data is generated by simulating nine state-of-the-art random graph models.
Proposed features are easy to compute, analytically tractable, and interpretable.
arXiv Detail & Related papers (2024-03-31T20:38:59Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- From Cubes to Networks: Fast Generic Model for Synthetic Networks Generation [15.070865479516696]
We propose FGM, a fast generic model converting cubes into interrelated networks.
We show that FGM can cost-efficiently generate networks whose typical patterns align more closely with those of real-world networks.
Results show that FGM is resilient to input perturbations, producing networks with consistent fine properties.
arXiv Detail & Related papers (2022-11-05T04:24:20Z)
- DAMNETS: A Deep Autoregressive Model for Generating Markovian Network Time Series [6.834250594353335]
Generative models for network time series (also known as dynamic graphs) have tremendous potential in fields such as epidemiology, biology and economics.
Here we introduce DAMNETS, a scalable deep generative model for network time series.
arXiv Detail & Related papers (2022-03-28T18:14:04Z)
- SimHawNet: A Modified Hawkes Process for Temporal Network Simulation [12.403827785443928]
We propose a new framework for generative models of continuous-time temporal networks.
SimHawNet enables simulation of the evolution of temporal networks in continuous time.
arXiv Detail & Related papers (2022-03-14T16:40:57Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- Stochastic Recurrent Neural Network for Multistep Time Series Forecasting [0.0]
We leverage advances in deep generative models and the concept of state space models to propose an adaptation of the recurrent neural network for time series forecasting.
Our model preserves the architectural workings of a recurrent neural network for which all relevant information is encapsulated in its hidden states, and this flexibility allows our model to be easily integrated into any deep architecture for sequential modelling.
arXiv Detail & Related papers (2021-04-26T01:43:43Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this kernel-to-rich transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.