Power law dynamics in genealogical graphs
- URL: http://arxiv.org/abs/2010.05463v3
- Date: Fri, 4 Mar 2022 23:10:16 GMT
- Title: Power law dynamics in genealogical graphs
- Authors: Francisco Leonardo Bezerra Martins, José Cláudio do Nascimento
- Abstract summary: We use an algorithm to measure the impact of individuals in several numerical populations and study the dynamics of their evolution.
We show evidence that the observed emergence of power law has a dynamic behavior over time.
We also show evidence that elitism significantly influences the power law scaling factors observed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Several populational networks present complex topologies when implemented in
evolutionary algorithms. A common feature of these topologies is the emergence
of a power law. Power law behavior with different scaling factors can also be
observed in genealogical networks, but we still cannot satisfactorily describe
its dynamics or its relation to population evolution over time. In this paper,
we use an algorithm to measure the impact of individuals in several numerical
populations and study the dynamics of their evolution through nonextensive
statistics. In this way, we show evidence that the observed emergence of power
law has a dynamic behavior over time. This dynamic development can be described
using a family of q-exponential distributions whose parameters are
time-dependent and follow a specific pattern. We also show evidence that
elitism significantly influences the power law scaling factors observed. These
results imply that the different power law shapes and deviations observed in
genealogical networks are static images of a time-dependent dynamic development
that can be satisfactorily described using q-exponential distributions.
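To make the q-exponential family concrete, the sketch below evaluates the Tsallis q-exponential that the abstract refers to; the parameter names q and lam are illustrative and are not the paper's notation. For q > 1 the tail decays as a power law with exponent 1/(q-1), and the ordinary exponential is recovered in the limit q -> 1.

```python
import numpy as np

def q_exponential(x, q, lam=1.0):
    """Tsallis q-exponential kernel e_q(-x / lam).

    For q -> 1 this reduces to exp(-x / lam); for q > 1 the tail
    behaves like x**(-1 / (q - 1)), i.e. a power law whose scaling
    factor is controlled by q.
    """
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(-x / lam)
    base = 1.0 + (q - 1.0) * x / lam
    return np.where(base > 0.0, base ** (-1.0 / (q - 1.0)), 0.0)

# Illustrative check: q = 1.5 implies a tail exponent of 1 / (q - 1) = 2,
# so doubling x deep in the tail roughly quarters the value.
x = np.array([100.0, 200.0])
print(q_exponential(x, q=1.5))
```

Under this reading, the time-dependent parameters mentioned in the abstract would correspond to q and lam drifting as the population evolves, so each generation's impact distribution is one static snapshot of that drift.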
Related papers
- TANGO: Graph Neural Dynamics via Learned Energy and Tangential Flows [17.546965223021786]
We introduce TANGO -- a dynamical-systems-inspired framework for graph representation learning. At the core of our approach is a learnable Lyapunov function over node embeddings. We incorporate a novel tangential component, learned via message passing, that evolves features while maintaining the energy value.
arXiv Detail & Related papers (2025-08-07T06:44:01Z) - Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z) - Dynamic Causal Structure Discovery and Causal Effect Estimation [5.943525863330208]
We develop a new framework to model the dynamic causal graph where the causal relations are allowed to be time-varying.
We propose an algorithm that could provide both past-time estimates and future-time predictions on the causal graphs.
arXiv Detail & Related papers (2025-01-11T12:52:39Z) - Analyzing Neural Scaling Laws in Two-Layer Networks with Power-Law Data Spectra [0.0]
Neural scaling laws describe how the performance of deep neural networks scales with key factors such as training data size, model complexity, and training time.
We employ techniques from statistical mechanics to analyze one-pass gradient descent within a student-teacher framework.
arXiv Detail & Related papers (2024-10-11T17:21:42Z) - Nested replicator dynamics, nested logit choice, and similarity-based learning [56.98352103321524]
We consider a model of learning and evolution in games with action sets endowed with a partition-based similarity structure.
In this model, revising agents have a higher probability of comparing their current strategy with other strategies that they deem similar.
Because of this implicit bias toward similar strategies, the resulting dynamics do not satisfy any of the standard monotonicity requirements for imitative game dynamics.
arXiv Detail & Related papers (2024-07-25T07:09:53Z) - Out-of-Distribution Generalized Dynamic Graph Neural Network with
Disentangled Intervention and Invariance Promotion [61.751257172868186]
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph and temporal dynamics.
Existing DyGNNs fail to handle distribution shifts, which naturally exist in dynamic graphs.
arXiv Detail & Related papers (2023-11-24T02:42:42Z) - SEGNO: Generalizing Equivariant Graph Neural Networks with Physical
Inductive Biases [66.61789780666727]
We show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-08-25T07:15:58Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural
Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - Curvature-informed multi-task learning for graph networks [56.155331323304]
State-of-the-art graph neural networks attempt to predict multiple properties simultaneously.
We investigate a potential explanation for this phenomenon: the curvature of each property's loss surface significantly varies, leading to inefficient learning.
arXiv Detail & Related papers (2022-08-02T18:18:41Z) - Learning the Evolutionary and Multi-scale Graph Structure for
Multivariate Time Series Forecasting [50.901984244738806]
We show how to model the evolutionary and multi-scale interactions of time series.
In particular, we first provide a hierarchical graph structure cooperated with the dilated convolution to capture the scale-specific correlations.
A unified neural network is provided to integrate the components above to get the final prediction.
arXiv Detail & Related papers (2022-06-28T08:11:12Z) - Coupling Power Laws Offers a Powerful Method for Problems such as
Biodiversity and COVID-19 Fatality Predictions [0.0]
Taylor's power law (TPL) was first discovered to characterize the spatial and/or temporal distribution of biological populations.
The power law with exponential cutoff (PLEC) is a variant of the power-law function whose exponential term ultimately tapers off the power-law growth.
We propose coupling (integration) of TPL and PLEC to improve the prediction quality of certain power-law phenomena (a sketch of both functional forms appears after this list).
arXiv Detail & Related papers (2021-05-23T19:07:16Z) - A Survey on Embedding Dynamic Graphs [0.0]
We overview dynamic graph embedding, discussing its fundamentals and the recent advances developed so far.
We introduce the formal definition of dynamic graph embedding, focusing on the problem setting.
We explore different dynamic behaviors that may be encompassed by embeddings, classifying by topological evolution, feature evolution, and processes on networks.
arXiv Detail & Related papers (2021-01-04T20:35:26Z) - Distinct Critical Behaviors from the Same State in Quantum Spin and
Population Dynamics Perspectives [0.0]
We show that phase transitions which are discontinuous in the spin system become continuous when viewed through the population perspective.
We introduce a more general class of models which encompasses both cases, and that can be solved exactly in a mean-field limit.
Numerical results are also presented for a number of one-dimensional chains with power-law interactions.
arXiv Detail & Related papers (2020-09-10T18:01:19Z) - Momentum Accelerates Evolutionary Dynamics [4.061135251278187]
We show that momentum accelerates the convergence of evolutionary dynamics including the replicator equation and Euclidean gradient descent on populations.
We also show that momentum can alter the convergence properties of these dynamics, for example by breaking the cycling associated to the rock-paper-scissors landscape.
arXiv Detail & Related papers (2020-07-05T21:09:30Z)
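As referenced in the TPL/PLEC entry above, below is a minimal sketch of the two functional forms being coupled, assuming the conventional parameterizations V = a * M**b for Taylor's power law and y = c * x**b * exp(d * x) with d < 0 for the power law with exponential cutoff; the parameter names are illustrative and are not taken from that paper.

```python
import numpy as np

def taylor_power_law(mean, a, b):
    """Taylor's power law: variance scales as a power of the mean, V = a * M**b."""
    return a * np.power(mean, b)

def plec(x, c, b, d):
    """Power law with exponential cutoff: y = c * x**b * exp(d * x).

    With d < 0 the exponential factor tapers off the power-law growth
    at large x, which is what makes PLEC useful for saturating trends.
    """
    return c * np.power(x, b) * np.exp(d * x)

# Illustrative values only: the pure power law keeps growing, while the
# PLEC curve with d < 0 bends over at large x.
x = np.array([1.0, 10.0, 100.0, 1000.0])
print(taylor_power_law(x, a=1.0, b=1.5))
print(plec(x, c=1.0, b=1.5, d=-0.005))
```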
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.