Sheaf-Theoretic Causal Emergence for Resilience Analysis in Distributed Systems
- URL: http://arxiv.org/abs/2503.14104v1
- Date: Tue, 18 Mar 2025 10:19:33 GMT
- Title: Sheaf-Theoretic Causal Emergence for Resilience Analysis in Distributed Systems
- Authors: Anatoly A. Krasnovsky
- Abstract summary: Distributed systems often exhibit emergent behaviors that impact their resilience. This paper presents a theoretical framework combining graph models, flow-on-graph simulation, and causal emergence analysis to evaluate system resilience.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed systems often exhibit emergent behaviors that impact their resilience (Franz-Kaiser et al., 2020; Adilson E. Motter, 2002; Jianxi Gao, 2016). This paper presents a theoretical framework combining attributed graph models, flow-on-graph simulation, and sheaf-theoretic causal emergence analysis to evaluate system resilience. We model a distributed system as a graph with attributes (capturing component state and connections) and use sheaf theory to formalize how local interactions compose into global states. A flow simulation on this graph propagates functional loads and failures. To assess resilience, we apply the concept of causal emergence, quantifying whether macro-level dynamics (coarse-grained groupings) exhibit stronger causal efficacy (via effective information) than micro-level dynamics. The novelty lies in uniting sheaf-based formalization with causal metrics to identify emergent resilient structures. We discuss potential applications (illustrated by microservices, neural networks, and power grids) and outline future steps toward implementing this framework (Lake et al., 2015).
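A minimal sketch of the causal-emergence check described above, in Python/NumPy. In the proposed framework the micro-level transition matrix would be estimated from the flow-on-graph simulation over the attributed graph; here a hand-built toy matrix stands in, and the effective_information and coarse_grain helpers are assumptions of this sketch rather than the authors' implementation (the sheaf-theoretic gluing of local states is omitted).

```python
import numpy as np

def effective_information(tpm):
    """Effective information (EI) of a state-transition matrix: the mutual
    information between current and next state when the current state is
    intervened on with a maximum-entropy (uniform) distribution. Computed
    as the mean KL divergence of each row from the average effect
    distribution."""
    tpm = np.asarray(tpm, dtype=float)
    effect = tpm.mean(axis=0)  # effect distribution under uniform intervention
    kls = []
    for row in tpm:
        nz = row > 0
        kls.append(np.sum(row[nz] * np.log2(row[nz] / effect[nz])))
    return float(np.mean(kls))

def coarse_grain(tpm, partition):
    """Simple coarse-graining of a micro transition matrix: average the rows
    within each macro group and sum the columns within each group."""
    groups = sorted(set(partition))
    members = {g: [i for i, p in enumerate(partition) if p == g] for g in groups}
    macro = np.zeros((len(groups), len(groups)))
    for a, g in enumerate(groups):
        avg_row = np.asarray(tpm, dtype=float)[members[g]].mean(axis=0)
        for b, h in enumerate(groups):
            macro[a, b] = avg_row[members[h]].sum()
    return macro

# Toy micro-level dynamics: states 0-2 form one noisy, interchangeable block,
# state 3 is absorbing. (In the paper's framework this matrix would be
# estimated from the flow/failure simulation on the attributed graph.)
micro_tpm = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
partition = [0, 0, 0, 1]  # macro grouping: {0, 1, 2} -> A, {3} -> B

ei_micro = effective_information(micro_tpm)
ei_macro = effective_information(coarse_grain(micro_tpm, partition))
print(f"EI(micro) = {ei_micro:.3f} bits, EI(macro) = {ei_macro:.3f} bits")
print("causal emergence" if ei_macro > ei_micro else "no causal emergence")
```

With this toy matrix the macro dynamics are deterministic while the micro dynamics are noisy, so EI(macro) = 1.0 bit exceeds EI(micro) ≈ 0.81 bit; such a gap at the macro scale is what the framework would read as an emergent, potentially resilience-relevant structure.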
Related papers
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Predicting Steady-State Behavior in Complex Networks with Graph Neural Networks [0.0]
In complex systems, information propagation can be categorized as diffusive (delocalized), weakly localized, or strongly localized.
This study investigates the application of graph neural network models to learn the behavior of a linear dynamical system on networks.
arXiv Detail & Related papers (2025-02-02T17:29:10Z)
- Unified Causality Analysis Based on the Degrees of Freedom [1.2289361708127877]
This paper presents a unified method capable of identifying fundamental causal relationships between pairs of systems.
By analyzing the degrees of freedom in the system, our approach provides a more comprehensive understanding of both causal influence and hidden confounders.
This unified framework is validated through theoretical models and simulations, demonstrating its robustness and potential for broader application.
arXiv Detail & Related papers (2024-10-25T10:57:35Z)
- Systems with Switching Causal Relations: A Meta-Causal Perspective [18.752058058199847]
The flexibility of agents' actions or tipping points in the environmental process can change the qualitative dynamics of the system.
New causal relationships may emerge, while existing ones change or disappear, resulting in an altered causal graph.
We propose the concept of meta-causal states, which groups classical causal models into clusters based on equivalent qualitative behavior.
arXiv Detail & Related papers (2024-10-16T21:32:31Z)
- Predicting Cascading Failures with a Hyperparametric Diffusion Model [66.89499978864741]
We study cascading failures in power grids through the lens of diffusion models.
Our model integrates viral diffusion principles with physics-based concepts.
We show that this diffusion model can be learned from traces of cascading failures.
arXiv Detail & Related papers (2024-06-12T02:34:24Z)
- Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks [55.227976642410766]
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning. Motivated by this, we introduce (port-)Hamiltonian Deep Graph Networks. We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors.
arXiv Detail & Related papers (2024-05-27T13:36:50Z)
- Impact of conditional modelling for a universal autoregressive quantum state [0.0]
We introduce filters as analogues to convolutional layers in neural networks to incorporate translationally symmetrized correlations in arbitrary quantum states.
We analyze the impact of the resulting inductive biases on variational flexibility, symmetries, and conserved quantities.
arXiv Detail & Related papers (2023-06-09T14:17:32Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Localisation in quasiperiodic chains: a theory based on convergence of local propagators [68.8204255655161]
We present a theory of localisation in quasiperiodic chains with nearest-neighbour hoppings, based on the convergence of local propagators.
By analysing the convergence of these continued fractions, one can determine localisation or its absence, yielding in turn the critical points and mobility edges.
Results are exemplified by analysing the theory for three quasiperiodic models covering a range of behaviour.
arXiv Detail & Related papers (2021-02-18T16:19:52Z)
- Bayesian Inductive Learner for Graph Resiliency under uncertainty [1.9254132307399257]
We propose a Bayesian graph neural network-based framework for identifying critical nodes in a large graph.
The fidelity of the framework and the computational gains it offers are illustrated.
arXiv Detail & Related papers (2020-12-26T07:22:29Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)