Hierarchical Representations for Evolving Acyclic Vector Autoregressions (HEAVe)
- URL: http://arxiv.org/abs/2505.12806v1
- Date: Mon, 19 May 2025 07:33:01 GMT
- Title: Hierarchical Representations for Evolving Acyclic Vector Autoregressions (HEAVe)
- Authors: Cameron Cornell, Lewis Mitchell, Matthew Roughan
- Abstract summary: Causal networks offer an intuitive framework to understand influence structures within time series systems. The presence of cycles can obscure dynamic relationships and hinder hierarchical analysis. We propose an evolutionary approach to fitting acyclic vector autoregressive processes and introduce a novel hierarchical representation.
- Score: 0.04096453902709291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal networks offer an intuitive framework to understand influence structures within time series systems. However, the presence of cycles can obscure dynamic relationships and hinder hierarchical analysis. These networks are typically identified through multivariate predictive modelling, but enforcing acyclic constraints significantly increases computational and analytical complexity. Despite recent advances, there remains a lack of simple, flexible approaches that are easily tailorable to specific problem instances. We propose an evolutionary approach to fitting acyclic vector autoregressive processes and introduce a novel hierarchical representation that directly models structural elements within a time series system. On simulated datasets, our model retains most of the predictive accuracy of unconstrained models and outperforms permutation-based alternatives. When applied to a dataset of 100 cryptocurrency return series, our method generates acyclic causal networks capturing key structural properties of the unconstrained model. The acyclic networks are approximately sub-graphs of the unconstrained networks, and most of the removed links originate from low-influence nodes. Given the high levels of feature preservation, we conclude that this cryptocurrency price system functions largely hierarchically. Our findings demonstrate a flexible, intuitive approach for identifying hierarchical causal networks in time series systems, with broad applications to fields like econometrics and social network analysis.
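The abstract sketches the core mechanism: encode acyclicity directly in a hierarchical representation (each series receives a rank, and cross-lag links may only point down the hierarchy) and search over that representation with an evolutionary algorithm, scoring candidates by the predictive error of the restricted VAR. The snippet below is a minimal illustrative sketch of that idea in Python, not the authors' HEAVe implementation: the integer-level encoding, the point-mutation (1+λ) loop, and the decision to always permit own-lag terms are assumptions made for this example.

```python
# Minimal sketch of fitting an acyclicity-constrained VAR(1) by evolving a
# hierarchy of node levels. Illustrative only, NOT the HEAVe implementation:
# the level encoding, mutation operator, and (1+lambda) loop are assumptions.
import numpy as np

rng = np.random.default_rng(0)


def fit_masked_var(X, mask):
    """Least-squares VAR(1) fit where coefficient (i, j) is forced to zero
    unless mask[i, j] is True (lagged link j -> i allowed). Returns (A, mse)."""
    past, future = X[:-1], X[1:]
    k = X.shape[1]
    A = np.zeros((k, k))
    for i in range(k):
        cols = np.where(mask[i])[0]
        if cols.size:
            coef, *_ = np.linalg.lstsq(past[:, cols], future[:, i], rcond=None)
            A[i, cols] = coef
    mse = np.mean((future - past @ A.T) ** 2)
    return A, mse


def levels_to_mask(levels):
    """Allow a cross-series link j -> i only when j sits strictly above i in the
    hierarchy (levels[j] < levels[i]), so the cross-link network is acyclic by
    construction. Own lags are always allowed (an assumption of this sketch)."""
    mask = levels[None, :] < levels[:, None]
    np.fill_diagonal(mask, True)
    return mask


def evolve_acyclic_var(X, n_levels=4, generations=200, offspring=8):
    """(1+lambda)-style evolutionary search over integer level assignments,
    scored by one-step-ahead mean squared error of the restricted VAR."""
    k = X.shape[1]
    best = rng.integers(0, n_levels, size=k)
    _, best_err = fit_masked_var(X, levels_to_mask(best))
    for _ in range(generations):
        for _ in range(offspring):
            child = best.copy()
            child[rng.integers(k)] = rng.integers(0, n_levels)  # point mutation
            _, err = fit_masked_var(X, levels_to_mask(child))
            if err < best_err:
                best, best_err = child, err
    return best, best_err


if __name__ == "__main__":
    # Toy data: a lagged causal chain x0 -> x1 -> x2 plus noise.
    T, k = 500, 3
    X = np.zeros((T, k))
    X[0] = rng.normal(size=k)
    for t in range(1, T):
        e = 0.1 * rng.normal(size=k)
        X[t, 0] = 0.5 * X[t - 1, 0] + e[0]
        X[t, 1] = 0.8 * X[t - 1, 0] + e[1]
        X[t, 2] = 0.8 * X[t - 1, 1] + e[2]
    levels, err = evolve_acyclic_var(X)
    print("recovered levels:", levels, "one-step MSE:", round(err, 4))
```

On the toy chain, a run of this sketch should place x0 at a lower level than x1 and x2, illustrating the point of the representation: every candidate network is acyclic by construction, so no post-hoc cycle removal or acyclicity penalty is needed during the search.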
Related papers
- Multi-Agent Q-Learning Dynamics in Random Networks: Convergence due to Exploration and Sparsity [5.925608009772727]
We study Q-learning dynamics in network polymatrix games where the network structure is drawn from random graph models. In each setting, we establish sufficient conditions under which the agents' joint strategies converge to a unique equilibrium. We validate our theoretical findings through numerical simulations and demonstrate that convergence can be reliably achieved in many-agent systems.
arXiv Detail & Related papers (2025-03-13T09:16:51Z) - Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation [61.89682310797067]
We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
Our experiments demonstrate the effectiveness of DHMP, achieving 22.7% improvement on average compared to recent fixed-hierarchy message passing networks.
arXiv Detail & Related papers (2024-10-03T15:18:00Z) - Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
arXiv Detail & Related papers (2024-05-12T22:18:25Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Generating fine-grained surrogate temporal networks [12.7211231166069]
We propose a novel and simple method for generating surrogate temporal networks.
Our method decomposes the input network into star-like structures evolving in time.
Then those structures are used as building blocks to generate a surrogate temporal network.
arXiv Detail & Related papers (2022-05-18T09:38:22Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent
Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - A purely data-driven framework for prediction, optimization, and control
of networked processes: application to networked SIS epidemic model [0.8287206589886881]
We develop a data-driven framework based on operator-theoretic techniques to identify and control nonlinear dynamics over large-scale networks.
The proposed approach requires no prior knowledge of the network structure and identifies the underlying dynamics solely using a collection of two-step snapshots of the states.
arXiv Detail & Related papers (2021-08-01T03:57:10Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time and the likelihood ratio loss with interarrival time probability assumptions can greatly improve the model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Consistency of Spectral Clustering on Hierarchical Stochastic Block
Models [5.983753938303726]
We study the hierarchy of communities in real-world networks under a generic block model.
We prove the strong consistency of this method under a wide range of model parameters.
Unlike most existing work, our theory covers multiscale networks where the connection probabilities may differ by orders of magnitude.
arXiv Detail & Related papers (2020-04-30T01:08:59Z) - Shift Aggregate Extract Networks [3.3263205689999453]
We introduce an architecture based on deep hierarchical decompositions to learn effective representations of large graphs.
Our framework extends classic R-decompositions used in kernel methods, enabling nested part-of-part relations.
We show empirically that our approach is able to outperform current state-of-the-art graph classification methods on large social network datasets.
arXiv Detail & Related papers (2017-03-16T09:52:48Z)