Identifying Unique Causal Network from Nonstationary Time Series
- URL: http://arxiv.org/abs/2211.10085v3
- Date: Wed, 30 Aug 2023 00:46:21 GMT
- Title: Identifying Unique Causal Network from Nonstationary Time Series
- Authors: Mingyu Kang and Duxin Chen and Ning Meng and Gang Yan and Wenwu Yu
- Abstract summary: A novel causation model named Unique Causal Network (UCN) is proposed in this paper.
UCN considers the influence of time delay, and the uniqueness of the obtained network structure is proved.
A higher-order causal entropy (HCE) algorithm is designed to identify the structure of UCN in a distributed way.
- Score: 15.204794081719097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identifying causality is a challenging task in many data-intensive scenarios.
Many algorithms have been proposed for this critical task. However, most of
them focus on learning algorithms for the directed acyclic graph (DAG) of a
Bayesian network (BN). These BN-based models have only limited causal
explainability because of the Markov equivalence class issue. Moreover, they
depend on the assumption of stationarity, whereas many time series sampled
from complex systems are nonstationary. Nonstationary time series introduce a
dataset shift problem, which leads to unsatisfactory performance of these
algorithms. To fill these gaps, a novel causation model named Unique Causal
Network (UCN) is proposed in this paper. Unlike previous BN-based models, UCN
considers the influence of time delay, and the uniqueness of the obtained
network structure is proved, which addresses the Markov equivalence class
issue. Furthermore, based on the decomposability property of UCN, a
higher-order causal entropy (HCE) algorithm is designed to identify the
structure of UCN in a distributed way. The HCE algorithm measures the strength
of causality with a nearest-neighbors entropy estimator, which works well on
nonstationary time series. Finally, extensive experiments validate that the
HCE algorithm achieves state-of-the-art accuracy on nonstationary time series,
compared to the other baseline algorithms.
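The abstract describes the estimator only at a high level. As a rough illustration of the "nearest-neighbors entropy estimator" idea (not the authors' HCE implementation), the sketch below estimates the conditional mutual information between a candidate driver's past and a target's future, given the target's own past, with a Frenzel-Pompe/KSG-style k-nearest-neighbor estimator. The lag construction, the choice k=4, and all variable names are illustrative assumptions.

```python
# Minimal sketch of a k-nearest-neighbor conditional mutual information (CMI)
# estimator in the Frenzel-Pompe/KSG family, i.e. the general kind of
# "nearest-neighbors entropy estimator" the abstract refers to.  Everything
# here (lags, k, variable names) is an illustrative assumption, not HCE itself.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma


def knn_cmi(x, y, z, k=4):
    """Estimate I(X; Y | Z) from samples of shape (n, d_*) using k-NN counts."""
    xyz = np.hstack([x, y, z])
    xz = np.hstack([x, z])
    yz = np.hstack([y, z])

    # Distance to the k-th neighbor in the joint space (Chebyshev metric).
    eps = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]

    # Count neighbors strictly inside eps in each marginal space (self excluded).
    n_xz = cKDTree(xz).query_ball_point(xz, eps - 1e-12, p=np.inf, return_length=True) - 1
    n_yz = cKDTree(yz).query_ball_point(yz, eps - 1e-12, p=np.inf, return_length=True) - 1
    n_z = cKDTree(z).query_ball_point(z, eps - 1e-12, p=np.inf, return_length=True) - 1

    return digamma(k) + np.mean(digamma(n_z + 1) - digamma(n_xz + 1) - digamma(n_yz + 1))


# Toy usage: does x(t-1) carry information about y(t) beyond y(t-1)?
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                      # y is driven by lagged x -> causal link
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

x_past = x[:-1, None]
y_past = y[:-1, None]
y_future = y[1:, None]
print("estimated CMI:", knn_cmi(x_past, y_future, y_past))
```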
Related papers
- Score-matching-based Structure Learning for Temporal Data on Networks [17.166362605356074]
Causal discovery is a crucial initial step in establishing causality from empirical data and background knowledge.
Current score-matching-based algorithms are primarily designed to analyze independent and identically distributed (i.i.d.) data.
We have developed a new parent-finding subroutine for leaf nodes in DAGs, significantly accelerating the most time-consuming part of the process: the pruning step.
arXiv Detail & Related papers (2024-12-10T12:36:35Z)
- Causal Discovery in Semi-Stationary Time Series [32.424281626708336]
We propose a constraint-based, non-parametric algorithm for discovering causal relations in observational time series.
We show that this algorithm is sound in identifying causal relations on discrete time series.
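For intuition about the constraint-based recipe on discrete series (not the semi-stationary algorithm of this paper), the sketch below keeps a lagged edge X(t-1) -> Y(t) only if a chi-square conditional-independence test rejects X(t-1) being independent of Y(t) given Y(t-1); the single conditioning set, the test, and the toy data are simplifying assumptions.

```python
# Generic constraint-based skeleton for lagged causal discovery on discrete
# time series: keep an edge X(t-1) -> Y(t) only if X(t-1) and Y(t) are found
# dependent given Y(t-1).  Plain chi-square CI test, for illustration only.
import numpy as np
from scipy.stats import chi2


def ci_test_discrete(x, y, z, alpha=0.05):
    """Return True if independence of x and y given z is rejected."""
    stat, dof = 0.0, 0
    for z_val in np.unique(z):
        sel = z == z_val
        table = np.zeros((x.max() + 1, y.max() + 1))
        np.add.at(table, (x[sel], y[sel]), 1)
        expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / max(table.sum(), 1)
        mask = expected > 0
        stat += ((table - expected) ** 2 / np.where(mask, expected, 1))[mask].sum()
        dof += (np.count_nonzero(table.sum(1)) - 1) * (np.count_nonzero(table.sum(0)) - 1)
    return dof > 0 and chi2.sf(stat, dof) < alpha


# Toy data: binary series where y is a noisy, one-step-delayed copy of x.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1) ^ (rng.random(5000) < 0.1)
print("x(t-1) -> y(t)?", ci_test_discrete(x[:-1], y[1:], y[:-1]))
print("y(t-1) -> x(t)?", ci_test_discrete(y[:-1], x[1:], x[:-1]))
```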
arXiv Detail & Related papers (2024-07-10T00:55:38Z)
- Inference of Sequential Patterns for Neural Message Passing in Temporal Graphs [0.6562256987706128]
HYPA-DBGNN is a novel two-step approach that combines the inference of anomalous sequential patterns in time series data on graphs with neural message passing.
Our method leverages hypergeometric graph ensembles to identify anomalous edges within both first- and higher-order De Bruijn graphs.
Our work is the first to introduce statistically informed GNNs that leverage temporal and causal sequence anomalies.
arXiv Detail & Related papers (2024-06-24T11:41:12Z)
- Concrete Dense Network for Long-Sequence Time Series Clustering [4.307648859471193]
Time series clustering is fundamental in data analysis for discovering temporal patterns.
Deep temporal clustering methods have sought to integrate the canonical k-means objective into the end-to-end training of neural networks.
LoSTer is a novel dense autoencoder architecture for the long-sequence time series clustering problem.
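As a minimal sketch of the generic "compress, then cluster" recipe that deep temporal clustering builds on (not the LoSTer architecture), the code below trains a small dense autoencoder on long sequences and runs k-means on the latent codes; the layer sizes, training schedule, and post-hoc rather than end-to-end k-means step are assumptions.

```python
# Minimal "compress, then cluster" sketch: a dense autoencoder embeds each long
# sequence and k-means is run on the latent codes.  Illustrative only.
import numpy as np
import torch
from torch import nn
from sklearn.cluster import KMeans

seq_len, latent_dim, n_series = 512, 16, 300

# Toy dataset: two groups of noisy sinusoids with different frequencies.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, seq_len)
freqs = np.repeat([1.0, 3.0], n_series // 2)
data = np.sin(np.outer(freqs, t)) + 0.2 * rng.standard_normal((n_series, seq_len))
x = torch.tensor(data, dtype=torch.float32)

encoder = nn.Sequential(nn.Linear(seq_len, 128), nn.ReLU(), nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, seq_len))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(200):                      # reconstruction-only training
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    codes = encoder(x).numpy()                # latent representation per series
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codes)
print("cluster sizes:", np.bincount(labels))
```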
arXiv Detail & Related papers (2024-05-08T12:31:35Z)
- Robust Causal Bandits for Linear Models [20.028245872662843]
Sequential design of experiments for optimizing a reward function in causal systems can be effectively modeled by the sequential design of interventions in causal bandits (CBs).
This paper addresses the robustness of CBs to such model fluctuations.
Cumulative regret is adopted as the design criteria, based on which the objective is to design a sequence of interventions that incur the smallest cumulative regret with respect to an oracle aware of the entire causal model and its fluctuations.
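To make the cumulative-regret criterion concrete (this is not the paper's robust CB algorithm), the toy sketch below runs an epsilon-greedy agent over three candidate interventions and accumulates its regret against an oracle that always plays the best one; the reward model and exploration rate are placeholders.

```python
# Bare-bones illustration of cumulative regret in a bandit over interventions:
# the agent's expected reward is compared each round against the oracle's.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.8])       # expected reward of each intervention
T, eps = 5000, 0.1
counts, sums = np.zeros(3), np.zeros(3)
regret = np.zeros(T)

for t in range(T):
    if rng.random() < eps or counts.min() == 0:
        arm = rng.integers(3)                # explore
    else:
        arm = int(np.argmax(sums / counts))  # exploit empirical means
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    sums[arm] += reward
    # Regret is measured against the oracle's expected reward, not the sample.
    regret[t] = true_means.max() - true_means[arm]

print("cumulative regret after T rounds:", regret.sum())
```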
arXiv Detail & Related papers (2023-10-30T17:58:01Z)
- CausalTime: Realistically Generated Time-series for Benchmarking of Causal Discovery [14.092834149864514]
This study introduces the CausalTime pipeline to generate time-series that highly resemble the real data.
The pipeline starts from real observations in a specific scenario and produces a matching benchmark dataset.
In the experiments, we validate the fidelity of the generated data through qualitative and quantitative experiments, followed by a benchmarking of existing TSCD algorithms.
arXiv Detail & Related papers (2023-10-03T02:29:19Z)
- Towards Understanding the Generalizability of Delayed Stochastic Gradient Descent [63.43247232708004]
Stochastic gradient descent performed in an asynchronous manner plays a crucial role in training large-scale machine learning models.
Existing generalization error bounds are rather pessimistic and cannot reveal the correlation between asynchronous delays and generalization.
Our theoretical results indicate that asynchronous delays reduce the generalization error of the delayed SGD algorithm.
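A toy simulation of the delayed-SGD setting (not the paper's analysis) is sketched below: the gradient applied at step t is computed on parameters from step t - tau, mimicking asynchronous workers; the least-squares objective, fixed delay, and step size are illustrative assumptions.

```python
# Delayed SGD sketch: each update uses a gradient evaluated at stale parameters.
import numpy as np

rng = np.random.default_rng(0)
n, d, tau, lr, steps = 500, 20, 5, 0.005, 5000
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
history = [w.copy()]                          # past iterates, for stale gradients

for t in range(steps):
    stale_w = history[max(0, len(history) - 1 - tau)]    # parameters tau steps old
    i = rng.integers(n)                                  # one random sample
    grad = (A[i] @ stale_w - b[i]) * A[i]                # stochastic gradient at stale point
    w = w - lr * grad
    history.append(w.copy())

print("final training loss:", 0.5 * np.mean((A @ w - b) ** 2))
```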
arXiv Detail & Related papers (2023-08-18T10:00:27Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Networked Time Series Imputation via Position-aware Graph Enhanced Variational Autoencoders [31.953958053709805]
We design a new model named PoGeVon which leverages variational autoencoder (VAE) to predict missing values over both node time series features and graph structures.
Experiment results demonstrate the effectiveness of our model over baselines.
arXiv Detail & Related papers (2023-05-29T21:11:34Z)
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution with the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- On Principal Curve-Based Classifiers and Similarity-Based Selective Sampling in Time-Series [0.0]
This paper proposes a principal curve-based classifier and a deterministic selective sampling algorithm with the same computational steps, both using the principal curve as their building block in the model definition.
Considering labeling costs and the constraints of online monitoring devices, an algorithm is needed that finds the data points whose labels, once known, most improve the performance of the classifier.
arXiv Detail & Related papers (2022-04-10T07:28:18Z)
- Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation,
and combines it with a dueling double deep Q-network to create an end-to-end algorithm that solves CNP for the first time.
arXiv Detail & Related papers (2021-12-03T14:23:05Z)
- Space-Time Graph Neural Networks [104.55175325870195]
We introduce space-time graph neural network (ST-GNN) to jointly process the underlying space-time topology of time-varying network data.
Our analysis shows that small variations in the network topology and time evolution of a system do not significantly affect the performance of ST-GNNs.
arXiv Detail & Related papers (2021-10-06T16:08:44Z)
- Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under the same delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z)
- Consistency of mechanistic causal discovery in continuous-time using Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z)
- Normalized multivariate time series causality analysis and causal graph reconstruction [0.0]
Causality analysis is an important problem lying at the heart of science, and is of particular importance in data science and machine learning.
This study introduces this line of work to the community, with a long-due generalization of information flow-based bivariate time series causal inference.
The resulting formula is transparent, and can be implemented as a computationally very efficient algorithm for application.
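For reference, the sketch below implements the bivariate information-flow estimator that this line of work generalizes (the Liang maximum-likelihood formula under a linear model with a unit time step). The exact expression and the toy example are assumptions to be checked against the paper; the normalized multivariate version introduced there is more general.

```python
# Sketch of the bivariate information-flow estimate from series 2 to series 1
# (Liang-style maximum-likelihood formula, linear model, unit time step).
# Verify the exact expression against the original papers before relying on it.
import numpy as np


def info_flow_2_to_1(x1, x2):
    dx1 = x1[1:] - x1[:-1]                   # Euler estimate of dX1/dt
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(np.vstack([x1, x2]))
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1, dx1)[0, 1]
    c2d1 = np.cov(x2, dx1)[0, 1]
    return (c11 * c12 * c2d1 - c12 ** 2 * c1d1) / (c11 ** 2 * c22 - c11 * c12 ** 2)


# Toy example: x2 drives x1 but not the other way around.
rng = np.random.default_rng(0)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x2[t] = 0.7 * x2[t - 1] + rng.standard_normal()
    x1[t] = 0.5 * x1[t - 1] + 0.4 * x2[t - 1] + rng.standard_normal()

print("T(2 -> 1):", info_flow_2_to_1(x1, x2))   # expected to be clearly nonzero
print("T(1 -> 2):", info_flow_2_to_1(x2, x1))   # expected to be near zero
```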
arXiv Detail & Related papers (2021-04-23T00:46:35Z)
- Optimal network online change point localisation [73.93301212629231]
We study the problem of online network change point detection.
In this setting, a collection of independent Bernoulli networks is observed sequentially, and the underlying distribution changes at an unknown change point.
The goal is to detect the change point as quickly as possible, if it exists, subject to a constraint on the number or probability of false alarms.
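As a toy illustration of online change point monitoring on a stream of Bernoulli networks (not the optimal procedure analysed in the paper), the sketch below tracks the average edge density with a one-sided CUSUM statistic and flags a change once it crosses a threshold; the threshold, drift term, and size of the mean shift are illustrative choices.

```python
# Toy online monitoring: watch the edge density of a stream of Bernoulli
# networks with a one-sided CUSUM statistic and flag the first exceedance.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, change_at, p0, p1 = 30, 200, 0.10, 0.16
m = n_nodes * (n_nodes - 1) // 2              # number of possible undirected edges
threshold = 8.0                               # tuned so false alarms stay rare

cusum, detected_at = 0.0, None
for t in range(400):
    p = p0 if t < change_at else p1           # edge probability shifts at change_at
    density = (rng.random(m) < p).mean()      # observed edge density of the network
    # One-sided CUSUM on the standardized deviation from the pre-change density.
    z = (density - p0) / np.sqrt(p0 * (1 - p0) / m)
    cusum = max(0.0, cusum + z - 0.5)         # 0.5 acts as a drift/allowance term
    if cusum > threshold and detected_at is None:
        detected_at = t

print("true change at t =", change_at, "; detected at t =", detected_at)
```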
arXiv Detail & Related papers (2021-01-14T07:24:39Z)
- Spatio-Temporal Graph Scattering Transform [54.52797775999124]
Graph neural networks may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data.
We put forth a novel mathematically designed framework to analyze spatio-temporal data.
arXiv Detail & Related papers (2020-12-06T19:49:55Z)
- A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design, and has been widely applied to economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings because they require either an intolerable running time or too many computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)