LOCAL: Learning with Orientation Matrix to Infer Causal Structure from Time Series Data
- URL: http://arxiv.org/abs/2410.19464v2
- Date: Mon, 28 Oct 2024 01:44:41 GMT
- Title: LOCAL: Learning with Orientation Matrix to Infer Causal Structure from Time Series Data
- Authors: Yue Cheng, Jiajun Zhang, Weiwei Xing, Xiaoyu Guo, Xiaohui Gao,
- Abstract summary: LOCAL is a highly efficient, easy-to-implement, and constraint-free method for recovering dynamic causal structures.
ACML generates causal masks using learnable priority vectors and the Gumbel-Sigmoid function.
DGPL transforms causal learning into decomposed matrix products, capturing the dynamic causal structure of high-dimensional data.
- Score: 13.390666123493409
- Abstract: Discovering the underlying Directed Acyclic Graph (DAG) from time series observational data is highly challenging due to the dynamic nature and complex nonlinear interactions between variables. Existing methods often struggle with inefficiency and the handling of high-dimensional data. To address this research gap, we propose LOCAL, a highly efficient, easy-to-implement, and constraint-free method for recovering dynamic causal structures. LOCAL is the first attempt to formulate a quasi-maximum likelihood-based score function for learning the dynamic DAG equivalent to the ground truth. On this basis, we propose two adaptive modules that enhance the algebraic characterization of acyclicity: Asymptotic Causal Mask Learning (ACML) and Dynamic Graph Parameter Learning (DGPL). ACML generates causal masks using learnable priority vectors and the Gumbel-Sigmoid function, ensuring the creation of DAGs while optimizing computational efficiency. DGPL transforms causal learning into decomposed matrix products, capturing the dynamic causal structure of high-dimensional data and enhancing interpretability. Extensive experiments on synthetic and real-world datasets demonstrate that LOCAL significantly outperforms existing methods and highlight its potential as a robust and efficient approach to dynamic causal discovery. Our code will be available soon.
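To make the two modules concrete, here is a minimal, hypothetical PyTorch sketch (names, shapes, and the rank r are my assumptions, not the authors' released code): a learnable priority vector combined with Gumbel-Sigmoid sampling plays the role of ACML's causal mask, and a low-rank product U V^T stands in for DGPL's decomposed parameterization.

```python
import torch

def gumbel_sigmoid(logits, tau=0.3):
    # Relaxed Bernoulli sample: sigmoid of logits plus logistic noise.
    u = torch.rand_like(logits)
    noise = torch.log(u + 1e-9) - torch.log(1.0 - u + 1e-9)
    return torch.sigmoid((logits + noise) / tau)

def acml_style_mask(priority, tau=0.3):
    # diff[i, j] = priority[j] - priority[i]: edge i -> j is favored when
    # it goes "uphill" in priority; as tau -> 0 the mask approaches the
    # hard rule 1[p_j > p_i], whose graphs are acyclic by construction.
    diff = priority.unsqueeze(0) - priority.unsqueeze(1)
    mask = gumbel_sigmoid(diff, tau)
    return mask * (1.0 - torch.eye(priority.numel()))  # drop self-loops

d, r = 20, 4                                    # nodes, assumed rank r << d
priority = torch.randn(d, requires_grad=True)   # ACML-style priority vector
U = torch.randn(d, r, requires_grad=True)       # DGPL-style factors: the full
V = torch.randn(d, r, requires_grad=True)       # d x d weight matrix is
W = U @ V.T                                     # replaced by a rank-r product
A = acml_style_mask(priority) * W               # masked weighted adjacency
```

The factorization is what keeps the parameter count at 2dr rather than d^2, which is where the high-dimensional efficiency claim would come from.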
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes [57.62036621319563]
We introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime.
We demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators.
arXiv Detail & Related papers (2023-12-19T12:34:46Z)
- Recovering Linear Causal Models with Latent Variables via Cholesky Factorization of Covariance Matrix [21.698480201955213]
We propose a DAG structure recovering algorithm, which is based on the Cholesky factorization of the covariance matrix of the observed data.
On synthetic and real-world datasets, the algorithm is significantly faster than previous methods and achieves state-of-the-art performance.
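The core identity behind a Cholesky-based recovery fits in a few lines. This sketch assumes a linear Gaussian SEM x = Bx + e with variables already in topological order and no latent variables (the paper itself tackles the harder latent-variable setting):

```python
import numpy as np

def recover_dag_weights(Sigma):
    # Sigma = (I - B)^{-1} D (I - B)^{-T} for x = B x + e, Cov(e) = D.
    C = np.linalg.cholesky(Sigma)            # lower-triangular factor of Sigma
    Cinv = np.linalg.inv(C)                  # equals D^{-1/2} (I - B)
    IminusB = Cinv / np.diag(Cinv)[:, None]  # rescale rows to unit diagonal
    return np.eye(len(Sigma)) - IminusB      # strictly lower-triangular B

# Toy check: x2 = 0.8 * x1 + noise gives Sigma = [[1, .8], [.8, 1.64]];
# the recovered B has B[1, 0] = 0.8.
B = recover_dag_weights(np.array([[1.0, 0.8], [0.8, 1.64]]))
```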
arXiv Detail & Related papers (2023-11-01T17:27:49Z)
- PREM: A Simple Yet Effective Approach for Node-Level Graph Anomaly Detection [65.24854366973794]
Node-level graph anomaly detection (GAD) plays a critical role in identifying anomalous nodes from graph-structured data in domains such as medicine, social networks, and e-commerce.
We introduce a simple method termed PREprocessing and Matching (PREM for short) to improve the efficiency of GAD.
Our approach streamlines GAD, reducing time and memory consumption while maintaining powerful anomaly detection capabilities.
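As a loose illustration only (the scoring rule and names below are my assumptions, not PREM's actual modules), a preprocessing-then-matching scheme can precompute neighbor aggregates once and then score nodes by how poorly they match their neighborhood:

```python
import torch
import torch.nn.functional as F

def anomaly_scores(H, A):
    # H: (n, d) node embeddings; A: (n, n) adjacency without self-loops.
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
    neigh = (A @ H) / deg                       # mean neighbor embedding,
                                                # computable once up front
    return 1.0 - F.cosine_similarity(H, neigh)  # high score = mismatch
```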
arXiv Detail & Related papers (2023-10-18T02:59:57Z)
- Fast Latent Factor Analysis via a Fuzzy PID-Incorporated Stochastic Gradient Descent Algorithm [1.984879854062214]
A stochastic gradient descent (SGD)-based latent factor analysis model is remarkably effective at extracting valuable information from a high-dimensional and incomplete (HDI) matrix.
A standard SGD algorithm updates a latent factor using only the gradient of the current instance error, without considering past update information.
This paper proposes a Fuzzy PID-incorporated SGD algorithm with two ideas: 1) rebuilding the instance error efficiently from past update information, following the PID principle, and 2) implementing hyper-parameter learning and gain adaptation following fuzzy rules.
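A minimal sketch of idea 1), with gains and form assumed rather than taken from the paper: the raw instance error is replaced by a PID blend of current (P), accumulated (I), and differenced (D) errors before the SGD update.

```python
class PIDError:
    """Hypothetical PID rebuild of the per-instance error for SGD."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.acc, self.prev = 0.0, 0.0

    def __call__(self, e):
        self.acc += e                       # integral of past errors
        out = (self.kp * e                  # proportional (current) term
               + self.ki * self.acc         # integral (history) term
               + self.kd * (e - self.prev)) # derivative (trend) term
        self.prev = e
        return out
```

The fuzzy part of the algorithm (idea 2) would then adapt kp, ki, kd online instead of keeping them fixed as here.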
arXiv Detail & Related papers (2023-03-07T14:51:09Z)
- Directed Acyclic Graph Structure Learning from Dynamic Graphs [44.21230819336437]
Estimating the structure of directed acyclic graphs (DAGs) of features (variables) plays a vital role in revealing the latent data generation process.
We study the problem of learning the node-feature generation mechanism on such ubiquitous dynamic graph data.
arXiv Detail & Related papers (2022-11-30T14:22:01Z)
- Efficient Neural Causal Discovery without Acyclicity Constraints [30.08586535981525]
We present ENCO, an efficient structure learning method for directed, acyclic causal graphs.
In experiments, we show that ENCO can efficiently recover graphs with hundreds of nodes, an order of magnitude larger than what was previously possible.
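In the spirit of ENCO's constraint-free parameterization (the exact formulation in the paper differs; this is an assumed simplification), each directed edge gets an existence probability and each node pair an orientation probability, with the two directions tied antisymmetrically:

```python
import torch

d = 10
gamma = torch.zeros(d, d, requires_grad=True)  # edge-existence logits
theta = torch.zeros(d, d, requires_grad=True)  # raw orientation logits

def edge_probabilities(gamma, theta):
    # Antisymmetric orientation makes the two directions complementary:
    # sigmoid(x) + sigmoid(-x) = 1, so i->j and j->i share one parameter.
    orient = theta - theta.T
    p = torch.sigmoid(gamma) * torch.sigmoid(orient)
    return p * (1.0 - torch.eye(gamma.shape[0]))  # no self-loops
```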
arXiv Detail & Related papers (2021-07-22T07:01:41Z)
- DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
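One way to parameterize weights directly in DAG space, sketched here in my own notation rather than the paper's exact construction: a node potential p and free weights S define W[i, j] = S[i, j] * relu(p[j] - p[i]), so every edge strictly increases the potential and no directed cycle can form.

```python
import numpy as np

def no_curl_style_weights(p, S):
    # grad[i, j] = p[j] - p[i]; only "uphill" edges survive the ReLU,
    # so every directed path strictly increases p: acyclic by construction.
    grad = p[None, :] - p[:, None]
    return S * np.maximum(grad, 0.0)

W = no_curl_style_weights(np.random.randn(6), np.random.randn(6, 6))
```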
arXiv Detail & Related papers (2021-06-14T07:11:36Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
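A hypothetical single-output sketch of a CP-constrained layer (the actual Rank-R FNN adds nonlinearities and multiple outputs): the rank-R weight tensor is never materialized; contracting the input with each factor matrix in turn realizes the inner product with sum_r a_r ⊗ b_r ⊗ c_r.

```python
import torch
import torch.nn as nn

class RankRLayer(nn.Module):
    """Scores a 3-way input (I x J x K per sample) with rank-R CP weights."""
    def __init__(self, I, J, K, R):
        super().__init__()
        self.a = nn.Parameter(torch.randn(R, I))
        self.b = nn.Parameter(torch.randn(R, J))
        self.c = nn.Parameter(torch.randn(R, K))

    def forward(self, X):  # X: (batch, I, J, K)
        t = torch.einsum('bijk,ri->brjk', X, self.a)  # contract I mode
        t = torch.einsum('brjk,rj->brk', t, self.b)   # contract J mode
        return torch.einsum('brk,rk->b', t, self.c)   # contract K, sum over r
```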
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
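One plausible form of such an objective, in my notation rather than necessarily the paper's exact formulation, combines reconstruction error with a row-sparsity penalty:

$$\min_{W}\ \|X - X W W^{\top}\|_F^2 + \lambda \|W\|_{2,p}^{p}, \qquad \|W\|_{2,p}^{p} = \sum_{i}\Big(\sum_{j} W_{ij}^{2}\Big)^{p/2},$$

where rows of $W$ driven to zero by the penalty correspond to discarded features, and $p < 1$ sharpens the sparsity.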
arXiv Detail & Related papers (2020-12-29T04:08:38Z)