NTS-NOTEARS: Learning Nonparametric Temporal DAGs With Time-Series Data
and Prior Knowledge
- URL: http://arxiv.org/abs/2109.04286v1
- Date: Thu, 9 Sep 2021 14:08:09 GMT
- Title: NTS-NOTEARS: Learning Nonparametric Temporal DAGs With Time-Series Data
and Prior Knowledge
- Authors: Xiangyu Sun, Guiliang Liu, Pascal Poupart, Oliver Schulte
- Abstract summary: We propose a score-based DAG structure learning method for time-series data.
The proposed method extends nonparametric NOTEARS, a recent continuous optimization approach for learning nonparametric instantaneous DAGs.
- Score: 38.2191204484905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a score-based DAG structure learning method for time-series data
that captures linear, nonlinear, lagged and instantaneous relations among
variables while ensuring acyclicity throughout the entire graph. The proposed
method extends nonparametric NOTEARS, a recent continuous optimization approach
for learning nonparametric instantaneous DAGs. The proposed method is faster
than constraint-based methods using nonlinear conditional independence tests.
We also promote the use of optimization constraints to incorporate prior
knowledge into the structure learning process. A broad set of experiments with
simulated data demonstrates that the proposed method discovers better DAG
structures than several recent comparison methods. We also evaluate the
proposed method on complex real-world data acquired from NHL ice hockey games
containing a mixture of continuous and discrete variables. The code is
available at https://github.com/xiangyu-sun-789/NTS-NOTEARS/.
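For readers new to the NOTEARS family that the abstract builds on, the sketch below illustrates its two key ingredients: the continuous acyclicity constraint h(W) = tr(exp(W * W)) - d = 0 from the original NOTEARS work, and prior knowledge expressed as a hard optimization constraint that pins a forbidden edge weight to zero. This is a minimal illustration with made-up weights, not the NTS-NOTEARS implementation; the real code is in the repository linked above.

    import numpy as np
    from scipy.linalg import expm

    def acyclicity(W):
        # NOTEARS constraint h(W) = tr(exp(W * W)) - d, where * is elementwise;
        # h(W) == 0 exactly when the weighted graph W is acyclic
        return np.trace(expm(W * W)) - W.shape[0]

    def apply_prior_knowledge(W, forbidden_edges=((2, 0),)):
        # prior knowledge as a hard constraint (illustrative): clamp the
        # weight of each forbidden edge to zero after every update
        for i, j in forbidden_edges:
            W[i, j] = 0.0
        return W

    W = np.random.uniform(-0.5, 0.5, size=(3, 3))
    np.fill_diagonal(W, 0.0)
    W = apply_prior_knowledge(W)
    print("h(W) =", acyclicity(W))  # a solver drives this toward 0 during training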
Related papers
- Sparse Orthogonal Parameters Tuning for Continual Learning [34.462967722928724]
Continual learning methods based on pre-trained models (PTMs), which adapt to successive downstream tasks without catastrophic forgetting, have recently gained attention.
We propose a novel yet effective method called SoTU (Sparse Orthogonal Parameters TUning).
arXiv Detail & Related papers (2024-11-05T05:19:09Z)
- TS-CausalNN: Learning Temporal Causal Relations from Non-linear Non-stationary Time Series Data [0.42156176975445486]
We propose a Time-Series Causal Neural Network (TS-CausalNN) to discover contemporaneous and lagged causal relations simultaneously.
In addition to its simple parallel design, the proposed model naturally handles the non-stationarity and non-linearity of the data.
arXiv Detail & Related papers (2024-04-01T20:33:29Z)
- AcceleratedLiNGAM: Learning Causal DAGs at the speed of GPUs [57.12929098407975]
We show that by efficiently parallelizing existing causal discovery methods, we can scale them to thousands of dimensions.
Specifically, we focus on the causal ordering subprocedure in DirectLiNGAM and implement GPU kernels to accelerate it.
This allows us to apply DirectLiNGAM to causal inference on large-scale gene expression data with genetic interventions, yielding competitive results.
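The causal-ordering subprocedure mentioned above repeatedly picks the most exogenous variable and regresses it out of the rest; this loop is what the paper ports to GPU kernels. Below is a rough CPU sketch; the exogeneity score uses a crude nonlinear-correlation proxy in place of DirectLiNGAM's mutual-information-based estimator, so it illustrates the structure of the loop rather than the exact statistic.

    import numpy as np

    def residual(xi, xj):
        # least-squares residual of regressing xi on xj (mean-zero 1-D arrays)
        return xi - (xi @ xj / (xj @ xj)) * xj

    def dependence(a, b):
        # crude proxy for dependence beyond plain correlation; DirectLiNGAM
        # uses a mutual-information-based measure here instead
        return abs(np.corrcoef(a, b ** 3)[0, 1]) + abs(np.corrcoef(a ** 3, b)[0, 1])

    def causal_order(X):
        # X: (n_samples, d) data matrix; returns an estimated causal ordering
        X = (X - X.mean(0)) / X.std(0)
        remaining, order = list(range(X.shape[1])), []
        while remaining:
            # the most exogenous variable leaves residuals most independent of it
            scores = {j: sum(dependence(X[:, j], residual(X[:, i], X[:, j]))
                             for i in remaining if i != j)
                      for j in remaining}
            root = min(scores, key=scores.get)
            order.append(root)
            remaining.remove(root)
            for i in remaining:  # regress the chosen root out of the rest
                X[:, i] = residual(X[:, i], X[:, root])
        return order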
arXiv Detail & Related papers (2024-03-06T15:06:11Z)
- Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference speed while retaining comparable performance.
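The model family being accelerated here is the kNN-LM: next-token probabilities from the base LM are interpolated with a distribution induced by nearest neighbors retrieved from the external datastore. A minimal sketch of that interpolation follows; the interpolation weight, distance kernel, and toy shapes are illustrative choices, not the paper's configuration.

    import numpy as np

    def knn_lm_probs(p_lm, query, keys, values, vocab_size, k=4, lam=0.25):
        # keys: (N, dim) datastore context vectors; values: (N,) next-token ids
        dists = np.linalg.norm(keys - query, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = np.exp(-dists[nearest])        # closer neighbors count more
        weights /= weights.sum()
        p_knn = np.zeros(vocab_size)
        for idx, w in zip(nearest, weights):
            p_knn[values[idx]] += w              # mass on each neighbor's token
        return lam * p_knn + (1 - lam) * p_lm    # interpolated next-token distribution

    rng = np.random.default_rng(0)
    keys, values = rng.normal(size=(100, 16)), rng.integers(0, 50, size=100)
    p_lm = np.full(50, 1 / 50)
    p = knn_lm_probs(p_lm, rng.normal(size=16), keys, values, vocab_size=50)
    assert abs(p.sum() - 1.0) < 1e-9             # still a valid distribution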
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
- Learning Large DAGs by Combining Continuous Optimization and Feedback Arc Set Heuristics [0.3553493344868413]
We propose two scalable heuristics for learning DAGs in the linear structural equation case.
Our methods learn the DAG by alternating between an unconstrained gradient-descent step that optimizes an objective function and a feedback arc set step that restores acyclicity.
Thanks to this decoupling, our methods scale up beyond thousands of variables.
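Based on the title and the summary above, the alternation can be pictured as follows: take unconstrained gradient steps on a least-squares score, then project the weights onto an acyclic graph by heuristically solving a maximum acyclic subgraph (equivalently, feedback arc set) problem. The sketch below uses a simple greedy vertex ordering as the projection; it is an assumed stand-in for the paper's specific heuristics.

    import numpy as np

    def grad_step(W, X, lr=1e-3):
        # one unconstrained gradient step on the score 0.5 * ||X - XW||^2
        R = X - X @ W
        W = W + lr * (X.T @ R)
        np.fill_diagonal(W, 0.0)
        return W

    def project_acyclic(W):
        # greedy maximum-acyclic-subgraph heuristic: order vertices by
        # (outgoing - incoming) edge weight, then drop all back-edges
        A = np.abs(W)
        order = sorted(range(W.shape[0]),
                       key=lambda v: A[v].sum() - A[:, v].sum(), reverse=True)
        rank = {v: r for r, v in enumerate(order)}
        W_acyc = W.copy()
        for i in range(W.shape[0]):
            for j in range(W.shape[0]):
                if rank[i] >= rank[j]:   # back-edge (or self-loop) w.r.t. the order
                    W_acyc[i, j] = 0.0
        return W_acyc

    rng = np.random.default_rng(0)
    X, W = rng.normal(size=(200, 10)), np.zeros((10, 10))
    for _ in range(50):                  # alternate the two phases
        for _ in range(20):
            W = grad_step(W, X)
        W = project_acyclic(W)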
arXiv Detail & Related papers (2021-07-01T16:10:21Z)
- Testing Directed Acyclic Graph via Structural, Supervised and Generative Adversarial Learning [7.623002328386318]
We propose a new hypothesis testing method for directed acyclic graphs (DAGs).
We build the test on highly flexible neural network learners.
We demonstrate the efficacy of the test through simulations and a brain connectivity network analysis.
arXiv Detail & Related papers (2021-06-02T21:18:59Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
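Each auxiliary objective in S^3-Rec maximizes a mutual-information lower bound between two views of the data (for example, an item and its attributes) with a contrastive loss. The generic InfoNCE-style building block below is an illustrative stand-in; the encoders, temperature, and batch construction are assumptions, not the paper's exact design.

    import numpy as np

    def info_nce(anchors, positives, temperature=0.1):
        # each anchor should score its own positive higher than the other
        # in-batch positives; anchors, positives: (batch, dim), L2-normalized
        logits = anchors @ positives.T / temperature        # pairwise similarities
        logits -= logits.max(axis=1, keepdims=True)         # numerical stability
        log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_softmax))               # matching pairs on diagonal

    rng = np.random.default_rng(0)
    z_item = rng.normal(size=(8, 32))
    z_attr = z_item + 0.1 * rng.normal(size=(8, 32))        # correlated second view
    norm = lambda z: z / np.linalg.norm(z, axis=1, keepdims=True)
    print(info_nce(norm(z_item), norm(z_attr)))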
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
- DYNOTEARS: Structure Learning from Time-Series Data [6.7638850283606855]
We propose a method that simultaneously estimates contemporaneous (intra-slice) and time-lagged (inter-slice) relationships between variables in a time-series.
Compared to state-of-the-art methods for learning dynamic Bayesian networks, our method is both scalable and accurate on real data.
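Concretely, DYNOTEARS models each slice as a structural vector autoregression, X_t = X_t W + X_{t-1} A + E, where the intra-slice matrix W must be acyclic (enforced with the NOTEARS constraint) while the inter-slice matrix A needs no such constraint, since lagged edges cannot form cycles. A least-squares sketch of that penalized score for a single lag follows; the penalty weight is an illustrative choice.

    import numpy as np
    from scipy.linalg import expm

    def dynotears_score(W, A, X_t, X_lag, rho=10.0):
        # penalized least-squares score for X_t = X_t @ W + X_lag @ A + E;
        # W: (d, d) intra-slice weights, A: (d, d) inter-slice weights
        R = X_t - X_t @ W - X_lag @ A
        loss = 0.5 * np.mean(R ** 2)
        h = np.trace(expm(W * W)) - W.shape[0]   # NOTEARS acyclicity on W only
        return loss + rho * h ** 2

    rng = np.random.default_rng(0)
    d, n = 5, 200
    X = rng.normal(size=(n + 1, d))
    W, A = np.zeros((d, d)), np.zeros((d, d))
    print(dynotears_score(W, A, X[1:], X[:-1]))  # score at the zero graph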
arXiv Detail & Related papers (2020-02-02T21:47:48Z)