Learning latent causal relationships in multiple time series
- URL: http://arxiv.org/abs/2203.10679v1
- Date: Mon, 21 Mar 2022 00:20:06 GMT
- Title: Learning latent causal relationships in multiple time series
- Authors: Jacek P. Dmochowski
- Abstract summary: In many systems, the causal relations are embedded in a latent space that is expressed in the observed data as a linear mixture.
A technique for blindly identifying the latent sources is presented.
The proposed technique is unsupervised and can be readily applied to any multiple time series to shed light on the causal relationships underlying the data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identifying the causal structure of systems with multiple dynamic elements is
critical to several scientific disciplines. The conventional approach is to
conduct statistical tests of causality, for example with Granger Causality,
between observed signals that are selected a priori. Here it is posited that,
in many systems, the causal relations are embedded in a latent space that is
expressed in the observed data as a linear mixture. A technique for blindly
identifying the latent sources is presented: the observations are projected
into pairs of components -- driving and driven -- to maximize the strength of
causality between the pairs. This leads to an optimization problem with closed
form expressions for the objective function and gradient that can be solved
with off-the-shelf techniques. After demonstrating proof-of-concept on
synthetic data with known latent structure, the technique is applied to
recordings from the human brain and historical cryptocurrency prices. In both
cases, the approach recovers multiple strong causal relationships that are not
evident in the observed data. The proposed technique is unsupervised and can be
readily applied to any multiple time series to shed light on the causal
relationships underlying the data.
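The core idea in the abstract — projecting the observations onto a driving/driven pair of components chosen to maximize the Granger-causal influence between them — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the paper derives closed-form expressions for the objective and its gradient, whereas this sketch stands in a generic gradient-free optimizer, and all variable names, dimensions, and noise levels are assumptions for the demo.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data with known latent structure: source 0 Granger-causes
# source 1, and both are linearly mixed into 4 observed channels.
T = 500
z = np.zeros((2, T))
for t in range(1, T):
    z[0, t] = 0.8 * z[0, t - 1] + rng.normal()
    z[1, t] = 0.8 * z[1, t - 1] + 0.9 * z[0, t - 1] + 0.1 * rng.normal()
A = rng.normal(size=(4, 2))              # unknown mixing matrix
X = A @ z + 0.05 * rng.normal(size=(4, T))

def granger_stat(w, X, lag=1):
    """Log variance ratio: how much the past of the driving projection
    improves prediction of the driven projection (larger = stronger)."""
    wd, wr = w[:4], w[4:]
    sd = wd @ X                          # candidate driving component
    sr = wr @ X                          # candidate driven component
    y = sr[lag:]
    past_r, past_d = sr[:-lag], sd[:-lag]
    # Restricted model: predict the driven component from its own past only.
    Pr = np.column_stack([past_r, np.ones_like(past_r)])
    res_r = y - Pr @ np.linalg.lstsq(Pr, y, rcond=None)[0]
    # Full model: additionally include the driving component's past.
    Pf = np.column_stack([past_r, past_d, np.ones_like(past_r)])
    res_f = y - Pf @ np.linalg.lstsq(Pf, y, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

# Maximize the causality strength over both projection vectors jointly.
w0 = rng.normal(size=8)
w0 /= np.linalg.norm(w0)
out = minimize(lambda w: -granger_stat(w, X), w0,
               method="Nelder-Mead", options={"maxiter": 5000})
w_drive, w_driven = out.x[:4], out.x[4:]
print("causality strength of recovered pair:", granger_stat(out.x, X))
```

Because the statistic is invariant to the scale of each projection vector, only the directions of `w_drive` and `w_driven` matter; in the paper's setting the optimization is repeated to extract multiple driving/driven pairs.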
Related papers
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics (IDOL).
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Causal disentanglement of multimodal data [1.589226862328831]
We introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships.
Our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
arXiv Detail & Related papers (2023-10-27T20:30:11Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Causal discovery for time series with constraint-based model and PMIME measure [0.0]
We present a novel approach for discovering causality in time series data that combines a causal discovery algorithm with an information theoretic-based measure.
We evaluate the performance of our approach on several simulated data sets, showing promising results.
arXiv Detail & Related papers (2023-05-31T09:38:50Z)
- Towards Causal Representation Learning and Deconfounding from Indefinite Data [17.793702165499298]
Non-statistical data (e.g., images, text) conflicts with traditional causal data in both its properties and the methods that apply to it.
We redefine causal data from two novel perspectives and then propose three data paradigms.
We implement the above designs as a dynamic variational inference model, tailored to learn causal representation from indefinite data.
arXiv Detail & Related papers (2023-05-04T08:20:37Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (C-DAGs for short).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- Path Signature Area-Based Causal Discovery in Coupled Time Series [0.0]
We propose the application of confidence sequences to analyze the significance of the magnitude of the signed area between two variables.
This approach provides a new way to define the confidence of a causal link existing between two time series.
arXiv Detail & Related papers (2021-10-23T19:57:22Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.