Temporally Disentangled Representation Learning
- URL: http://arxiv.org/abs/2210.13647v1
- Date: Mon, 24 Oct 2022 23:02:49 GMT
- Title: Temporally Disentangled Representation Learning
- Authors: Weiran Yao, Guangyi Chen, Kun Zhang
- Abstract summary: It is unknown whether the underlying latent variables and their causal relations are identifiable if they have arbitrary, nonparametric causal influences in between.
We propose TDRL, a principled framework to recover time-delayed latent causal variables.
Our approach considerably outperforms existing baselines that do not correctly exploit this modular representation of changes.
- Score: 14.762231867144065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently in the field of unsupervised representation learning, strong
identifiability results for disentanglement of causally-related latent
variables have been established by exploiting certain side information, such as
class labels, in addition to independence. However, most existing work is
constrained by functional-form assumptions, such as independent sources or
linear transitions, and by distributional assumptions, such as stationarity or
exponential-family distributions. It is unknown whether the
underlying latent variables and their causal relations are identifiable if they
have arbitrary, nonparametric causal influences in between. In this work, we
establish the identifiability theories of nonparametric latent causal processes
from their nonlinear mixtures under fixed temporal causal influences and
analyze how distribution changes can further benefit the disentanglement. We
propose \textbf{\texttt{TDRL}}, a principled framework to recover time-delayed
latent causal variables and identify their relations from measured sequential
data under stationary environments and under different distribution shifts.
Specifically, the framework can factorize unknown distribution shifts into
changes in the transition distribution, under both fixed and time-varying
latent causal relations, and changes in the observation process. Through experiments,
we show that time-delayed latent causal influences are reliably identified and
that our approach considerably outperforms existing baselines that do not
correctly exploit this modular representation of changes. Our code is available
at: \url{https://github.com/weirayao/tdrl}.
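As a reading aid, the following is a minimal sketch of the data-generating process assumed in this line of work; the notation and the restriction to lag-1 influences are our simplification, not the paper's exact formulation:
\[
  x_t = g(z_t), \qquad
  z_{it} = f_i\big(\mathrm{Pa}(z_{it}),\ \epsilon_{it}\big), \qquad
  \mathrm{Pa}(z_{it}) \subseteq \{z_{1,t-1}, \dots, z_{n,t-1}\},
\]
where $g$ is an unknown invertible nonlinear mixing function, each $f_i$ is an arbitrary (nonparametric) transition function, and the noise terms $\epsilon_{it}$ are mutually independent. One way to read the paper's factorization of unknown distribution shifts is to let the transitions additionally depend on a regime index $u$, i.e. $z_{it} = f_i(\mathrm{Pa}(z_{it}),\ \epsilon_{it};\ u)$, which separates changes in the transition distribution from changes in the observation process $g$.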
Related papers
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose IDOL, an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graph that relates them.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Temporally Disentangled Representation Learning under Unknown Nonstationarity [35.195001384417395]
We introduce NCTRL, a principled estimation framework for reconstructing time-delayed latent causal variables.
Empirical evaluations demonstrated the reliable identification of time-delayed latent causal influences.
arXiv Detail & Related papers (2023-10-28T06:46:03Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Learning Latent Causal Dynamics [14.762231867144065]
We propose a principled framework, called LiLY, to first recover time-delayed latent causal variables.
We then identify their relations from measured temporal data under different distribution shifts.
The correction step is then formulated as learning the low-dimensional change factors with a few samples.
arXiv Detail & Related papers (2022-02-10T04:23:32Z)
- Learning Temporally Causal Latent Processes from General Temporal Data [22.440008291454287]
We propose two provable conditions under which temporally causal latent processes can be identified from their nonlinear mixtures.
Experimental results on various data sets demonstrate that temporally causal latent processes are reliably identified from observed variables.
arXiv Detail & Related papers (2021-10-11T17:16:19Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.