Causal Recurrent Variational Autoencoder for Medical Time Series
Generation
- URL: http://arxiv.org/abs/2301.06574v1
- Date: Mon, 16 Jan 2023 19:13:33 GMT
- Title: Causal Recurrent Variational Autoencoder for Medical Time Series
Generation
- Authors: Hongming Li, Shujian Yu, Jose Principe
- Abstract summary: We propose the causal recurrent variational autoencoder (CR-VAE), a novel generative model that learns a Granger causal graph from a multivariate time series $\mathbf{x}$.
Our model consistently outperforms state-of-the-art time series generative models both qualitatively and quantitatively.
- Score: 12.82521953179345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose causal recurrent variational autoencoder (CR-VAE), a novel
generative model that is able to learn a Granger causal graph from a
multivariate time series x and incorporates the underlying causal mechanism
into its data generation process. Distinct from classical recurrent VAEs, our
CR-VAE uses a multi-head decoder, in which the $p$-th head is responsible for
generating the $p$-th dimension of $\mathbf{x}$ (i.e., $\mathbf{x}^p$). By
imposing a sparsity-inducing penalty on the weights (of the decoder) and
encouraging specific sets of weights to be zero, our CR-VAE learns a sparse
adjacency matrix that encodes causal relations between all pairs of variables.
Thanks to this causal matrix, our decoder strictly obeys the underlying
principles of Granger causality, thereby making the data generating process
transparent. We develop a two-stage approach to train the overall objective.
Empirically, we evaluate the behavior of our model on synthetic data and two
real-world human brain datasets involving, respectively, electroencephalography
(EEG) signals and functional magnetic resonance imaging (fMRI) data. Our model
consistently outperforms state-of-the-art time
series generative models both qualitatively and quantitatively. Moreover, it
also discovers a faithful causal graph with similar or improved accuracy over
existing Granger causality-based causal inference methods. Code of CR-VAE is
publicly available at https://github.com/hongmingli1995/CR-VAE.
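The multi-head decoder is the structural core of the model: head $p$ generates $\mathbf{x}^p$, and a group-sparsity penalty on each head's input weights prunes the non-causal parents of variable $p$, so the surviving weight-norm pattern doubles as a Granger-causal adjacency matrix. A minimal PyTorch sketch of that idea follows; it is an illustration only, not the authors' implementation (see the linked repository), and the GRU heads, layer sizes, and threshold are assumptions.

```python
# Minimal sketch of a multi-head recurrent decoder in the spirit of CR-VAE.
# NOT the authors' code; layer sizes and the GRU choice are assumptions.
import torch
import torch.nn as nn

class MultiHeadDecoder(nn.Module):
    def __init__(self, n_vars: int, hidden_dim: int = 32):
        super().__init__()
        self.n_vars = n_vars
        # One recurrent head per variable: head p predicts dimension p of x.
        self.in_proj = nn.ModuleList(
            [nn.Linear(n_vars, hidden_dim, bias=False) for _ in range(n_vars)]
        )
        self.rnns = nn.ModuleList(
            [nn.GRU(hidden_dim, hidden_dim, batch_first=True) for _ in range(n_vars)]
        )
        self.out = nn.ModuleList([nn.Linear(hidden_dim, 1) for _ in range(n_vars)])

    def forward(self, x_lagged: torch.Tensor) -> torch.Tensor:
        # x_lagged: (batch, time, n_vars) past values; head p sees all variables,
        # but sparsity on in_proj[p] prunes non-causal parents of variable p.
        outs = []
        for p in range(self.n_vars):
            h, _ = self.rnns[p](self.in_proj[p](x_lagged))
            outs.append(self.out[p](h))            # (batch, time, 1)
        return torch.cat(outs, dim=-1)             # (batch, time, n_vars)

    def group_lasso_penalty(self) -> torch.Tensor:
        # Column-wise L2 norm of each head's input weights: driving a whole
        # column to zero removes variable q as a Granger cause of variable p.
        return sum(w.weight.norm(dim=0).sum() for w in self.in_proj)

    def adjacency(self, thresh: float = 1e-3) -> torch.Tensor:
        # A[p, q] = 1 if variable q is retained as a cause of variable p.
        rows = [w.weight.norm(dim=0) for w in self.in_proj]
        return (torch.stack(rows) > thresh).float()
```

In training, the penalty would be added to the usual ELBO terms; the two-stage schedule mentioned in the abstract is not reproduced here.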
Related papers
- Scaling Laws in Linear Regression: Compute, Parameters, and Data [86.48154162485712]
We study the theory of scaling laws in an infinite dimensional linear regression setup.
We show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$, where $M$ is the model size, $N$ the data size, and $a$ the power-law exponent of the data spectrum.
Our theory is consistent with empirical neural scaling laws and is verified by numerical simulation.
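As a quick illustration of the trade-off this bound describes, the toy evaluation below plugs in arbitrary values for $a$, $M$, and $N$ (all assumed purely for illustration):

```python
# Toy evaluation of the reducible-error bound Theta(M^{-(a-1)} + N^{-(a-1)/a}).
# The exponent a and the (M, N) pairs are arbitrary illustrative choices.
a = 2.0
for M, N in [(10**3, 10**6), (10**4, 10**6), (10**4, 10**8)]:
    param_term = M ** -(a - 1)          # error floor from limited parameters
    data_term = N ** (-(a - 1) / a)     # error floor from limited data
    print(f"M={M:>6}, N={N:>9}: {param_term:.2e} + {data_term:.2e}")
```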
arXiv Detail & Related papers (2024-06-12T17:53:29Z)
- Bayesian Vector AutoRegression with Factorised Granger-Causal Graphs [10.030023978159978]
We study the problem of automatically discovering Granger causal relations from observational time-series data.
We propose a new Bayesian VAR model with a hierarchical factorised prior distribution over binary Granger causal graphs.
We develop an efficient algorithm to infer the posterior over binary Granger causal graphs.
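The generative side of such a model can be pictured as a VAR whose coefficient matrix is gated by a binary graph. The sketch below shows only this forward model; the paper's actual contribution, Bayesian posterior inference over the graph, is not reproduced:

```python
# A VAR(1) gated by a binary Granger-causal graph G (G[p, q] = 1 means
# variable q Granger-causes variable p). Dimensions, sparsity level, and
# noise scales are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 200
G = rng.random((d, d)) < 0.3                    # assumed binary causal graph
A = G * rng.normal(0.0, 0.4, size=(d, d))       # masked VAR coefficients
A *= 0.9 / max(1e-8, np.max(np.abs(np.linalg.eigvals(A))))  # keep it stable

x = np.zeros((T, d))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(0.0, 0.1, size=d)
```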
arXiv Detail & Related papers (2024-02-06T01:01:23Z)
- $t^3$-Variational Autoencoder: Learning Heavy-tailed Data with Student's t and Power Divergence [7.0479532872043755]
$t^3$VAE is a modified VAE framework that incorporates Student's t-distributions for the prior, encoder, and decoder.
We show that $t^3$VAE significantly outperforms other models on CelebA and imbalanced CIFAR-100 datasets.
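The heavy-tailed sampling such a model needs can be written with the standard Student-t reparameterization; the sketch below shows only that draw (the paper's power-divergence objective is not reproduced), and the degrees of freedom are an assumption:

```python
# Reparameterized draw from a multivariate Student's t prior:
# t = z / sqrt(g / nu), with z ~ N(0, I) and g ~ Chi2(nu).
import torch

def sample_student_t(batch: int, dim: int, nu: float = 5.0) -> torch.Tensor:
    z = torch.randn(batch, dim)
    g = torch.distributions.Chi2(torch.tensor(nu)).sample((batch, 1))
    return z / torch.sqrt(g / nu)

z = sample_student_t(16, 8)   # heavier tails than torch.randn(16, 8)
```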
arXiv Detail & Related papers (2023-12-02T13:14:28Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
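The prior design can be pictured as a learned linear map advancing the latent state, in the Koopman spirit; the sketch below is an illustration with assumed sizes and noise scale, not the KoVAE implementation:

```python
# Latent prior with linear (Koopman-style) dynamics: z_{t+1} = A z_t + noise.
import torch
import torch.nn as nn

class LinearLatentPrior(nn.Module):
    def __init__(self, latent_dim: int):
        super().__init__()
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)  # linear map

    def rollout(self, z0: torch.Tensor, steps: int, noise: float = 0.05):
        zs, z = [z0], z0
        for _ in range(steps - 1):
            z = self.A(z) + noise * torch.randn_like(z)
            zs.append(z)
        return torch.stack(zs, dim=1)    # (batch, steps, latent_dim)

prior = LinearLatentPrior(latent_dim=8)
z_path = prior.rollout(torch.randn(4, 8), steps=25)
```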
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models [11.879170124003252]
State-of-the-art machine learning models often learn spurious correlations embedded in the training data.
This poses risks when deploying these models for high-stakes decision-making.
We propose Reveal to Revise (R2R) to identify, mitigate, and (re-)evaluate spurious model behavior.
arXiv Detail & Related papers (2023-03-22T15:23:09Z)
- Recovering Barabási-Albert Parameters of Graphs through Disentanglement [0.0]
Graph modeling approaches such as Erdős-Rényi (ER) random graphs or Barabási-Albert (BA) graphs aim to reproduce properties of real-world graphs in an interpretable way.
Previous work by Stoehr et al. addresses these issues by learning the generation process from graph data.
We focus on recovering the generative parameters of BA graphs by replacing their $\beta$-VAE decoder with a sequential one.
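The forward model being inverted here is ordinary BA generation; a minimal sketch with networkx (the paper's sequential-decoder recovery of the parameters is not shown):

```python
# Generate a Barabasi-Albert graph; n (nodes) and m (edges per new node) are
# the generative parameters the paper aims to recover from the graph alone.
import networkx as nx

n, m = 100, 3
G = nx.barabasi_albert_graph(n, m, seed=42)
degrees = sorted((d for _, d in G.degree()), reverse=True)
print(degrees[:10])   # heavy-tailed degree sequence typical of BA graphs
```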
arXiv Detail & Related papers (2021-05-03T16:45:43Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that GMR achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
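Pseudo-rehearsal with a GMM can be sketched as fit-sample-replay; the data, component count, and mixing below are illustrative assumptions, not the paper's GMR pipeline:

```python
# Fit a GMM on task-1 data, then sample synthetic "rehearsal" points to mix
# into task-2 training so the old task is not forgotten.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
task1 = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

gmm = GaussianMixture(n_components=5, random_state=0).fit(task1)
replay, _ = gmm.sample(200)              # pseudo-rehearsal samples

task2 = rng.normal(loc=4.0, scale=1.0, size=(500, 2))
combined = np.vstack([task2, replay])    # train the next stage on old + new
```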
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Brain Image Synthesis with Unsupervised Multivariate Canonical CSC$\ell_4$Net [122.8907826672382]
We propose to learn dedicated features that cross both inter- and intra-modal variations using a novel CSC$\ell_4$Net.
arXiv Detail & Related papers (2021-03-22T05:19:40Z)
- Knowledge Generation -- Variational Bayes on Knowledge Graphs [0.685316573653194]
This thesis is a proof of concept for the potential of Variational Auto-Encoders (VAEs) in representation learning on real-world Knowledge Graphs.
Inspired by successful approaches to graph generation, we evaluate the capabilities of our model, the Relational Graph Variational Auto-Encoder (RGVAE).
The RGVAE is first evaluated on link prediction; the mean reciprocal rank (MRR) scores on the FB15K-237 and WN18RR datasets are compared.
We investigate the latent space in a twofold experiment: first, linear interpolation between the latent representations of two triples, then the exploration of each latent dimension.
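For reference, the MRR metric mentioned above is just the mean of reciprocal ranks assigned to the true entity:

```python
# Mean reciprocal rank: average of 1/rank of the correct answer; higher is better.
def mean_reciprocal_rank(ranks: list[int]) -> float:
    return sum(1.0 / r for r in ranks) / len(ranks)

print(mean_reciprocal_rank([1, 2, 10]))   # 0.5333...
```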
arXiv Detail & Related papers (2021-01-21T21:23:17Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAEs) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
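The analytic tractability rests on inner products of Gaussians having closed form; the sketch below computes the Cauchy-Schwarz divergence for the single-Gaussian case (a GMM version would sum such pairwise terms):

```python
# D_CS(p, q) = -log( <p, q> / sqrt(<p, p> <q, q>) ), where <p, q> is the
# integral of p*q; for Gaussians this integral is itself a Gaussian density.
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gauss_inner(m1, S1, m2, S2):
    # integral of N(x; m1, S1) * N(x; m2, S2) dx = N(m1; m2, S1 + S2)
    return mvn.pdf(m1, mean=m2, cov=S1 + S2)

def cs_divergence(m1, S1, m2, S2):
    cross = gauss_inner(m1, S1, m2, S2)
    norm = np.sqrt(gauss_inner(m1, S1, m1, S1) * gauss_inner(m2, S2, m2, S2))
    return -np.log(cross / norm)

I = np.eye(2)
print(cs_divergence(np.zeros(2), I, np.zeros(2), I))      # 0.0 (identical)
print(cs_divergence(np.zeros(2), I, 2 * np.ones(2), I))   # > 0
```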
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
A VAE does not, in general, consistently encode samples generated from its own decoder. We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
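Self-consistency can be read as asking prior samples to survive a decode-encode round trip; a sketch of such a regularizer over generic encoder/decoder callables (an illustration, not the paper's exact procedure):

```python
# Penalize the gap between a prior draw z and the encoding of its decoding.
import torch

def self_consistency_loss(encoder, decoder, batch: int, latent_dim: int):
    z = torch.randn(batch, latent_dim)    # sample from the prior
    z_rec = encoder(decoder(z))           # re-encode the generated sample
    return ((z_rec - z) ** 2).mean()
```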
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.