Autoregressive Asymmetric Linear Gaussian Hidden Markov Models
- URL: http://arxiv.org/abs/2010.15604v1
- Date: Tue, 27 Oct 2020 08:58:46 GMT
- Title: Autoregressive Asymmetric Linear Gaussian Hidden Markov Models
- Authors: Carlos Puerto-Santana, Pedro Larrañaga and Concha Bielza
- Abstract summary: Asymmetric hidden Markov models provide a framework where the trend of the process can be expressed as a latent variable.
We show how inference, hidden state decoding and parameter learning must be adapted to fit the proposed model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a real-life process evolving over time, the relationship between its relevant variables may change. Therefore, it is advantageous to have different inference models for each state of the process. Asymmetric hidden Markov models fulfil this dynamical requirement and provide a framework where the trend of the process can be expressed as a latent variable. In this paper, we modify these recent asymmetric hidden Markov models to have an asymmetric autoregressive component, allowing the model to choose the order of autoregression that maximizes its penalized likelihood for a given training set. Additionally, we show how inference, hidden state decoding and parameter learning must be adapted to fit the proposed model. Finally, we run experiments with synthetic and real data to show the capabilities of this new model.
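The penalized-likelihood order selection described in the abstract can be illustrated in isolation. The sketch below fits linear Gaussian AR(p) regressions for increasing p and keeps the order with the lowest BIC; it covers a single hidden state only, whereas in the paper this choice is embedded in EM with per-state responsibilities, and the exact penalty may differ.

```python
# Minimal sketch: choose an autoregression order by penalized likelihood
# (BIC) for one linear Gaussian emission. Illustrative only; not the
# paper's full asymmetric HMM learning procedure.
import numpy as np

def fit_ar_gaussian(x, p):
    """Least-squares AR(p) fit with intercept; returns the Gaussian log-likelihood."""
    n = len(x) - p
    lags = [x[p - k - 1 : p - k - 1 + n] for k in range(p)]
    X = np.column_stack([np.ones(n)] + lags)
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    var = resid @ resid / n                          # ML noise variance
    return -0.5 * n * (np.log(2 * np.pi * var) + 1)  # log-likelihood at the ML fit

def select_order(x, max_p=5):
    """Pick the AR order minimizing BIC = k*ln(n) - 2*logL."""
    best_p, best_bic = 0, np.inf
    for p in range(max_p + 1):
        ll = fit_ar_gaussian(x, p)
        k = p + 2                                    # AR coefficients + intercept + variance
        bic = k * np.log(len(x) - p) - 2 * ll
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):                              # simulate an AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.5)
print("selected AR order:", select_order(x))         # expect 2
```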
Related papers
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
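A loose sketch of the elect/mask/rescale steps on toy task vectors (deltas from a shared pretrained model) follows. The aggregation details are a paraphrase of the idea, not the paper's exact procedure.

```python
# Hedged sketch of Elect-Mask-Rescale merging over flattened task vectors.
import numpy as np

def emr_merge(task_vectors):
    T = np.stack(task_vectors)                      # (tasks, params)
    elected = np.sign(T.sum(axis=0))                # Elect: unified sign per parameter
    agree = np.sign(T) == elected                   # entries agreeing with elected sign
    unified = np.where(agree, np.abs(T), 0).max(axis=0) * elected
    masks = [agree[t] for t in range(len(T))]       # Mask: per-task binary masks
    scales = [np.abs(T[t]).sum() / max(np.abs(unified * masks[t]).sum(), 1e-12)
              for t in range(len(T))]               # Rescale: match per-task magnitude
    return unified, masks, scales

rng = np.random.default_rng(1)
tvs = [rng.normal(size=8) for _ in range(3)]        # toy task vectors
unified, masks, scales = emr_merge(tvs)
# Per-task weights at inference time would be reconstructed as:
#   pretrained + scales[t] * masks[t] * unified
print(unified.round(2), [round(s, 2) for s in scales])
```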
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- A Generative Model of Symmetry Transformations [44.87295754993983]
We build a generative model that explicitly aims to capture the data's approximate symmetries.
We empirically demonstrate its ability to capture symmetries under affine and color transformations.
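The generative decomposition can be pictured with a toy example: sample latent content, sample a transformation, apply it, and let small noise make the symmetry only approximate. A sketch on 2-D point sets, with rotations standing in for the paper's affine and color transformations.

```python
# Toy generative process with an explicit symmetry variable.
import numpy as np

rng = np.random.default_rng(6)
prototype = rng.normal(size=(50, 2))                # latent "content"
angle = rng.uniform(0, 2 * np.pi)                   # sampled symmetry: a rotation
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
noise = rng.normal(scale=0.05, size=(50, 2))        # approximate, not exact, symmetry
sample = prototype @ R.T + noise                    # observed data point
print(sample.shape)
```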
arXiv Detail & Related papers (2024-03-04T11:32:18Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
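The two-OLS idea invites a naive illustration (this is not the paper's estimator, which needs no windowing): one least-squares pass recovers rough local parameters, a second recovers their linear dynamics.

```python
# Hedged two-stage OLS illustration of time-varying regression with
# linearly evolving parameters; all constants here are invented.
import numpy as np

rng = np.random.default_rng(2)
d, n, w = 2, 400, 20                                # dim, length, window size
A = np.array([[0.998, 0.05], [-0.05, 0.998]])       # assumed parameter dynamics
theta = np.zeros((n, d)); theta[0] = [1.0, 0.0]
X = rng.normal(size=(n, d))
y = np.empty(n)
for t in range(n):
    if t:
        theta[t] = A @ theta[t - 1]                 # parameters drift linearly
    y[t] = X[t] @ theta[t] + 0.05 * rng.normal()

# OLS #1: sliding-window least squares -> rough local parameter estimates.
thetas_hat = []
for s in range(0, n - w + 1, w):
    b, *_ = np.linalg.lstsq(X[s:s + w], y[s:s + w], rcond=None)
    thetas_hat.append(b)
thetas_hat = np.array(thetas_hat)

# OLS #2: regress successive estimates on each other -> dynamics matrix.
# One window step spans w time steps, so this recovers roughly A^w.
M, *_ = np.linalg.lstsq(thetas_hat[:-1], thetas_hat[1:], rcond=None)
print("recovered A^w:\n", M.T.round(2))
print("true      A^w:\n", np.linalg.matrix_power(A, w).round(2))
```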
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- Online Variational Filtering and Parameter Learning [26.79116194327116]
We present a variational method for online state estimation and parameter learning in state-space models (SSMs).
We use gradients to simultaneously optimize a lower bound on the log evidence with respect to both model parameters and a variational approximation of the states' posterior distribution.
Unlike existing approaches, our method is able to operate in an entirely online manner, such that historic observations do not require revisitation after being incorporated and the cost of updates at each time step remains constant.
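A toy instance of the idea on a 1-D linear Gaussian SSM with known parameters (so parameter learning, the other half of the method, is omitted): each arriving observation triggers gradient ascent on a one-step ELBO over q(z_t), with the previous variational posterior serving as the prior. Here the Gaussian family is exact, so the result should match a Kalman filter.

```python
# Sketch of online variational filtering on a 1-D linear Gaussian SSM.
import numpy as np

a, q, c, r = 0.9, 0.3, 1.0, 0.5         # known dynamics/emission parameters
rng = np.random.default_rng(3)
T = 100
z = np.zeros(T); y = np.zeros(T)
for t in range(T):
    z[t] = a * z[t - 1] + q * rng.normal() if t else rng.normal()
    y[t] = c * z[t] + r * rng.normal()

m_prev, s_prev = 0.0, 1.0               # q(z_{-1}); matches the filter's prior
filt = []
for t in range(T):
    mu, sig2 = a * m_prev, a**2 * s_prev**2 + q**2  # predictive prior
    m, s = mu, np.sqrt(sig2)            # initialize q(z_t) = N(m, s^2) at the prior
    for _ in range(50):                 # gradient ascent on the one-step ELBO
        gm = -c * (c * m - y[t]) / r**2 - (m - mu) / sig2
        gs = -c**2 * s / r**2 + 1.0 / s - s / sig2
        m, s = m + 0.05 * gm, s + 0.05 * gs
    filt.append(m); m_prev, s_prev = m, s

# Exact Kalman filter for reference: the variational means should agree closely.
km, kv, kms = 0.0, 1.0, []
for t in range(T):
    pm, pv = a * km, a**2 * kv + q**2
    K = pv * c / (c**2 * pv + r**2)
    km, kv = pm + K * (y[t] - c * pm), (1 - K * c) * pv
    kms.append(km)
print("max gap vs Kalman:", np.abs(np.array(filt) - np.array(kms)).max())
```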
arXiv Detail & Related papers (2021-10-26T10:25:04Z)
- A moment-matching metric for latent variable generative models [0.0]
By Goodhart's law, when a metric becomes a target it ceases to be a good metric.
We propose a new metric for model comparison or regularization that relies on moments.
It is common to draw samples from the fitted distribution when evaluating latent variable models.
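A minimal version of such a metric (the exact moments and weighting below are illustrative choices, not the paper's): compare empirical means and covariances of model samples against held-out data rather than re-using the training objective.

```python
# Sketch of a moment-based model-comparison score.
import numpy as np

def moment_distance(data, samples):
    """L2 gap between means plus Frobenius gap between covariances."""
    dm = np.linalg.norm(data.mean(axis=0) - samples.mean(axis=0))
    dc = np.linalg.norm(np.cov(data.T) - np.cov(samples.T))
    return dm + dc

rng = np.random.default_rng(4)
data = rng.normal(size=(1000, 3))
good = rng.normal(size=(1000, 3))                    # model matching the data
bad = rng.normal(loc=0.5, scale=2.0, size=(1000, 3))
print(moment_distance(data, good), moment_distance(data, bad))
```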
arXiv Detail & Related papers (2021-10-04T17:51:08Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
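The ratio trick admits a compact demonstration (toy simulator and feature map of my choosing, not the paper's estimator): a classifier trained to tell dependent (theta, x) pairs from shuffled ones recovers the likelihood-to-evidence ratio in its logit.

```python
# Sketch of amortized likelihood-to-evidence ratio estimation via a
# classifier. Toy simulator: theta ~ N(0,1), x | theta ~ N(theta, 1).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20000
theta = rng.normal(size=n)
x = theta + rng.normal(size=n)              # simulate from the model
x_shuf = rng.permutation(x)                 # break dependence -> marginal pairs

def feats(t, v):                            # quadratic features suffice here
    return np.column_stack([t, v, t * v, t**2, v**2])

X = np.vstack([feats(theta, x), feats(theta, x_shuf)])
labels = np.concatenate([np.ones(n), np.zeros(n)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# The classifier logit estimates log p(x|theta)/p(x); compare with the
# analytic value for this Gaussian toy model, where p(x) = N(0, 2).
t0, x0 = 0.5, 1.0
logit = clf.decision_function(feats(np.array([t0]), np.array([x0])))[0]
true = -0.5 * (x0 - t0)**2 + x0**2 / 4 + 0.5 * np.log(2.0)
print("estimated:", logit, "analytic:", true)
```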
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Active and sparse methods in smoothed model checking [2.28438857884398]
We consider extensions to smoothed model checking based on sparse variational methods and active learning.
Online extensions of sparse variational Gaussian process inference algorithms are demonstrated to provide a scalable method for implementing active learning approaches for smoothed model checking.
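A loose sketch of the active-learning loop only (dense GP regression instead of the sparse variational machinery, and an invented toy satisfaction function): fit a GP to satisfaction estimates, then query the parameter value with the largest predictive variance.

```python
# Sketch of variance-based active learning with a (dense) GP surrogate.
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-2):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    mean = Ks.T @ np.linalg.solve(K, ytr)
    var = np.diag(rbf(Xte, Xte)) - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, var

f = lambda x: np.clip(0.5 + 0.4 * np.sin(6 * x), 0, 1)  # toy satisfaction probability
grid = np.linspace(0, 1, 200)
X, y = np.array([0.1, 0.9]), f(np.array([0.1, 0.9]))
for _ in range(10):                        # active learning loop
    _, var = gp_posterior(X, y, grid)
    xq = grid[np.argmax(var)]              # query the most uncertain parameter
    X, y = np.append(X, xq), np.append(y, f(xq))
print("queried points:", np.sort(X).round(2))
```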
arXiv Detail & Related papers (2021-04-20T13:03:25Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
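One way to read the smoothness-inducing part, shown as a standalone sketch (in the model, such a term regularizes the sequential VAE's per-time-stamp Gaussians during training): penalize divergence between the distributions at neighbouring time stamps, so abrupt jumps in mean or variance are costly.

```python
# Sketch of a smoothness penalty over per-time-stamp Gaussians.
import numpy as np

def kl_gauss(m1, v1, m2, v2):
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2)**2) / v2 - 1.0)

def smoothness_penalty(mu, var):
    """Sum of KL(p_t || p_{t+1}) over consecutive time stamps."""
    return kl_gauss(mu[:-1], var[:-1], mu[1:], var[1:]).sum()

t = np.linspace(0, 1, 100)
smooth_mu = np.sin(2 * np.pi * t)
jumpy_mu = smooth_mu + (np.arange(100) % 7 == 0) * 1.5  # inject spikes
var = np.full(100, 0.1)
print(smoothness_penalty(smooth_mu, var), smoothness_penalty(jumpy_mu, var))
```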
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the resulting low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Variational Mixture of Normalizing Flows [0.0]
Deep generative models, such as generative adversarial networks, variational autoencoders, and their variants, have seen wide adoption for the task of modelling complex data distributions.
Normalizing flows overcome the lack of tractable exact likelihoods in such models by leveraging the change-of-variables formula for probability density functions.
The present work goes further by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
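The change-of-variables formula referred to above, in its simplest runnable form: a 1-D affine flow, where log p(x) is the base log-density at the inverted point plus the log absolute Jacobian determinant of the inverse map.

```python
# Minimal sketch of the change-of-variables formula behind normalizing flows.
import numpy as np
from scipy.stats import norm

a, b = 2.0, 1.0                             # flow: x = a * z + b, z ~ N(0, 1)
def log_px(x):
    z = (x - b) / a                         # inverse map
    return norm.logpdf(z) - np.log(abs(a))  # base density + log|det Jacobian|

# Sanity check against the implied Gaussian N(b, a^2):
x = 0.3
print(log_px(x), norm.logpdf(x, loc=b, scale=abs(a)))
```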
arXiv Detail & Related papers (2020-09-01T17:20:08Z)