Bayesian Inference in High-Dimensional Time-Series with the Orthogonal
Stochastic Linear Mixing Model
- URL: http://arxiv.org/abs/2106.13379v1
- Date: Fri, 25 Jun 2021 01:12:54 GMT
- Title: Bayesian Inference in High-Dimensional Time-Series with the Orthogonal
Stochastic Linear Mixing Model
- Authors: Rui Meng, Kristofer Bouchard
- Abstract summary: Many modern time-series datasets contain large numbers of output response variables sampled for prolonged periods of time.
In this paper, we propose a new Markov chain Monte Carlo framework for the analysis of diverse, large-scale time-series datasets.
- Score: 2.7909426811685893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many modern time-series datasets contain large numbers of output response
variables sampled for prolonged periods of time. For example, in neuroscience,
the activities of hundreds to thousands of neurons are recorded during behaviors and in
response to sensory stimuli. Multi-output Gaussian process models leverage the
nonparametric nature of Gaussian processes to capture structure across multiple
outputs. However, this class of models typically assumes that the correlations
between the output response variables are invariant in the input space.
Stochastic linear mixing models (SLMMs) assume that the mixing coefficients depend
on the input, making them more flexible and effective at capturing complex output
dependence. However, inference for SLMMs is currently intractable for
large datasets, making them inapplicable to several modern time-series
problems. In this paper, we propose a new regression framework, the orthogonal
stochastic linear mixing model (OSLMM) that introduces an orthogonal constraint
amongst the mixing coefficients. This constraint reduces the computational
burden of inference while retaining the capability to handle complex output
dependence. We provide Markov chain Monte Carlo inference procedures for both
SLMM and OSLMM and demonstrate superior model scalability and reduced
prediction error of OSLMM compared with state-of-the-art methods on several
real-world applications. In neurophysiology recordings, we use the inferred
latent functions for compact visualization of population responses to auditory
stimuli, and demonstrate superior results compared to a competing method
(GPFA). Together, these results demonstrate that OSLMM will be useful for the
analysis of diverse, large-scale time-series datasets.
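As a rough illustration of the modeling idea (not the authors' implementation), the sketch below generates multi-output time series by mixing latent Gaussian-process draws through an input-dependent orthogonal matrix; here the orthogonal mixing is a smoothly rotating 2x2 rotation, a deliberately simple stand-in for the constrained mixing process in OSLMM, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Time grid and a squared-exponential kernel for the latent GPs.
t = np.linspace(0.0, 10.0, 200)

def se_kernel(x, y, length=1.0, var=1.0):
    d = x[:, None] - y[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

K = se_kernel(t, t) + 1e-6 * np.eye(t.size)
L = np.linalg.cholesky(K)

Q = 2                                        # number of latent processes
g = L @ rng.standard_normal((t.size, Q))     # latent GP draws, shape (T, Q)

# Input-dependent *orthogonal* mixing: a slowly rotating 2x2 rotation
# matrix, so the mixing columns stay orthonormal at every input t.
theta = 0.2 * t                              # hypothetical smooth angle process
y = np.empty((t.size, Q))
for i, th in enumerate(theta):
    W = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])    # W(t) is orthogonal
    y[i] = W @ g[i]

y += 0.05 * rng.standard_normal(y.shape)     # observation noise
```

Intuitively, orthonormal mixing columns let the outputs be decorrelated by projection onto those columns, which is one way an orthogonality constraint can reduce the inference burden described in the abstract.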
Related papers
- Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling [22.256068524699472]
In this work, we propose an Annealed Importance Sampling (AIS) approach for variational learning of Gaussian process latent variable models.
We combine the strengths of Sequential Monte Carlo samplers and VI to explore a wider range of posterior distributions and gradually approach the target distribution.
Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
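For readers unfamiliar with AIS, here is a minimal, self-contained sketch of plain annealed importance sampling on a toy 1-D target; it has no connection to the paper's GPLVM setting, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

log_prior  = lambda x: -0.5 * x**2                       # N(0, 1), up to a constant
log_target = lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2     # unnormalized target

def log_tempered(x, beta):
    # Geometric bridge between prior (beta=0) and target (beta=1).
    return (1.0 - beta) * log_prior(x) + beta * log_target(x)

n_particles = 1000
betas = np.linspace(0.0, 1.0, 50)
x = rng.standard_normal(n_particles)         # exact draws from the prior
log_w = np.zeros(n_particles)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Importance-weight update: ratio of successive tempered densities.
    log_w += log_tempered(x, b) - log_tempered(x, b_prev)
    # One random-walk Metropolis step leaving the new tempered density invariant.
    prop = x + 0.5 * rng.standard_normal(n_particles)
    accept = np.log(rng.random(n_particles)) < log_tempered(prop, b) - log_tempered(x, b)
    x = np.where(accept, prop, x)

# Self-normalized importance estimate of E[x] under the target.
w = np.exp(log_w - log_w.max())
print((w * x).sum() / w.sum())               # should land near 3.0
```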
arXiv Detail & Related papers (2024-08-13T08:09:05Z)
- Scalable Multi-Output Gaussian Processes with Stochastic Variational Inference [2.1249213103048414]
The Latent Variable MOGP (LV-MOGP) allows efficient generalization to new outputs with few data points.
However, the computational complexity of LV-MOGP grows linearly with the number of outputs.
We propose a variational inference approach for the LV-MOGP that allows mini-batches for both inputs and outputs.
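The double mini-batching idea can be sketched generically (this is a hypothetical pattern, not the paper's ELBO): subsample both input rows and output columns, and rescale so the data term remains an unbiased estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: Y holds observations for N inputs x P outputs, and the
# per-entry ELBO contributions are subsampled over BOTH axes at once.
N, P, batch_n, batch_p = 5000, 300, 64, 16
Y = rng.standard_normal((N, P))

def stochastic_elbo(params, elbo_term, kl_term):
    rows = rng.choice(N, size=batch_n, replace=False)
    cols = rng.choice(P, size=batch_p, replace=False)
    scale = (N / batch_n) * (P / batch_p)   # rescaling keeps the estimate unbiased
    data = sum(elbo_term(i, p, params) for i in rows for p in cols)
    return scale * data - kl_term(params)

# Toy stand-ins: a Gaussian log-likelihood per entry and a zero KL penalty.
toy_ll = lambda i, p, mu: -0.5 * (Y[i, p] - mu) ** 2
print(stochastic_elbo(0.0, toy_ll, lambda mu: 0.0))
```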
arXiv Detail & Related papers (2024-07-02T17:53:56Z)
- Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel convex model for semisupervised (library-based) unmixing.
We demonstrate the efficacy of alternating optimization methods for sparse semisupervised unmixing.
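As background only (the paper develops nonconvex solvers, which are not reproduced here), a standard convex baseline for sparse library-based unmixing is ISTA on a nonnegative lasso; the library A and regularization weight below are hypothetical:

```python
import numpy as np

def sparse_unmix(A, y, lam=0.1, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 subject to x >= 0,
    where A is a spectral library (bands x endmembers), x the abundances."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = gradient Lipschitz const
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - step * grad - step * lam, 0.0)  # prox step
    return x

rng = np.random.default_rng(7)
A = np.abs(rng.standard_normal((50, 200)))       # toy library, 200 endmembers
x_true = np.zeros(200); x_true[[3, 40]] = [0.7, 0.3]
print(np.argsort(sparse_unmix(A, A @ x_true))[-2:])  # the two largest abundances
```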
arXiv Detail & Related papers (2024-01-23T10:07:41Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
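VSMC layers parameter learning and proposal adaptation on top of a standard particle filter; the sketch below shows only that underlying bootstrap filter on a toy linear-Gaussian model (model and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_pf(ys, n_particles=500, phi=0.9, q=0.5, r=1.0):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2); processes one observation at a time (online)."""
    x = rng.standard_normal(n_particles)
    means = []
    for y in ys:
        x = phi * x + q * rng.standard_normal(n_particles)   # propagate
        log_w = -0.5 * ((y - x) / r) ** 2                     # likelihood weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(w @ x)                                   # filtering mean
        x = x[rng.choice(n_particles, n_particles, p=w)]      # multinomial resample
    return np.array(means)

ys = np.cumsum(rng.standard_normal(100))     # any observation stream works here
print(bootstrap_pf(ys)[:5])
```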
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- B-LSTM-MIONet: Bayesian LSTM-based Neural Operators for Learning the Response of Complex Dynamical Systems to Length-Variant Multiple Input Functions [6.75867828529733]
Multiple-input deep neural operators (MIONet) extended DeepONet to allow multiple input functions in different Banach spaces.
MIONet offers flexibility in training dataset grid spacing, without constraints on output location.
This work redesigns MIONet, integrating Long Short Term Memory (LSTM) to learn neural operators from time-dependent data.
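To make the LSTM-as-operator idea concrete, here is a tiny, hypothetical PyTorch sketch; it omits the Bayesian treatment and the MIONet branch/trunk structure entirely, and only shows how a recurrent core consumes a discretized input function of any length, which is what makes length-variant inputs unproblematic:

```python
import torch
import torch.nn as nn

class LSTMOperator(nn.Module):
    """Hypothetical sketch: map a discretized input function (values in the
    last dimension) to the system response at every time step."""
    def __init__(self, n_inputs=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_inputs, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, u):                  # u: (batch, time, n_inputs)
        h, _ = self.lstm(u)                # h: (batch, time, hidden)
        return self.head(h)                # response: (batch, time, 1)

model = LSTMOperator()
u = torch.randn(8, 120, 2)    # 8 trajectories, 120 steps, 2 input functions
y = model(u)                  # the same model accepts any sequence length
```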
arXiv Detail & Related papers (2023-11-28T04:58:17Z)
- Approximate Gibbs Sampler for Efficient Inference of Hierarchical Bayesian Models for Grouped Count Data [0.0]
This research develops an approximate Gibbs sampler (AGS) to efficiently learn hierarchical Bayesian Poisson regression models (HBPRMs) while maintaining inference accuracy.
Numerical experiments using real and synthetic datasets with small and large counts demonstrate the superior performance of AGS.
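The flavor of Gibbs sampling for grouped counts can be shown on a fully conjugate toy model, where every conditional is exact; the paper's AGS instead approximates the non-conjugate conditionals of the regression setting, so this simplified model and its hyperparameters are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)

def gibbs_poisson_gamma(y_by_group, a=1.0, c=1.0, d=1.0, n_samples=2000):
    """Exact Gibbs sampler for a conjugate toy model of grouped counts:
    y_ij ~ Poisson(lam_j), lam_j ~ Gamma(a, rate=b), b ~ Gamma(c, rate=d)."""
    J = len(y_by_group)
    sums = np.array([g.sum() for g in y_by_group])
    ns = np.array([g.size for g in y_by_group])
    b, draws = 1.0, []
    for _ in range(n_samples):
        lam = rng.gamma(a + sums, 1.0 / (b + ns))         # lam_j | y, b
        b = rng.gamma(c + J * a, 1.0 / (d + lam.sum()))   # b | lam
        draws.append(lam)
    return np.array(draws)

groups = [rng.poisson(3.0, size=50), rng.poisson(8.0, size=40)]
print(gibbs_poisson_gamma(groups).mean(axis=0))           # roughly [3, 8]
```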
arXiv Detail & Related papers (2022-11-28T21:00:55Z)
- Collaborative Nonstationary Multivariate Gaussian Process Model [2.362467745272567]
We propose a novel model called the collaborative nonstationary multivariate Gaussian process model (CNMGP).
CNMGP allows us to model data in which outputs do not share a common input set, with a computational complexity independent of the size of the inputs and outputs.
We show that our model generally provides better predictive performance than the state-of-the-art, and also provides estimates of time-varying correlations that differ across outputs.
arXiv Detail & Related papers (2021-06-01T18:25:22Z)
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose a finite mixture regression (FMR) model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
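A bare-bones EM for a finite mixture of linear regressions conveys the core mechanism (responsibilities, then weighted least squares per component); it is a much-simplified stand-in for the paper's robust, mixed-type model, with the noise level and component count as toy assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def fmr_em(X, y, K=2, n_iter=100, sigma=1.0):
    """Toy EM for a mixture of K linear regressions with known noise sigma."""
    n, d = X.shape
    beta = rng.standard_normal((K, d))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] prop. to pi_k * N(y_i | x_i @ beta_k, sigma^2).
        log_r = np.log(pi) + norm.logpdf(y[:, None], X @ beta.T, sigma)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, then mixing weights.
        for k in range(K):
            sw = np.sqrt(r[:, k])
            beta[k] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        pi = r.mean(axis=0)
    return beta, pi

X = np.c_[np.ones(400), rng.standard_normal(400)]
z = rng.random(400) < 0.5
y = np.where(z, X @ [1.0, 2.0], X @ [-1.0, -2.0]) + rng.standard_normal(400)
print(fmr_em(X, y)[0])    # roughly recovers the two coefficient vectors
```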
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
- Generalized Matrix Factorization: efficient algorithms for fitting generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable models (GLLVMs) generalize such factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
arXiv Detail & Related papers (2020-10-06T04:28:19Z)
- Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning (LMvSL) has shown great potential in cross-view classification in recent years.
However, existing LMvSL-based methods cannot handle view discrepancy and discriminancy simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
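For context, the workhorse behind many low-rank recovery formulations is singular value thresholding, the proximal operator of the nuclear norm; this generic sketch is not the paper's SLMR algorithm:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(8)
M = np.outer(np.arange(5.0), np.ones(4)) + 0.01 * rng.standard_normal((5, 4))
print(np.linalg.matrix_rank(svt(M, tau=0.5)))   # small noise directions are stripped
```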
arXiv Detail & Related papers (2020-03-22T03:57:38Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.