On Sparse High-Dimensional Graphical Model Learning For Dependent Time Series
- URL: http://arxiv.org/abs/2111.07897v3
- Date: Tue, 4 Jun 2024 18:14:13 GMT
- Title: On Sparse High-Dimensional Graphical Model Learning For Dependent Time Series
- Authors: Jitendra K. Tugnait
- Abstract summary: We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional stationary time series.
A sparse-group lasso-based frequency-domain formulation of the problem is presented.
We also empirically investigate selection of the tuning parameters based on the Bayesian information criterion.
- Score: 12.94486861344922
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional stationary multivariate Gaussian time series. A sparse-group lasso-based frequency-domain formulation of the problem, based on a frequency-domain sufficient statistic for the observed time series, is presented. We investigate an alternating direction method of multipliers (ADMM) approach for optimization of the sparse-group lasso penalized log-likelihood. We provide sufficient conditions for convergence in the Frobenius norm of the inverse PSD estimators to the true value, jointly across all frequencies, where the number of frequencies is allowed to increase with sample size. This result also yields a rate of convergence. We also empirically investigate selection of the tuning parameters based on the Bayesian information criterion, and illustrate our approach using numerical examples utilizing both synthetic and real data.
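In the sparse-group lasso penalty, each group collects one off-diagonal entry of the inverse PSD across all frequencies, so zeroing an entire group removes an edge from the estimated CIG; the ADMM penalty step then reduces to a per-group proximal operator. A minimal real-valued NumPy sketch of that operator follows (the thresholds `lam1`, `lam2` and the toy group vectors are illustrative assumptions; the paper's estimators are complex-valued and embedded in a full ADMM loop):

```python
import numpy as np

def prox_sparse_group_lasso(x, lam1, lam2):
    """Proximal operator of lam1*||x||_1 + lam2*||x||_2 for one group.

    Here x stands in for the inverse-PSD entries of a single edge
    collected across frequencies; shrinking the whole group to zero
    deletes that edge from the estimated graph.
    """
    # Elementwise soft-thresholding (lasso part of the penalty).
    s = np.sign(x) * np.maximum(np.abs(x) - lam1, 0.0)
    # Group-level shrinkage (group-lasso part of the penalty).
    norm = np.linalg.norm(s)
    if norm <= lam2:
        return np.zeros_like(x)
    return (1.0 - lam2 / norm) * s

# A weak edge is shrunk entirely to zero; a strong edge only partially.
weak = prox_sparse_group_lasso(np.array([0.1, -0.2, 0.15]), 0.05, 0.5)
strong = prox_sparse_group_lasso(np.array([2.0, -3.0, 1.5]), 0.05, 0.5)
```

The two-stage structure (soft-threshold, then group shrink) is the standard closed form for the sparse-group lasso prox, which is what makes the ADMM subproblem cheap per iteration.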
Related papers
- Learning Sparse High-Dimensional Matrix-Valued Graphical Models From Dependent Data [12.94486861344922]
We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional, stationary matrix-variate Gaussian time series.
We consider a sparse formulation of the problem with a Kronecker-decomposable power spectral density (PSD).
We illustrate our approach using numerical examples utilizing both synthetic and real data.
arXiv Detail & Related papers (2024-04-29T19:32:50Z) - On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates [5.13323375365494]
We provide theoretical guarantees for the convergence behaviour of diffusion-based generative models under the assumption of strongly log-concave data distributions.
Via a motivating example, sampling from a Gaussian distribution with unknown mean, we demonstrate the power of our approach.
This approach yields the best known convergence rate for our sampling algorithm.
arXiv Detail & Related papers (2023-11-22T18:40:45Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
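For context, the baseline that the Constant Rate variant builds on can be sketched in a few lines: anneal along the geometric path between a tractable base density and the target, accumulating log importance weights. This is generic AIS with a random-walk Metropolis kernel, not the paper's constant-rate schedule; the step size, schedule, and Gaussian example are illustrative assumptions:

```python
import numpy as np

def annealed_importance_sampling(log_p0, log_p1, sample_p0, betas,
                                 n_samples, step=0.5, rng=None):
    """Minimal AIS along the geometric path
    log p_beta(x) = (1-beta)*log_p0(x) + beta*log_p1(x).

    Returns final samples and their log importance weights.
    """
    rng = np.random.default_rng(rng)
    x = sample_p0(n_samples, rng)
    log_w = np.zeros(n_samples)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Weight update: ratio of consecutive annealed densities.
        log_w += (b - b_prev) * (log_p1(x) - log_p0(x))
        # One random-walk Metropolis step targeting p_b.
        def log_pb(z):
            return (1.0 - b) * log_p0(z) + b * log_p1(z)
        prop = x + step * rng.standard_normal(n_samples)
        accept = np.log(rng.random(n_samples)) < log_pb(prop) - log_pb(x)
        x = np.where(accept, prop, x)
    return x, log_w

# Anneal from N(0,1) to N(3,1); both unnormalized densities share the
# same normalizing constant, so the weights should average near 1.
log_p0 = lambda x: -0.5 * x**2
log_p1 = lambda x: -0.5 * (x - 3.0)**2
samples, log_w = annealed_importance_sampling(
    log_p0, log_p1, lambda n, rng: rng.standard_normal(n),
    betas=np.linspace(0.0, 1.0, 101), n_samples=2000, rng=0)
```

A finer temperature schedule reduces weight variance, which is exactly the cost/accuracy trade-off that adaptive schedules like the constant-rate one aim to manage.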
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - Conformal Frequency Estimation using Discrete Sketched Data with Coverage for Distinct Queries [35.67445122503686]
This paper develops conformal inference methods to construct a confidence interval for the frequency of a queried object in a very large discrete data set.
We show our methods have improved empirical performance compared to existing frequentist and Bayesian alternatives in simulations.
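The conformal idea here can be illustrated with a toy split-conformal bound: calibrate the sketch's overestimation error on queries with known true frequencies, then subtract a conservative quantile of that error from new estimates. This sketch assumes an overestimating sketch (count-min style) and invented synthetic data; the paper's actual scores and guarantees differ:

```python
import numpy as np

def conformal_lower_bound(cal_estimates, cal_truths, query_estimate, alpha=0.1):
    """One-sided split-conformal interval for a sketched frequency query.

    Assumes the sketch only overestimates, so the nonconformity score
    (estimate - truth) is nonnegative on the calibration queries.
    """
    scores = np.asarray(cal_estimates) - np.asarray(cal_truths)
    n = len(scores)
    # Conservative finite-sample quantile rank for (1 - alpha) coverage.
    k = int(np.ceil((n + 1) * (1.0 - alpha)))
    q = np.sort(scores)[min(k, n) - 1]
    lo = max(query_estimate - q, 0.0)
    # The raw sketch estimate itself is a valid upper bound.
    return lo, float(query_estimate)

# Synthetic calibration data: sketch overestimates by a random amount.
rng = np.random.default_rng(1)
cal_truths = rng.integers(0, 100, 500).astype(float)
cal_estimates = cal_truths + rng.integers(0, 20, 500)
lo, hi = conformal_lower_bound(cal_estimates, cal_truths, 42.0)
```

The resulting interval `[lo, hi]` covers the true frequency with probability at least `1 - alpha` over exchangeable queries, which is the distribution-free flavor of guarantee the paper develops.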
arXiv Detail & Related papers (2022-11-09T00:05:29Z) - Sparse-Group Log-Sum Penalized Graphical Model Learning For Time Series [12.843340232167266]
We consider the problem of inferring the conditional independence graph (CIG) of a stationary multivariate Gaussian time series.
A sparse-group lasso based frequency-domain formulation of the problem has been considered in the literature.
We illustrate our approach utilizing both synthetic and real data.
arXiv Detail & Related papers (2022-04-29T00:06:41Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds for RF regression under both constant and adaptive step-size SGD settings.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - On Learning Continuous Pairwise Markov Random Fields [33.38669988203501]
We consider learning a sparse pairwise Markov Random Field (MRF) with continuous variables from i.i.d. samples.
Our approach is applicable to a large class of pairwise MRFs with continuous variables and also has desirable properties, including consistency and normality under mild conditions.
arXiv Detail & Related papers (2020-10-28T15:09:43Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Generative Modeling with Denoising Auto-Encoders and Langevin Sampling [88.83704353627554]
We show that both DAE and DSM provide estimates of the score of the smoothed population density.
We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
arXiv Detail & Related papers (2020-01-31T23:50:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.