Sparse-Group Log-Sum Penalized Graphical Model Learning For Time Series
- URL: http://arxiv.org/abs/2204.13824v1
- Date: Fri, 29 Apr 2022 00:06:41 GMT
- Title: Sparse-Group Log-Sum Penalized Graphical Model Learning For Time Series
- Authors: Jitendra K Tugnait
- Abstract summary: We consider the problem of inferring the conditional independence graph (CIG) of a stationary multivariate Gaussian time series.
A sparse-group lasso based frequency-domain formulation of the problem has been considered in the literature.
We illustrate our approach utilizing both synthetic and real data.
- Score: 12.843340232167266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of inferring the conditional independence graph (CIG)
of a high-dimensional stationary multivariate Gaussian time series. A
sparse-group lasso based frequency-domain formulation of the problem has been
considered in the literature where the objective is to estimate the sparse
inverse power spectral density (PSD) of the data. The CIG is then inferred from
the estimated inverse PSD. In this paper we investigate the use of a
sparse-group log-sum penalty (LSP) in place of the sparse-group lasso
penalty. An alternating
direction method of multipliers (ADMM) approach for iterative optimization of
the non-convex problem is presented. We provide sufficient conditions for local
convergence in the Frobenius norm of the inverse PSD estimators to the true
value. This result also yields a rate of convergence. We illustrate our
approach with numerical examples on both synthetic and real data.
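To make the penalty concrete, the following is a minimal NumPy sketch of a sparse-group log-sum penalty evaluated on a stack of inverse-PSD estimates across frequencies, plus a simple thresholding rule for reading a CIG off the estimate. The function names and the parameters lam, alpha, eps, and tau are illustrative assumptions; this is not the paper's ADMM solver or notation.

```python
import numpy as np

def sparse_group_log_sum_penalty(Phi, lam, alpha, eps=1e-3):
    """Sparse-group log-sum penalty on a stack of inverse-PSD matrices.

    Phi   : (F, p, p) complex array, inverse PSD estimates at F frequencies.
    lam   : overall penalty weight.
    alpha : in [0, 1], trades element-wise sparsity against group sparsity.
    eps   : log-sum smoothing parameter (hypothetical default).
    """
    p = Phi.shape[1]
    off = ~np.eye(p, dtype=bool)                 # penalize off-diagonal entries only
    entry = np.log1p(np.abs(Phi[:, off]) / eps).sum()
    # one group per (i, j) pair, collecting that entry across all frequencies
    group = np.log1p(np.linalg.norm(Phi[:, off], axis=0) / eps).sum()
    return lam * (alpha * entry + (1.0 - alpha) * group)

def infer_cig(Phi, tau=1e-2):
    """Declare edge (i, j) when the inverse PSD is non-negligible at any frequency."""
    strength = np.abs(Phi).max(axis=0)
    np.fill_diagonal(strength, 0.0)
    return strength > tau
```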
Related papers
- Constrained Sampling with Primal-Dual Langevin Monte Carlo [15.634831573546041]
This work considers the problem of sampling from a probability distribution known up to a normalization constant, while satisfying a set of statistical constraints specified by the expected values of general nonlinear functions.
We put forward a discrete-time primal-dual Langevin Monte Carlo algorithm (PD-LMC) that simultaneously constrains the target distribution and samples from it.
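As a rough illustration of the primal-dual idea, the sketch below alternates an unadjusted Langevin step on a Lagrangian with a projected dual-ascent step on the constraint violation. It is a toy sketch under assumed interfaces (grad_log_p, g, grad_g and the step sizes are all hypothetical), not the paper's PD-LMC pseudocode.

```python
import numpy as np

def pd_lmc(grad_log_p, g, grad_g, x0, steps=5000, eta=1e-2, eta_dual=1e-2, seed=0):
    """Toy primal-dual Langevin sketch enforcing E[g(x)] <= 0 while sampling."""
    rng = np.random.default_rng(seed)
    x, lam, samples = np.asarray(x0, dtype=float), 0.0, []
    for _ in range(steps):
        noise = rng.standard_normal(x.shape)
        # Langevin step on the Lagrangian log p(x) - lam * g(x)
        x = x + eta * (grad_log_p(x) - lam * grad_g(x)) + np.sqrt(2 * eta) * noise
        # dual ascent on the constraint violation, projected onto lam >= 0
        lam = max(0.0, lam + eta_dual * g(x))
        samples.append(x.copy())
    return np.array(samples), lam
```

For example, with grad_log_p = lambda x: -x (a standard Gaussian target) and g(x) = 1 - x.sum(), the dual variable gradually tilts the samples toward satisfying the constraint in expectation.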
arXiv Detail & Related papers (2024-11-01T13:26:13Z)
- Asymptotics of Stochastic Gradient Descent with Dropout Regularization in Linear Models [8.555650549124818]
This paper proposes a theory for online inference of the stochastic gradient descent (SGD) iterates with dropout regularization in linear regression.
For sufficiently large samples, the proposed confidence intervals for ASGD with dropout nearly achieve the nominal coverage probability.
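A minimal sketch of the setting (ASGD here meaning the Polyak-Ruppert averaged SGD iterate): linear regression where each step applies an inverted-dropout mask to the features. The inference procedure (the confidence intervals themselves) is not reproduced; names and the learning rate are assumptions.

```python
import numpy as np

def sgd_dropout_linreg(X, y, p_keep=0.8, lr=1e-2, seed=0):
    """One pass of SGD on linear regression with dropout on the features."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)          # last iterate
    w_bar = np.zeros(d)      # Polyak-Ruppert average (the "ASGD" iterate)
    for t in range(n):
        mask = rng.binomial(1, p_keep, size=d) / p_keep   # inverted dropout
        xt = X[t] * mask
        w -= lr * (xt @ w - y[t]) * xt                    # grad of 0.5*(x^T w - y)^2
        w_bar += (w - w_bar) / (t + 1)                    # running average
    return w, w_bar
```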
arXiv Detail & Related papers (2024-09-11T17:28:38Z)
- Learning Sparse High-Dimensional Matrix-Valued Graphical Models From Dependent Data [12.94486861344922]
We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional, stationary matrix-variate Gaussian time series.
We consider a sparsity-based formulation of the problem with a Kronecker-decomposable power spectral density (PSD).
We illustrate our approach using numerical examples utilizing both synthetic and real data.
arXiv Detail & Related papers (2024-04-29T19:32:50Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
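The sketch below conveys the general flavor of combining ADMM-style consensus with Langevin dynamics: each worker samples against its local potential plus a quadratic coupling to a shared variable, followed by consensus and dual updates. This is a generic toy under assumed interfaces (local_grads, rho, the step sizes), not the paper's algorithm.

```python
import numpy as np

def consensus_langevin(local_grads, x0, rounds=200, inner=20, eta=1e-3, rho=1.0, seed=0):
    """Toy consensus sampler: local_grads[k] is worker k's grad-log-density."""
    rng = np.random.default_rng(seed)
    K = len(local_grads)
    xs = [np.array(x0, dtype=float) for _ in range(K)]   # local samples
    us = [np.zeros_like(xs[0]) for _ in range(K)]        # scaled dual variables
    z = np.array(x0, dtype=float)                        # consensus variable
    for _ in range(rounds):
        for k in range(K):
            for _ in range(inner):
                noise = rng.standard_normal(z.shape)
                # Langevin on the local potential plus the ADMM quadratic coupling
                drift = local_grads[k](xs[k]) - rho * (xs[k] - z + us[k])
                xs[k] = xs[k] + eta * drift + np.sqrt(2 * eta) * noise
        z = np.mean([xs[k] + us[k] for k in range(K)], axis=0)   # consensus step
        for k in range(K):
            us[k] += xs[k] - z                                   # dual update
    return z, xs
```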
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates [5.13323375365494]
We provide theoretical guarantees for the convergence behaviour of diffusion-based generative models under strongly log-concave data.
The class of functions used for score estimation consists of Lipschitz continuous functions, avoiding any Lipschitzness assumption on the score function itself.
This approach yields the best known convergence rate for our sampling algorithm.
arXiv Detail & Related papers (2023-11-22T18:40:45Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
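For reference, here is a generic AIS skeleton with a user-chosen annealing schedule betas; the paper's contribution (adapting the schedule so annealing progresses at a constant rate) is not implemented here. The interfaces log_p0, log_p1, sample_p0, and mcmc_step are assumptions.

```python
import numpy as np

def ais(log_p0, log_p1, sample_p0, mcmc_step, betas, n=1000, seed=0):
    """Annealed importance sampling along log p_b = (1 - b) log_p0 + b log_p1."""
    rng = np.random.default_rng(seed)
    x = sample_p0(n, rng)                     # draws from the tractable base p0
    logw = np.zeros(n)                        # log importance weights
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # weight update: ratio of consecutive annealed targets at the current x
        logw += (b - b_prev) * (log_p1(x) - log_p0(x))
        # one transition leaving the current annealed target invariant
        target = lambda y, b=b: (1 - b) * log_p0(y) + b * log_p1(y)
        x = mcmc_step(x, target, rng)
    return x, logw
```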
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Approximating a RUM from Distributions on k-Slates [88.32814292632675]
We give a polynomial-time algorithm that finds the RUM that best approximates the given distribution on average.
Our theoretical result can also be made practical: we obtain an algorithm that is effective and scales to real-world datasets.
arXiv Detail & Related papers (2023-05-22T17:43:34Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
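A stream-friendly sketch of the generalized-eigenvector setting (maximizing the generalized Rayleigh quotient x^T A x / x^T B x from sample estimates of A x and B x). This is a generic scaled-gradient-flavored update under assumed interfaces, not the paper's exact SSD recursion.

```python
import numpy as np

def streaming_gev(stream, d, lr=1e-2, seed=0):
    """Track the top generalized eigenvector of (A, B) from a sample stream.

    stream yields pairs (a, b) with E[a a^T] = A and E[b b^T] = B, as in
    online correlation analysis of two data views.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for a, b in stream:
        Ax = a * (a @ x)                       # unbiased estimate of A x
        Bx = b * (b @ x)                       # unbiased estimate of B x
        rho = (x @ Ax) / max(x @ Bx, 1e-8)     # Rayleigh-quotient estimate
        x = x + lr * (Ax - rho * Bx)           # ascent on the Rayleigh quotient
        x /= np.linalg.norm(x)                 # keep the iterate on the unit sphere
    return x
```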
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- On Sparse High-Dimensional Graphical Model Learning For Dependent Time Series [12.94486861344922]
We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional stationary time series.
A sparse-group lasso-based frequency-domain formulation of the problem is presented.
We also empirically investigate selection of the tuning parameters based on the Bayesian information criterion (BIC).
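BIC-based selection amounts to a one-dimensional search over the penalty weight. A minimal sketch, assuming a caller-supplied fit(lam) that returns the negative log-likelihood and the number of retained (nonzero) parameters for penalty weight lam:

```python
import numpy as np

def select_by_bic(fit, lambdas, n):
    """Pick the tuning parameter minimizing a BIC-style score.

    fit(lam) -> (negative_log_likelihood, num_nonzero_parameters);
    n is the sample size.
    """
    best_lam, best_bic = None, np.inf
    for lam in lambdas:
        nll, k = fit(lam)
        bic = 2.0 * nll + k * np.log(n)    # BIC = 2*NLL + (model size) * log(n)
        if bic < best_bic:
            best_lam, best_bic = lam, bic
    return best_lam, best_bic
```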
arXiv Detail & Related papers (2021-11-15T16:52:02Z)
- Preventing Posterior Collapse with Levenshtein Variational Autoencoder [61.30283661804425]
We propose to replace the evidence lower bound (ELBO) with a new objective which is simple to optimize and prevents posterior collapse.
We show that Levenshtein VAE produces more informative latent representations than alternative approaches to preventing posterior collapse.
arXiv Detail & Related papers (2020-04-30T13:27:26Z)
- Generative Modeling with Denoising Auto-Encoders and Langevin Sampling [88.83704353627554]
We show that both denoising auto-encoders (DAE) and denoising score matching (DSM) provide estimates of the score of the smoothed population density.
We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
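To make the DSM side concrete, here is a generic denoising score matching loss: perturb the data with Gaussian noise and regress onto the score of the Gaussian transition, whose minimizer estimates the score of the sigma-smoothed population density. A sketch, not the paper's code; score_fn and sigma are assumptions.

```python
import numpy as np

def dsm_loss(score_fn, x, sigma, seed=0):
    """Denoising score matching objective on a batch x of shape (n, d)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(x.shape)
    x_noisy = x + sigma * eps
    # score of the Gaussian transition q(x_noisy | x): (x - x_noisy) / sigma^2
    target = (x - x_noisy) / sigma**2
    diff = score_fn(x_noisy) - target
    return 0.5 * np.mean(np.sum(diff**2, axis=-1))
```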
arXiv Detail & Related papers (2020-01-31T23:50:03Z)