Concentration inequalities for high-dimensional linear processes with dependent innovations
- URL: http://arxiv.org/abs/2307.12395v2
- Date: Thu, 17 Oct 2024 15:31:22 GMT
- Title: Concentration inequalities for high-dimensional linear processes with dependent innovations
- Authors: Eduardo Fonseca Mendes, Fellipe Lopes
- Abstract summary: We develop concentration inequalities for the $l_\infty$ norm of vector linear processes with sub-Weibull, mixingale innovations.
We apply these inequalities to sparse estimation of large-dimensional VAR(p) systems and heteroskedasticity and autocorrelation consistent (HAC) high-dimensional covariance estimation.
- Abstract: We develop concentration inequalities for the $l_\infty$ norm of vector linear processes with sub-Weibull, mixingale innovations. This inequality is used to obtain a concentration bound for the maximum entrywise norm of the lag-$h$ autocovariance matrix of linear processes. We apply these inequalities to sparse estimation of large-dimensional VAR(p) systems and heteroskedasticity and autocorrelation consistent (HAC) high-dimensional covariance estimation.
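The two objects in the abstract, the maximum entrywise ($l_\infty$) deviation of a lag-$h$ sample autocovariance and lasso-type sparse VAR estimation, can be illustrated numerically. The sketch below is only an illustration under assumed settings (a banded VAR(1), Gaussian innovations, scikit-learn's Lasso with an arbitrary penalty); it is not the paper's estimator or its bounds.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, T, h = 20, 2000, 1                         # dimension, sample size, lag (illustrative)

# Sparse (banded) VAR(1) transition matrix
A = np.zeros((d, d))
A[np.arange(d), np.arange(d)] = 0.5           # diagonal
A[np.arange(d - 1), np.arange(1, d)] = 0.2    # one off-diagonal band

# Population autocovariances: Gamma(0) solves A G A' - G + I = 0, and Gamma(h) = A^h Gamma(0)
Gamma0 = solve_discrete_lyapunov(A, np.eye(d))
Gamma_h = np.linalg.matrix_power(A, h) @ Gamma0

# Simulate the process with Gaussian innovations
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.standard_normal(d)

# Lag-h sample autocovariance and its maximum entrywise deviation from the population value
Xc = X - X.mean(axis=0)
S_h = Xc[h:].T @ Xc[:-h] / (T - h)
print("max entrywise autocovariance deviation:", np.abs(S_h - Gamma_h).max())

# Row-wise lasso estimation of the VAR(1) transition matrix
A_hat = np.zeros_like(A)
for j in range(d):
    A_hat[j] = Lasso(alpha=0.05, fit_intercept=False).fit(X[:-1], X[1:, j]).coef_
print("max entrywise estimation error:", np.abs(A_hat - A).max())
```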
Related papers
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with that of a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
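For context on the ADMM machinery this summary refers to, here is a minimal consensus-ADMM sketch for a distributed least-squares problem. It is not the paper's distributed MCMC sampler; the number of agents, the penalty parameter rho, and the data sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_local, d, rho = 5, 40, 10, 1.0

x_true = rng.standard_normal(d)
A = [rng.standard_normal((n_local, d)) for _ in range(n_agents)]
b = [Ai @ x_true + 0.1 * rng.standard_normal(n_local) for Ai in A]

# Consensus ADMM: each agent keeps a local estimate x_i and dual u_i; z is the shared variable.
x = [np.zeros(d) for _ in range(n_agents)]
u = [np.zeros(d) for _ in range(n_agents)]
z = np.zeros(d)

for _ in range(100):
    # Local x-updates (closed form for least squares)
    for i in range(n_agents):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # Consensus z-update (simple average) and dual updates
    z = np.mean([x[i] + u[i] for i in range(n_agents)], axis=0)
    for i in range(n_agents):
        u[i] += x[i] - z

print("consensus error vs. centralized least squares:",
      np.linalg.norm(z - np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]))
```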
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for identifying the drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea is to fit an RKHS-based approximation of the corresponding Fokker-Planck equation to the observed data.
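The RKHS/Fokker-Planck construction itself is involved; as a loose stand-in for illustration only, the sketch below estimates the drift of a simulated Ornstein-Uhlenbeck process by kernel ridge regression of Euler increments on the states. The process parameters, bandwidth, and ridge level are made up, and this is not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, dt, T = 1.0, 0.4, 0.05, 20000   # Ornstein-Uhlenbeck parameters (illustrative)

# Simulate dX = -theta * X dt + sigma dW with an Euler-Maruyama scheme
X = np.zeros(T)
for t in range(T - 1):
    X[t + 1] = X[t] - theta * X[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Regress the Euler increments on the states with kernel ridge regression to estimate the drift
idx = rng.choice(T - 1, size=800, replace=False)   # subsample to keep the kernel matrix small
xs = X[:-1][idx]
ys = (X[1:] - X[:-1])[idx] / dt

def gaussian_kernel(a, b, bw=0.25):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bw ** 2))

alpha = np.linalg.solve(gaussian_kernel(xs, xs) + 1.0 * np.eye(len(xs)), ys)

grid = np.linspace(-0.5, 0.5, 5)
print(np.c_[grid, gaussian_kernel(grid, xs) @ alpha, -theta * grid])  # rough estimate vs. true drift
```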
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
- D-SVM over Networked Systems with Non-Ideal Linking Conditions [5.962184741057505]
This paper considers distributed optimization algorithms, with application to binary classification via distributed support vector machines (D-SVM).
The agents cooperatively solve a consensus-constrained distributed optimization problem via continuous-time dynamics, while the links are subject to strongly sign-preserving odd nonlinearities.
Logarithmic quantization and clipping (saturation) are two examples of such nonlinearities.
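A minimal sketch of the two link nonlinearities named above; the quantization level and saturation bound are illustrative choices, not values from the paper.

```python
import numpy as np

def log_quantize(x, rho=0.5):
    """Logarithmic quantizer: maps |x| to the nearest power of rho (in log scale), keeping the sign.
    An odd, sign-preserving nonlinearity; rho is an illustrative quantization level."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0
    exponent = np.round(np.log(np.abs(x[nz])) / np.log(rho))
    out[nz] = np.sign(x[nz]) * rho ** exponent
    return out

def clip(x, c=1.0):
    """Saturation (clipping) to [-c, c]; also odd and sign-preserving."""
    return np.clip(x, -c, c)

signal = np.array([-3.2, -0.7, 0.0, 0.05, 1.9])
print(log_quantize(signal))
print(clip(signal))
```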
arXiv Detail & Related papers (2023-04-13T16:56:57Z)
- SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider finite-sum cocoercive variational inequality problems.
For strongly monotone problems, the proposed SARAH-based method achieves linear convergence to a solution.
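To show the flavor of a SARAH-style recursive estimator, the sketch below applies it to a strongly monotone affine finite-sum operator. The operator, step size, and epoch length are illustrative assumptions, and this is not the paper's algorithm or its analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, gamma = 50, 10, 0.05                # number of summands, dimension, step size (assumption)

# Strongly monotone affine operator F(x) = (1/n) * sum_i (M_i x - b_i)
M = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]
F_full = lambda x: sum(Mi @ x - bi for Mi, bi in zip(M, b)) / n

x = np.zeros(d)
for epoch in range(30):
    v = F_full(x)                         # full operator evaluation at the start of each epoch
    x_prev = x.copy()
    x = x - gamma * v
    for _ in range(n):                    # inner loop with SARAH's recursive estimator
        i = rng.integers(n)
        v = (M[i] @ x - b[i]) - (M[i] @ x_prev - b[i]) + v
        x_prev = x.copy()
        x = x - gamma * v

print("residual norm ||F(x)||:", np.linalg.norm(F_full(x)))
```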
arXiv Detail & Related papers (2022-10-12T08:04:48Z)
- On Linear Separability under Linear Compression with Applications to Hard Support Vector Machine [0.0]
We show that linear separability is maintained as long as the distortion of the inner products is smaller than the squared margin of the original data-generating distribution.
As applications, we derive bounds on the (i) compression length of random sub-Gaussian matrices; and (ii) generalization error for compressive learning with hard-SVM.
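A small numerical illustration of the idea: a Gaussian random compression approximately preserves inner products, so data with a large enough margin stay linearly separable. The dimensions, margin, and the particular distortion check below are illustrative; the paper's precise distortion measure and squared-margin condition are defined there.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k, gamma = 200, 100, 100, 0.5        # sample size, ambient dim, compressed dim, margin

# Unit-norm data with exact margin gamma with respect to a unit normal w
w = rng.standard_normal(d); w /= np.linalg.norm(w)
y = rng.choice([-1.0, 1.0], size=n)
G = rng.standard_normal((n, d))
G -= np.outer(G @ w, w)                    # components orthogonal to w
G /= np.linalg.norm(G, axis=1, keepdims=True)
X = gamma * y[:, None] * w + np.sqrt(1 - gamma ** 2) * G   # ||x_i|| = 1, y_i <x_i, w> = gamma

# Gaussian random compression R^d -> R^k, scaled so inner products are roughly preserved
S = rng.standard_normal((k, d)) / np.sqrt(k)
Xs, ws = X @ S.T, S @ w

# Distortion of the inner products with w, compared to the margin of the original data
distortion = np.max(np.abs(Xs @ ws - X @ w))
print("max inner-product distortion:", round(distortion, 3), " margin:", gamma)

# If the distortion is small enough, the compressed data are still separated by S w
print("compressed data separable by S w:", bool(np.all(y * (Xs @ ws) > 0)))
```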
arXiv Detail & Related papers (2022-02-02T16:23:01Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are empirically central to preventing overfitting.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that of ordinary least squares.
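A minimal sketch of the comparison in an overparameterized problem: constant-stepsize SGD with iterate averaging versus the minimum-norm least-squares solution. The problem sizes, step size, and number of passes are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, step = 200, 500, 0.01               # overparameterized: d > n (sizes are illustrative)

X = rng.standard_normal((n, d)) / np.sqrt(d)
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Constant-stepsize SGD with iterate averaging, started from zero
w, w_avg, t = np.zeros(d), np.zeros(d), 0
for _ in range(20):                        # passes over the data
    for i in rng.permutation(n):
        w -= step * (X[i] @ w - y[i]) * X[i]
        t += 1
        w_avg += (w - w_avg) / t           # running average of the iterates

# Minimum-norm ordinary least squares solution for comparison
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

X_test = rng.standard_normal((1000, d)) / np.sqrt(d)
for name, w_hat in [("SGD (averaged)", w_avg), ("min-norm OLS", w_ols)]:
    err = np.mean((X_test @ w_hat - X_test @ w_true) ** 2)
    print(name, "excess test MSE:", round(err, 4))
```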
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Concentration Inequalities for Statistical Inference [3.236217153362305]
This paper gives a review of concentration inequalities which are widely employed in non-asymptotic analyses in mathematical statistics.
We aim to illustrate the concentration inequalities with known constants and to improve existing bounds with sharper constants.
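As an example of an inequality with a known explicit constant, the sketch below checks Hoeffding's bound by Monte Carlo for bounded i.i.d. variables; the sample size, threshold, and number of trials are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n, t, trials = 50, 0.1, 100000

# Hoeffding: for X_i i.i.d. in [0, 1], P(|mean - E[X]| >= t) <= 2 * exp(-2 * n * t**2)
samples = rng.uniform(0.0, 1.0, size=(trials, n))
deviations = np.abs(samples.mean(axis=1) - 0.5)

empirical = np.mean(deviations >= t)
bound = 2 * np.exp(-2 * n * t ** 2)
print("empirical tail probability:", empirical)
print("Hoeffding bound:           ", bound)
```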
arXiv Detail & Related papers (2020-11-04T12:54:06Z)
- Optimal Sample Complexity of Subgradient Descent for Amplitude Flow via Non-Lipschitz Matrix Concentration [12.989855325491163]
We consider the problem of recovering a real-valued $n$-dimensional signal from $m$ phaseless, linear measurements.
We establish local convergence of subgradient descent with optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator.
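A minimal sketch of the amplitude-flow setup: subgradient descent on the non-smooth amplitude loss for phaseless Gaussian measurements, initialized near the signal since the stated guarantee is local. The dimensions, step size, and initialization radius are illustrative assumptions, not the paper's constants.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, step = 50, 400, 0.5                  # signal dim, number of measurements, step size

x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(A @ x_true)                     # phaseless measurements b_i = |<a_i, x>|

# Amplitude-flow objective f(x) = (1/2m) * sum_i (|<a_i, x>| - b_i)^2 and one of its subgradients
def subgrad(x):
    r = A @ x
    return A.T @ ((np.abs(r) - b) * np.sign(r)) / m

# Start near the true signal to illustrate local convergence
x = x_true + 0.3 * rng.standard_normal(n)
for _ in range(200):
    x -= step * subgrad(x)

dist = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))  # recovery up to global sign
print("relative error:", dist / np.linalg.norm(x_true))
```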
arXiv Detail & Related papers (2020-10-31T15:03:30Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
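For orientation, here is a classical sketch-and-solve baseline for L2-regularized least squares with a Gaussian embedding; it is not the paper's effective-dimension-adaptive scheme or its SRHT variant, and the problem and sketch sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n, d, lam, m = 5000, 100, 1.0, 500        # tall problem; sketch size m is an assumption

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

# Exact L2-regularized least-squares solution
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# Gaussian sketch-and-solve: compress the rows with S in R^{m x n}, then solve the small problem
S = rng.standard_normal((m, n)) / np.sqrt(m)
SA, Sb = S @ A, S @ b
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

print("relative error of the sketched solution:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```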
arXiv Detail & Related papers (2020-06-10T15:00:09Z)