Quantized Low-Rank Multivariate Regression with Random Dithering
- URL: http://arxiv.org/abs/2302.11197v3
- Date: Sat, 7 Oct 2023 02:07:42 GMT
- Title: Quantized Low-Rank Multivariate Regression with Random Dithering
- Authors: Junren Chen, Yueqi Wang, Michael K. Ng
- Abstract summary: Low-rank multivariate regression (LRMR) is an important statistical learning model.
We focus on the estimation of the underlying coefficient matrix.
We employ uniform quantization with random dithering, i.e., we add appropriate random noise to the data before quantization.
We propose the constrained Lasso and regularized Lasso estimators, and derive the non-asymptotic error bounds.
- Score: 23.81618208119832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank multivariate regression (LRMR) is an important statistical learning
model that combines highly correlated tasks as a multiresponse regression
problem with a low-rank prior on the coefficient matrix. In this paper, we study
quantized LRMR, a practical setting where the responses and/or the covariates
are discretized to finite precision. We focus on the estimation of the
underlying coefficient matrix. To make a consistent estimator that can achieve
arbitrarily small error possible, we employ uniform quantization with random
dithering, i.e., we add appropriate random noise to the data before
quantization. Specifically, uniform dither and triangular dither are used for
responses and covariates, respectively. Based on the quantized data, we propose
the constrained Lasso and regularized Lasso estimators, and derive the
non-asymptotic error bounds. With the aid of dithering, the estimators achieve
the minimax optimal rate, while quantization only slightly worsens the
multiplicative factor in the error rate. Moreover, we extend our results to a
low-rank regression model with matrix responses. We corroborate and demonstrate
our theoretical results via simulations on synthetic data and image restoration.
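The abstract describes two ingredients: (i) quantizing the data to finite precision after adding a random dither (uniform dither for the responses, triangular dither for the covariates), and (ii) a nuclear-norm "regularized Lasso" estimator of the low-rank coefficient matrix. The numpy sketch below illustrates both under simplifying assumptions: it quantizes only the responses, uses the generic uniform quantizer Q_delta(x) = delta*(floor(x/delta) + 1/2), and picks an illustrative step size and regularization level. It is not the paper's exact estimator; in particular, the bias correction needed when covariates are also quantized is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_dither(shape, delta, rng):
    # Uniform dither on [-delta/2, delta/2] (the dither used for responses in the paper).
    return rng.uniform(-delta / 2, delta / 2, size=shape)

def triangular_dither(shape, delta, rng):
    # Triangular dither: sum of two independent Uniform[-delta/2, delta/2] variables
    # (the dither used for covariates in the paper; unused in this response-only toy).
    return (rng.uniform(-delta / 2, delta / 2, size=shape)
            + rng.uniform(-delta / 2, delta / 2, size=shape))

def dithered_quantize(a, delta, dither):
    # Uniform quantizer Q_delta(x) = delta * (floor(x / delta) + 1/2), applied after dithering.
    return delta * (np.floor((a + dither) / delta) + 0.5)

def svt(M, tau):
    # Singular value soft-thresholding: the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def regularized_lasso(X, Yq, lam, n_iter=500):
    # Nuclear-norm regularized least squares ("regularized Lasso" flavor):
    #   minimize (1/2n) ||X Theta - Yq||_F^2 + lam * ||Theta||_*
    # solved by proximal gradient descent with singular value thresholding.
    n, d = X.shape
    m = Yq.shape[1]
    step = n / (np.linalg.norm(X, 2) ** 2)          # 1/L with L = sigma_max(X)^2 / n
    Theta = np.zeros((d, m))
    for _ in range(n_iter):
        grad = X.T @ (X @ Theta - Yq) / n
        Theta = svt(Theta - step * grad, step * lam)
    return Theta

# --- Toy example: quantized responses, full-precision covariates ---
n, d, m, r = 500, 30, 10, 3                          # samples, covariates, responses, rank
X = rng.standard_normal((n, d))
Theta_star = rng.standard_normal((d, r)) @ rng.standard_normal((r, m))  # low-rank coefficients
Y = X @ Theta_star + 0.1 * rng.standard_normal((n, m))

delta = 1.0                                          # quantization resolution
Yq = dithered_quantize(Y, delta, uniform_dither(Y.shape, delta, rng))

lam = 0.5 * (np.sqrt(d) + np.sqrt(m)) / np.sqrt(n)   # heuristic level of order (sqrt(d)+sqrt(m))/sqrt(n)
Theta_hat = regularized_lasso(X, Yq, lam)
print("relative error:", np.linalg.norm(Theta_hat - Theta_star) / np.linalg.norm(Theta_star))
```

The dither makes the quantizer conditionally unbiased, which is what allows the estimation error to be driven arbitrarily small as the sample size grows, as the abstract claims.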
Related papers
- Semiparametric conformal prediction [79.6147286161434]
Risk-sensitive applications require well-calibrated prediction sets over multiple, potentially correlated target variables.
We treat the scores as random vectors and aim to construct the prediction set accounting for their joint correlation structure.
We report desired coverage and competitive efficiency on a range of real-world regression problems.
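For context, here is a minimal numpy sketch of standard split conformal prediction with scalar absolute-residual scores; the paper above generalizes this construction by treating the scores as random vectors and accounting for their joint correlation, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_conformal_interval(x_train, y_train, x_cal, y_cal, x_test, alpha=0.1):
    # 1) Fit any regressor on the training split (here: ordinary least squares).
    X = np.column_stack([np.ones(len(x_train)), x_train])
    beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    predict = lambda x: np.column_stack([np.ones(len(x)), x]) @ beta
    # 2) Conformity scores (absolute residuals) on the calibration split.
    scores = np.abs(y_cal - predict(x_cal))
    # 3) Finite-sample-corrected (1 - alpha) quantile of the scores.
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # 4) Intervals with marginal coverage at least 1 - alpha.
    mu = predict(x_test)
    return mu - q, mu + q

# Toy usage
x = rng.uniform(-2, 2, 600); y = np.sin(x) + 0.2 * rng.standard_normal(600)
lo, hi = split_conformal_interval(x[:300], y[:300], x[300:500], y[300:500], x[500:])
print("empirical coverage:", np.mean((y[500:] >= lo) & (y[500:] <= hi)))
```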
arXiv Detail & Related papers (2024-11-04T14:29:02Z)
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining prediction intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
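As background, quantile regression fits each interval endpoint by minimizing the pinball (quantile) loss; the sketch below checks with a constant predictor that the empirical quantile minimizes this loss. Only the standard pinball objective is shown, not the relaxation that RQR introduces.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    # Pinball (quantile) loss: the empirical-risk objective behind quantile regression.
    diff = y - pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Toy check: the empirical tau-quantile minimizes the pinball loss over constants.
rng = np.random.default_rng(2)
y = rng.exponential(size=10_000)                   # asymmetric noise
grid = np.linspace(0, 5, 2001)
best = grid[np.argmin([pinball_loss(y, c, 0.9) for c in grid])]
print(best, np.quantile(y, 0.9))                   # approximately equal
```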
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- Distributed High-Dimensional Quantile Regression: Estimation Efficiency and Support Recovery [0.0]
We focus on distributed estimation and support recovery for high-dimensional linear quantile regression.
We transform the original quantile regression problem into a least-squares optimization.
An efficient algorithm is developed, which enjoys high computational and communication efficiency.
arXiv Detail & Related papers (2024-05-13T08:32:22Z)
- On the design-dependent suboptimality of the Lasso [27.970033039287884]
We show that the Lasso estimator is provably minimax rate-suboptimal when the minimum singular value of the design matrix is small.
Our lower bound is strong enough to preclude the sparse statistical optimality of all forms of the Lasso.
arXiv Detail & Related papers (2024-02-01T07:01:54Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
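The key computational idea, avoiding explicit matrix inversion by running an iterative linear solver, can be sketched with a plain conjugate gradient routine as below; the unrolling and backpropagation through the solver iterations, and the Monte Carlo sampling described above, are not implemented here.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=200, tol=1e-10):
    # Solve A x = b for a symmetric positive-definite A without ever forming A^{-1}.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy usage: a covariance-like SPD system, as arises in latent Gaussian models.
rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b), atol=1e-6))
```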
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Quantizing Heavy-tailed Data in Statistical Estimation: (Near) Minimax Rates, Covariate Quantization, and Uniform Recovery [22.267482745621997]
This paper studies the quantization of heavy-tailed data in some fundamental statistical estimation problems.
We propose to truncate and properly dither the data prior to a uniform quantization.
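A minimal sketch of the "truncate, then dither and uniformly quantize" recipe follows; the truncation threshold and resolution are illustrative placeholders rather than the paper's theoretically calibrated choices.

```python
import numpy as np

def truncate_then_dither_quantize(x, threshold, delta, rng):
    # Clip heavy-tailed samples to [-threshold, threshold], then quantize with a uniform dither.
    x_trunc = np.clip(x, -threshold, threshold)
    dither = rng.uniform(-delta / 2, delta / 2, size=x_trunc.shape)
    return delta * (np.floor((x_trunc + dither) / delta) + 0.5)

rng = np.random.default_rng(4)
x = rng.standard_t(df=3, size=10_000)                 # heavy-tailed data
xq = truncate_then_dither_quantize(x, threshold=5.0, delta=0.5, rng=rng)
print(x.mean(), xq.mean())                            # dithering keeps the quantizer nearly unbiased
```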
arXiv Detail & Related papers (2022-12-30T06:28:30Z)
- Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
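For reference, here is a small PyTorch sketch of the baseline straight-through Gumbel-Softmax estimator that the paper improves; the paper's Rao-Blackwellization of this estimator is not implemented here.

```python
import torch
import torch.nn.functional as F

def straight_through_gumbel_softmax(logits, tau=1.0):
    # Straight-through Gumbel-Softmax: a hard one-hot sample in the forward pass,
    # with the gradient of the soft (tempered softmax) relaxation in the backward pass.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    return y_hard + y_soft - y_soft.detach()          # straight-through gradient trick

logits = torch.randn(4, 10, requires_grad=True)
sample = straight_through_gumbel_softmax(logits)
sample.sum().backward()                               # gradients flow through the soft relaxation
print(sample.argmax(dim=-1), logits.grad.shape)
```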
arXiv Detail & Related papers (2020-10-09T22:54:38Z)
- Robust regression with covariate filtering: Heavy tails and adversarial contamination [6.939768185086755]
We show how to modify the Huber regression, least trimmed squares, and least absolute deviation estimators to obtain estimators that are simultaneously computationally and statistically efficient in the stronger contamination model.
We show that the Huber regression estimator achieves near-optimal error rates in this setting, whereas the least trimmed squares and least absolute deviation estimators can be made to achieve near-optimal error after applying a postprocessing step.
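A bare-bones numpy sketch of Huber regression by gradient descent on the Huber objective is given below, only to illustrate how the clipped gradient blunts heavy-tailed noise; the paper's covariate filtering step, the trimmed and LAD variants, and their postprocessing are not shown, and the loss parameter and step size are illustrative.

```python
import numpy as np

def huber_grad(residual, delta):
    # Derivative of the Huber loss in the residual: quadratic core, linear (clipped) tails,
    # which bounds the influence of large residuals caused by heavy tails or outliers.
    return np.clip(residual, -delta, delta)

def huber_regression(X, y, delta=1.0, lr=0.5, n_iter=2000):
    # Plain gradient descent on (1/n) * sum of Huber losses (no covariate filtering step).
    n, d = X.shape
    beta = np.zeros(d)
    lr = lr * n / np.linalg.norm(X, 2) ** 2           # crude step size from the design's scale
    for _ in range(n_iter):
        beta += lr * X.T @ huber_grad(y - X @ beta, delta) / n
    return beta

rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 5))
beta_star = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_star + rng.standard_t(df=2, size=1000)   # heavy-tailed noise
y[:20] += 50                                          # a few large contaminated responses
print(huber_regression(X, y).round(2))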
arXiv Detail & Related papers (2020-09-27T22:48:48Z)
- Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)