Reduction of the Number of Variables in Parametric Constrained
Least-Squares Problems
- URL: http://arxiv.org/abs/2012.10423v1
- Date: Fri, 18 Dec 2020 18:26:40 GMT
- Title: Reduction of the Number of Variables in Parametric Constrained
Least-Squares Problems
- Authors: Alberto Bemporad, Gionata Cimini
- Abstract summary: This paper proposes techniques for reducing the number of optimization variables involved in parametric constrained least-squares problems.
We show the good performance of the proposed techniques in numerical tests and in a linearized MPC problem of a nonlinear benchmark process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For linearly constrained least-squares problems that depend on a vector of
parameters, this paper proposes techniques for reducing the number of
optimization variables involved. After first eliminating equality constraints in a
numerically robust way by QR factorization, we propose a technique based on
singular value decomposition (SVD) and unsupervised learning, which we call
$K$-SVD, together with neural classifiers, to automatically partition the set of
parameter vectors into $K$ nonlinear regions in which the original problem is
approximated using a smaller set of variables. For the special case of parametric
constrained least-squares problems that arise from model predictive control
(MPC) formulations, we propose a novel and very efficient QR factorization
method for equality constraint elimination. Together with SVD or $K$-SVD, the
method provides a numerically robust alternative to standard condensing and
move blocking, and to other complexity reduction methods for MPC based on basis
functions. We show the good performance of the proposed techniques in numerical
tests and in a linearized MPC problem of a nonlinear benchmark process.
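As a rough illustration of the two ingredients above, the following minimal numpy sketch eliminates the equality constraints of a linearly constrained least-squares problem by a complete QR factorization and then restricts the remaining free variables to a few dominant right singular directions. This is a hedged sketch, not the authors' implementation: the helper name `solve_reduced_ls`, the choice of reduction subspace (top-`r` right singular vectors of the reduced matrix), and the random test data are illustrative assumptions; in the paper the reduction subspace is learned offline from parametric training data and combined with $K$-SVD partitioning.

```python
# Minimal sketch (not the paper's code): QR-based elimination of equality
# constraints, then SVD-based reduction of the remaining variables.
import numpy as np

def solve_reduced_ls(A, b, C, d, r):
    """min ||A x - b||^2  s.t.  C x = d, with the free variables
    restricted to the r dominant right singular directions."""
    p = C.shape[0]
    # Complete QR of C^T: the first p columns of Q span range(C^T),
    # the remaining columns span the null space of C.
    Q, R = np.linalg.qr(C.T, mode="complete")
    Q1, Q2 = Q[:, :p], Q[:, p:]
    R1 = R[:p, :]
    # Particular solution of C x = d (using C = R1^T Q1^T).
    x_part = Q1 @ np.linalg.solve(R1.T, d)
    # Unconstrained least squares in the null-space coordinates.
    A2 = A @ Q2
    b2 = b - A @ x_part
    # Variable reduction: keep the r dominant right singular directions.
    _, _, Vt = np.linalg.svd(A2, full_matrices=False)
    V_r = Vt[:r, :].T
    z, *_ = np.linalg.lstsq(A2 @ V_r, b2, rcond=None)
    return x_part + Q2 @ (V_r @ z)

# Tiny usage example with random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)); b = rng.standard_normal(30)
C = rng.standard_normal((3, 10)); d = rng.standard_normal(3)
x = solve_reduced_ls(A, b, C, d, r=4)
print(np.linalg.norm(C @ x - d))  # ~1e-15: equality constraints hold exactly
```

Because the reduction acts only on the null-space coordinates, the equality constraints remain satisfied exactly for any choice of `r`.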
Related papers
- Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This work aligns partially overlapping point sets by minimizing the objective function of the robust point matching (RPM) algorithm.
A branch-and-bound (BnB) algorithm is devised that branches only over the transformation parameters, thereby boosting the convergence rate.
Empirical evaluations demonstrate better robustness of the proposed method against non-rigid deformation, positional noise, and outliers when compared with prevailing state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-14T13:28:57Z)
- Numerical Optimizations for Weighted Low-rank Estimation on Language Model [73.12941276331316]
Singular value decomposition (SVD) is one of the most popular compression methods that approximates a target matrix with smaller matrices.
Standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.
We show that our method can perform better than current SOTA methods in neural-based language models.
arXiv Detail & Related papers (2022-11-02T00:58:02Z)
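As a point of reference for the entry above, the snippet below shows the plain truncated-SVD compression that serves as the baseline: it treats all matrix entries with equal importance. This is a generic numpy illustration, not the paper's weighted method, and `truncated_svd_factors` is a hypothetical helper name.

```python
# Generic truncated-SVD compression of a weight matrix W into two rank-r
# factors; this is the unweighted baseline, not the paper's weighted variant.
import numpy as np

def truncated_svd_factors(W, r):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # W ~= (U_r * s_r) @ Vt_r, storing (m + n) * r numbers instead of m * n.
    return U[:, :r] * s[:r], Vt[:r, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
L, R = truncated_svd_factors(W, r=8)
print(np.linalg.norm(W - L @ R) / np.linalg.norm(W))  # relative error
```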
- A Novel Maximum-Entropy-Driven Technique for Low-Rank Orthogonal Nonnegative Matrix Factorization with $\ell_0$-Norm Sparsity Constraint [0.0]
In data-driven control and machine learning, large matrices often need to be broken down into smaller, low-rank factors.
This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem.
The proposed method achieves comparable or improved reconstruction errors in line with the literature.
arXiv Detail & Related papers (2022-10-06T04:30:59Z)
- Partial Least Square Regression via Three-factor SVD-type Manifold Optimization for EEG Decoding [4.0204191666595595]
We propose a new method for solving partial least-squares regression, named PLSR via optimization on the bi-Grassmann manifold (PLSRbiGr).
PLSRbiGr is validated in a variety of experiments on decoding EEG signals in motor imagery (MI) and steady-state visual evoked potential (SSVEP) tasks.
arXiv Detail & Related papers (2022-08-09T11:57:02Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- MARS via LASSO [1.5199437137239338]
We propose and study a natural variant of the MARS method.
Our method is based on least-squares estimation over a convex class of functions.
Our estimator can be computed via finite-dimensional convex optimization.
arXiv Detail & Related papers (2021-11-23T07:30:33Z)
- Alternating Direction Method of Multipliers for Quantization [15.62692130672419]
We study the performance of the Alternating Direction Method of Multipliers for Quantization ($\texttt{ADMM-Q}$) algorithm.
We develop a few variants of $\texttt{ADMM-Q}$ that can handle inexact update rules.
We empirically evaluate the efficacy of our proposed approaches.
arXiv Detail & Related papers (2020-09-08T01:58:02Z)
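For the $\texttt{ADMM-Q}$ entry above, a generic ADMM splitting for a quantization-constrained least-squares problem can be sketched as follows. This is only an illustration of the idea: the paper's exact update rules, its inexact variants, and the helper name `admm_quantized_ls` are assumptions, not taken from the paper.

```python
# Generic ADMM sketch: min 0.5*||A x - b||^2 subject to every entry of x
# lying in a finite set of quantization levels (in the spirit of ADMM-Q).
import numpy as np

def admm_quantized_ls(A, b, levels, rho=1.0, iters=100):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # smooth subproblem
        v = x + u
        # Projection onto the quantization set: round each entry to the
        # nearest admissible level.
        z = levels[np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)]
        u += x - z                                   # dual update
    return z  # quantized iterate

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10)); b = rng.standard_normal(40)
levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(admm_quantized_ls(A, b, levels))
```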
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
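For the conditional-gradient entry above, the sketch below shows a plain deterministic Frank-Wolfe iteration on a smooth quadratic over an $\ell_1$ ball. The paper's variance reduction, smoothing, and per-iteration constraint sampling are not reproduced; `frank_wolfe_l1` and the test data are illustrative assumptions.

```python
# Generic Frank-Wolfe (conditional gradient) sketch:
# min 0.5*||A x - b||^2  s.t.  ||x||_1 <= tau.
import numpy as np

def frank_wolfe_l1(A, b, tau, iters=200):
    x = np.zeros(A.shape[1])
    for t in range(iters):
        grad = A.T @ (A @ x - b)
        # Linear minimization oracle over the l1 ball: a signed vertex
        # at the coordinate with the largest gradient magnitude.
        i = np.argmax(np.abs(grad))
        s = np.zeros_like(x); s[i] = -tau * np.sign(grad[i])
        x += 2.0 / (t + 2.0) * (s - x)  # standard step size 2/(t+2)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)); b = rng.standard_normal(50)
print(np.round(frank_wolfe_l1(A, b, tau=2.0), 3))
```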
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
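As a rough illustration of the sketching entry above, the snippet below applies plain sketch-and-solve with a Gaussian embedding to an L2-regularized least-squares problem. It is a hedged example: the SRHT embedding and the effective-dimension-adaptive choice of sketch size studied in the paper are not reproduced, and `sketched_ridge` and the sketch size `m=200` are illustrative assumptions.

```python
# Generic sketch-and-solve for ridge regression with a Gaussian embedding:
# approximate argmin ||A x - b||^2 + lam*||x||^2 from a much smaller problem.
import numpy as np

def sketched_ridge(A, b, lam, m):
    rng = np.random.default_rng(0)
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)  # embedding
    SA, Sb = S @ A, S @ b
    n = A.shape[1]
    return np.linalg.solve(SA.T @ SA + lam * np.eye(n), SA.T @ Sb)

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 50)); b = rng.standard_normal(2000)
x_sketch = sketched_ridge(A, b, lam=1.0, m=200)  # 200-row sketched problem
x_exact = np.linalg.solve(A.T @ A + np.eye(50), A.T @ b)
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```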
This list is automatically generated from the titles and abstracts of the papers on this site.