Fast and Near-Optimal Diagonal Preconditioning
- URL: http://arxiv.org/abs/2008.01722v2
- Date: Wed, 3 Nov 2021 07:15:45 GMT
- Title: Fast and Near-Optimal Diagonal Preconditioning
- Authors: Arun Jambulapati, Jerry Li, Christopher Musco, Aaron Sidford, Kevin
Tian
- Abstract summary: We show how to best improve $\mathbf{A}$'s condition number by left or right diagonal rescaling.
We give a solver for structured mixed packing and covering semidefinite programs which computes a constant-factor optimal scaling for $\mathbf{A}$ in $\widetilde{O}(\text{nnz}(\mathbf{A}) \cdot \text{poly}(\kappa^\star))$ time.
- Score: 46.240079312553796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The convergence rates of iterative methods for solving a linear system
$\mathbf{A} x = b$ typically depend on the condition number of the matrix
$\mathbf{A}$. Preconditioning is a common way of speeding up these methods by
reducing that condition number in a computationally inexpensive way. In this
paper, we revisit the decades-old problem of how to best improve $\mathbf{A}$'s
condition number by left or right diagonal rescaling. We make progress on this
problem in several directions.
First, we provide new bounds for the classic heuristic of scaling
$\mathbf{A}$ by its diagonal values (a.k.a. Jacobi preconditioning). We prove
that this approach reduces $\mathbf{A}$'s condition number to within a
quadratic factor of the best possible scaling. Second, we give a solver for
structured mixed packing and covering semidefinite programs (MPC SDPs) which
computes a constant-factor optimal scaling for $\mathbf{A}$ in
$\widetilde{O}(\text{nnz}(\mathbf{A}) \cdot \text{poly}(\kappa^\star))$ time;
this matches the cost of solving the linear system after scaling up to a
$\widetilde{O}(\text{poly}(\kappa^\star))$ factor. Third, we demonstrate that a
sufficiently general width-independent MPC SDP solver would imply near-optimal
runtimes for the scaling problems we consider, and natural variants concerned
with measures of average conditioning.
Finally, we highlight connections of our preconditioning techniques to
semi-random noise models, as well as applications in reducing risk in several
statistical regression models.
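As a concrete illustration of the Jacobi heuristic from the first result, here is a minimal numerical sketch (ours, not code from the paper): symmetrically rescale a badly scaled positive definite matrix by its diagonal entries and compare condition numbers before and after.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
s = rng.uniform(1e-2, 1e2, size=n)                 # wildly varying coordinate scales
G = rng.standard_normal((n, 2 * n))
A = np.diag(s) @ (G @ G.T / (2 * n)) @ np.diag(s)  # SPD but badly scaled

d = np.sqrt(np.diag(A))                            # Jacobi scaling built from diag(A)
A_jac = A / np.outer(d, d)                         # D^{-1/2} A D^{-1/2}, unit diagonal

print("cond(A):               ", np.linalg.cond(A))
print("cond(D^-1/2 A D^-1/2): ", np.linalg.cond(A_jac))
```

The paper's first result guarantees that the condition number after this rescaling is within a quadratic factor of the best achievable by any diagonal scaling.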
Related papers
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z)
- Faster Linear Systems and Matrix Norm Approximation via Multi-level Sketched Preconditioning [10.690769339903941]
We present a new class of preconditioned iterative methods for solving linear systems of the form $Ax = b$.
Our methods are based on constructing a low-rank Nyström approximation to $A$ using sparse random sketching.
We prove that the convergence of our methods depends on a natural average condition number of $A$, which improves as the rank of the Nyström approximation increases.
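As a rough sketch of the ingredient named above (ours, not the paper's code), the following forms a rank-$k$ Nyström approximation of a PSD matrix; it uses a dense Gaussian test matrix for simplicity, whereas the paper constructs the approximation via sparse random sketching.

```python
import numpy as np

def nystrom(A, k, rng):
    # Rank-k Nystrom approximation of PSD A:
    # A_hat = (A @ Om) @ pinv(Om.T @ A @ Om) @ (A @ Om).T
    Om = rng.standard_normal((A.shape[0], k))  # Gaussian sketch; the paper uses sparse sketches
    Y = A @ Om
    return Y @ np.linalg.pinv(Om.T @ Y) @ Y.T

rng = np.random.default_rng(0)
G = rng.standard_normal((300, 300))
A = G @ G.T                                    # PSD test matrix
for k in (10, 50, 150):
    err = np.linalg.norm(A - nystrom(A, k, rng), 2) / np.linalg.norm(A, 2)
    print(f"rank {k}: relative spectral error {err:.3f}")
```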
arXiv Detail & Related papers (2024-05-09T15:53:43Z)
- Structured Semidefinite Programming for Recovering Structured Preconditioners [41.28701750733703]
We give an algorithm which, given positive definite $\mathbf{K} \in \mathbb{R}^{d \times d}$ with $\mathrm{nnz}(\mathbf{K})$ nonzero entries, computes an $\epsilon$-optimal diagonal preconditioner in time.
We attain our results via new algorithms for a class of semidefinite programs we call matrix-dictionary approximation SDPs.
arXiv Detail & Related papers (2023-10-27T16:54:29Z)
- An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to large-scale settings and demonstrate the efficiency and robustness of our methods on large semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z)
- Near Optimal Heteroscedastic Regression with Symbiotic Learning [29.16456701187538]
We consider the problem of heteroscedastic linear regression.
We can estimate $\mathbf{w}^*$ in squared norm up to an error of $\tilde{O}\left(\|\mathbf{f}^*\|^2 \cdot \left(\frac{1}{n} + \left(\frac{d}{n}\right)^2\right)\right)$ and prove a matching lower bound.
arXiv Detail & Related papers (2023-06-25T16:32:00Z)
- Private Isotonic Regression [54.32252900997422]
We consider the problem of isotonic regression over a partially ordered set (poset) $\mathcal{X}$ and for any Lipschitz loss function.
We obtain a pure-DP algorithm that has an expected excess empirical risk of roughly $\mathrm{width}(\mathcal{X}) \cdot \log|\mathcal{X}| / n$, where $\mathrm{width}(\mathcal{X})$ is the width of the poset.
We show that the bounds above are essentially the best that can be achieved.
arXiv Detail & Related papers (2022-10-27T05:08:07Z)
- Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity [15.18055488087588]
We give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}(\sum_{i=1}^n d_i \log(1/\epsilon))$ gradient computations.
Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
arXiv Detail & Related papers (2022-08-07T20:53:42Z)
- Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity [121.83085611327654]
We study structured convex optimization problems with additive objective $r := p + q$, where $r$ is $\mu$-strongly convex.
We propose a method to solve such problems via master-to-agents communication and local calls.
The proposed method is much sharper than the $\mathcal{O}(\sqrt{L_q/\mu})$ bound of existing methods.
arXiv Detail & Related papers (2022-05-30T14:28:02Z)
- Coordinate Methods for Matrix Games [41.821881312775496]
We develop primal-dual coordinate methods for solving bilinear saddle-point problems of the form $\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} y^\top \mathbf{A} x$.
Our methods push existing sublinear methods towards their limits in terms of per-iteration complexity and sample complexity.
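To make the saddle-point objective concrete, here is a classical multiplicative-weights baseline for matrix games over probability simplices (a textbook method, not the paper's coordinate approach; all names here are ours).

```python
import numpy as np

def matrix_game_mwu(A, T=2000, eta=0.05):
    # Simultaneous multiplicative-weights updates for min_x max_y y^T A x
    # over probability simplices; averaged iterates approach an equilibrium.
    m, n = A.shape
    x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        gx, gy = A.T @ y, A @ x                  # payoff gradients at current iterates
        x = x * np.exp(-eta * gx); x /= x.sum()  # x moves to decrease y^T A x
        y = y * np.exp(eta * gy);  y /= y.sum()  # y moves to increase it
        x_avg += x
        y_avg += y
    return x_avg / T, y_avg / T

rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(30, 40))
x, y = matrix_game_mwu(A)
print("duality gap:", float(np.max(A @ x) - np.min(A.T @ y)))  # near 0 at equilibrium
```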
arXiv Detail & Related papers (2020-09-17T17:55:03Z)
- Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning; a toy illustration of the objective follows this list.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
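As the toy illustration promised in the last entry: a greedy heuristic for the determinant objective in the uniform-matroid (cardinality-constrained) special case. This simple baseline is ours and is not the paper's algorithm, which handles general matroid constraints.

```python
import numpy as np

def greedy_max_det(V, k):
    # Greedily pick k rows of V to (approximately) maximize det(sum_{i in S} v_i v_i^T).
    n, d = V.shape
    S = []
    M = 1e-9 * np.eye(d)   # small ridge so early log-determinants are finite
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(n):
            if i in S:
                continue
            _, logdet = np.linalg.slogdet(M + np.outer(V[i], V[i]))
            if logdet > best_val:
                best, best_val = i, logdet
        S.append(best)
        M += np.outer(V[best], V[best])
    return S

rng = np.random.default_rng(1)
V = rng.standard_normal((50, 5))
print(greedy_max_det(V, 5))    # indices of the chosen vectors
```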
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.