Maximizing Determinants under Matroid Constraints
- URL: http://arxiv.org/abs/2004.07886v1
- Date: Thu, 16 Apr 2020 19:16:38 GMT
- Title: Maximizing Determinants under Matroid Constraints
- Authors: Vivek Madan, Aleksandar Nikolov, Mohit Singh, Uthaipon Tantipongpipat
- Abstract summary: We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
- Score: 69.25768526213689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given vectors $v_1,\dots,v_n\in\mathbb{R}^d$ and a matroid $M=([n],I)$, we
study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in
S}v_i v_i^\top)$ is maximized. This problem appears in a diverse set of areas
such as experimental design, fair allocation of goods, network design, and
machine learning. The current best results include an $e^{2k}$-estimation for
any matroid of rank $k$ and a $(1+\epsilon)^d$-approximation for a uniform
matroid of rank $k\ge d+\frac d\epsilon$, where the rank $k\ge d$ denotes the
desired size of the optimal set. Our main result is a new approximation
algorithm with an approximation guarantee that depends only on the dimension
$d$ of the vectors and not on the size $k$ of the output set. In particular, we
show an $(O(d))^{d}$-estimation and an $(O(d))^{d^3}$-approximation for any
matroid, giving a significant improvement over prior work when $k\gg d$.
Our result relies on the existence of an optimal solution to a convex
programming relaxation for the problem which has sparse support; in particular,
no more than $O(d^2)$ variables of the solution have fractional values. The
sparsity results rely on the interplay between the first-order optimality
conditions for the convex program and matroid theory. We believe that the
techniques introduced to show sparsity of optimal solutions to convex programs
will be of independent interest. We also give a randomized algorithm that
rounds a sparse fractional solution to a feasible integral solution to the
original problem. To show the approximation guarantee, we utilize recent works
on strongly log-concave polynomials and show new relationships between
different convex programs studied for the problem. Finally, we use the
estimation algorithm and sparsity results to give an efficient deterministic
approximation algorithm with an approximation guarantee that depends solely on
the dimension $d$.
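At the core of the algorithm is a convex relaxation that maximizes $\log\det(\sum_i x_i v_i v_i^\top)$ over fractional points of the matroid base polytope. The snippet below is a minimal illustrative sketch (not the authors' code) for the special case of a uniform matroid of rank $k$, where the base polytope reduces to a cardinality constraint; the function name and the use of numpy/cvxpy with the SCS solver are assumptions for illustration.

```python
# Minimal sketch of the log-determinant relaxation for a uniform matroid:
#   maximize   log det( sum_i x_i v_i v_i^T )
#   subject to 0 <= x_i <= 1  and  sum_i x_i = k.
# The general matroid case replaces the cardinality constraint with the
# matroid base polytope. Assumes numpy and cvxpy with a conic solver (SCS).
import numpy as np
import cvxpy as cp

def solve_uniform_relaxation(V: np.ndarray, k: int) -> np.ndarray:
    """V has shape (n, d); returns a fractional solution x in [0, 1]^n."""
    n, d = V.shape
    x = cp.Variable(n)
    A = V.T                           # d x n matrix whose columns are the v_i
    M = A @ cp.diag(x) @ A.T          # sum_i x_i v_i v_i^T, affine in x
    prob = cp.Problem(cp.Maximize(cp.log_det(M)),
                      [x >= 0, x <= 1, cp.sum(x) == k])
    prob.solve(solver=cp.SCS)
    return x.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = rng.standard_normal((60, 4))          # n = 60 vectors in R^4, k = 12 >= d
    x = solve_uniform_relaxation(V, k=12)
    # The paper's sparsity result: an optimal solution exists with at most
    # O(d^2) fractional coordinates, independent of n and k.
    fractional = int(np.sum((x > 1e-3) & (x < 1 - 1e-3)))
    print("fractional coordinates:", fractional, "of", len(x))
```

In the paper, such a sparse fractional solution is then rounded by a randomized procedure to a basis of the matroid; the sketch above only illustrates the relaxation step under the stated assumptions.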
Related papers
- Combinatorial optimization of the coefficient of determination [0.0]
We develop an efficient algorithm for selecting the $k$-subset of $n$ points in the plane with the highest coefficient of determination.
We experimentally demonstrate our method's optimality over several million trials up to $n=30$ without error.
arXiv Detail & Related papers (2024-10-12T00:53:25Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Dueling Optimization with a Monotone Adversary [35.850072415395395]
We study the problem of dueling optimization with a monotone adversary, which is a generalization of (noiseless) dueling convex optimization.
The goal is to design an online algorithm to find a minimizer $\mathbf{x}^*$ of a function $f\colon X \to \mathbb{R}$, where $X \subseteq \mathbb{R}^d$.
arXiv Detail & Related papers (2023-11-18T23:55:59Z) - Composable Coresets for Determinant Maximization: Greedy is Almost
Optimal [2.849878266188877]
Given a set of $n$ vectors in $\mathbb{R}^d$, the goal of the determinant maximization problem is to pick $k$ vectors with the maximum volume.
We show that the widely-used Greedy algorithm also provides composable coresets with an almost optimal approximation factor of $O(k)^{3k}$ (a minimal greedy sketch appears after this list).
arXiv Detail & Related papers (2023-09-26T21:46:44Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Optimal Gradient Sliding and its Application to Distributed Optimization
Under Similarity [121.83085611327654]
We study structured convex optimization problems with additive objective $r := p + q$, where $r$ is $\mu$-strongly convex, in the setting of distributed optimization under similarity.
We propose a method that solves the problem via master-to-agents communication and local calls.
The complexity of the proposed method is much sharper than the $\mathcal{O}(\sqrt{L_q/\mu})$ rate of prior methods.
arXiv Detail & Related papers (2022-05-30T14:28:02Z) - Near-Optimal Regret Bounds for Contextual Combinatorial Semi-Bandits
with Linear Payoff Functions [53.77572276969548]
We show that the C$^2$UCB algorithm has the optimal regret bound $\tilde{O}(d\sqrt{kT} + dk)$ for the partition matroid constraints.
For general constraints, we propose an algorithm that modifies the reward estimates of arms in the C$^2$UCB algorithm.
arXiv Detail & Related papers (2021-01-20T04:29:18Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
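For the Greedy determinant-maximization entry above, the following is a minimal sketch of the standard greedy volume-maximization heuristic (an illustrative assumption, not the cited paper's code): at each step, pick the vector with the largest component orthogonal to the span of the current selection, which maximizes the volume gain.

```python
# Greedy volume/determinant maximization via Gram-Schmidt-style updates.
# Assumes numpy; the function name is hypothetical.
import numpy as np

def greedy_volume(V: np.ndarray, k: int) -> list[int]:
    """V has shape (n, d); greedily selects up to min(k, d) indices."""
    n, d = V.shape
    residual = V.astype(float).copy()     # components orthogonal to the chosen span
    chosen: list[int] = []
    for _ in range(min(k, d)):
        norms = np.linalg.norm(residual, axis=1)
        norms[chosen] = -1.0              # never re-pick an already chosen index
        i = int(np.argmax(norms))
        if norms[i] <= 1e-12:             # remaining vectors lie in the current span
            break
        chosen.append(i)
        u = residual[i] / norms[i]        # new orthonormal direction
        residual -= np.outer(residual @ u, u)   # project it out of all residuals
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    V = rng.standard_normal((30, 5))
    S = greedy_volume(V, k=5)
    gram = V[S] @ V[S].T                  # k x k Gram matrix of the selection
    print(S, np.linalg.det(gram))         # squared volume of the chosen vectors
```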
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.