Maximizing Determinants under Matroid Constraints
- URL: http://arxiv.org/abs/2004.07886v1
- Date: Thu, 16 Apr 2020 19:16:38 GMT
- Title: Maximizing Determinants under Matroid Constraints
- Authors: Vivek Madan, Aleksandar Nikolov, Mohit Singh, Uthaipon Tantipongpipat
- Abstract summary: We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
- Score: 69.25768526213689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given vectors $v_1,\dots,v_n\in\mathbb{R}^d$ and a matroid $M=([n],I)$, we
study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in
S}v_i v_i^\top)$ is maximized. This problem appears in a diverse set of areas
such as experimental design, fair allocation of goods, network design, and
machine learning. The current best results include an $e^{2k}$-estimation for
any matroid of rank $k$ and a $(1+\epsilon)^d$-approximation for a uniform
matroid of rank $k\ge d+\frac d\epsilon$, where the rank $k\ge d$ denotes the
desired size of the optimal set. Our main result is a new approximation
algorithm with an approximation guarantee that depends only on the dimension
$d$ of the vectors and not on the size $k$ of the output set. In particular, we
show an $(O(d))^{d}$-estimation and an $(O(d))^{d^3}$-approximation for any
matroid, giving a significant improvement over prior work when $k\gg d$.
Our result relies on the existence of an optimal solution to a convex
programming relaxation for the problem which has sparse support; in particular,
no more than $O(d^2)$ variables of the solution have fractional values. The
sparsity results rely on the interplay between the first-order optimality
conditions for the convex program and matroid theory. We believe that the
techniques introduced to show sparsity of optimal solutions to convex programs
will be of independent interest. We also give a randomized algorithm that
rounds a sparse fractional solution to a feasible integral solution to the
original problem. To show the approximation guarantee, we utilize recent works
on strongly log-concave polynomials and show new relationships between
different convex programs studied for the problem. Finally, we use the
estimation algorithm and sparsity results to give an efficient deterministic
approximation algorithm with an approximation guarantee that depends solely on
the dimension $d$.
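As a concrete illustration of the objective only (not the paper's algorithm), the sketch below evaluates $\det(\sum_{i \in S} v_i v_i^\top)$ and brute-forces the special case of a uniform matroid, where every $k$-subset of $[n]$ is a basis; all function names and the tiny instance are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def det_objective(V, S):
    """det(sum_{i in S} v_i v_i^T) for the columns V[:, i], i in S."""
    M = sum(np.outer(V[:, i], V[:, i]) for i in S)
    return np.linalg.det(M)

def brute_force_uniform(V, k):
    """Exhaustive search over all k-subsets; feasible only for tiny n.
    This is the uniform-matroid special case, where any k-subset is a basis."""
    n = V.shape[1]
    return max(combinations(range(n), k), key=lambda S: det_objective(V, S))

rng = np.random.default_rng(0)
V = rng.standard_normal((3, 6))    # d = 3 vectors, n = 6 of them
best = brute_force_uniform(V, 4)   # desired rank k = 4 >= d
```

For a general matroid one would restrict the search to its bases instead of all $k$-subsets; the exponential enumeration here is exactly what the paper's convex relaxation and rounding avoid.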
Related papers
- Dueling Optimization with a Monotone Adversary [35.850072415395395]
We study the problem of dueling optimization with a monotone adversary, which is a generalization of (noiseless) dueling convex optimization.
The goal is to design an online algorithm to find a minimizer $\mathbf{x}^*$ for a function $f\colon X \to \mathbb{R}$, where $X \subseteq \mathbb{R}^d$.
arXiv Detail & Related papers (2023-11-18T23:55:59Z) - Composable Coresets for Determinant Maximization: Greedy is Almost
Optimal [2.849878266188877]
Given a set of $n$ vectors in $\mathbb{R}^d$, the goal of the determinant maximization problem is to pick $k$ vectors with the maximum volume.
We show that the widely-used Greedy algorithm also provides composable coresets with an almost optimal approximation factor of $O(k)^{3k}$.
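The greedy approach referenced above can be sketched as orthogonal-residual selection: repeatedly pick the vector with the largest component orthogonal to the span of those already chosen. This is a simplified illustration of greedy volume maximization, not the coreset construction from the paper.

```python
import numpy as np

def greedy_determinant(V, k):
    """Greedy volume maximization: at each step pick the column with the
    largest residual norm after projecting out the span of prior picks."""
    R = V.astype(float).copy()      # residuals of all columns
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        i = int(np.argmax(norms))
        chosen.append(i)
        u = R[:, i] / norms[i]      # unit direction of the new pick
        R -= np.outer(u, u @ R)     # project every residual off u
    return chosen

# Two long, orthogonal columns dominate the two short ones.
V = np.array([[10.0, 0.0, 1.0, 0.5],
              [0.0, 10.0, 1.0, 0.5],
              [0.0, 0.0, 0.1, 0.0]])
sel = greedy_determinant(V, 2)
```

Each pick multiplies the spanned volume by the chosen residual norm, which is why greedy is a natural baseline for this objective.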
arXiv Detail & Related papers (2023-09-26T21:46:44Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, expressed as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Optimal Gradient Sliding and its Application to Distributed Optimization
Under Similarity [121.83085611327654]
We study structured convex optimization problems with additive objective $r := p + q$, where $r$ is $\mu$-strongly convex.
We propose a method to solve the problem using master-to-agents communication and local calls.
The proposed method yields a much sharper rate than the $\mathcal{O}(\sqrt{L_q/\mu})$ bound of prior methods.
arXiv Detail & Related papers (2022-05-30T14:28:02Z) - Near-Optimal Regret Bounds for Contextual Combinatorial Semi-Bandits
with Linear Payoff Functions [53.77572276969548]
We show that the C$^2$UCB algorithm has the optimal regret bound $\tilde{O}(d\sqrt{kT} + dk)$ for partition matroid constraints.
For general constraints, we propose an algorithm that modifies the reward estimates of arms in the C$2$UCB algorithm.
arXiv Detail & Related papers (2021-01-20T04:29:18Z) - Local Search Algorithms for Rank-Constrained Convex Optimization [7.736462653684946]
We propose greedy and local search algorithms for rank-constrained convex optimization.
We show that if the rank-restricted condition number of $R$ is $\kappa$, a solution $A$ with rank $O(r^* \cdot \min\{\kappa \log \frac{R(\mathbf{0}) - R(A^*)}{\epsilon}, \kappa^2\})$ and $R(A) \leq R(A^*) + \epsilon$ can be recovered, where $A^*$ is an optimal solution of rank $r^*$.
arXiv Detail & Related papers (2021-01-15T18:52:02Z) - No quantum speedup over gradient descent for non-smooth convex
optimization [22.16973542453584]
We are given black-box access to a (not necessarily smooth) convex function $f\colon \mathbb{R}^n \to \mathbb{R}$ and its (sub)gradient.
Our goal is to find an $epsilon$-approximate minimum of $f$ starting from a point that is distance at most $R$ from the true minimum.
We show that although the function family used in the lower bound is hard for randomized algorithms, it can be solved using $O(GR/epsilon)$ quantum queries.
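For context, the classical $O((GR/\epsilon)^2)$ query bound that sets the stage for this comparison is achieved by the subgradient method. Below is a minimal, illustrative sketch under stated assumptions (fixed step size, unconstrained domain, so no projection step); it is not the paper's construction.

```python
import numpy as np

def subgradient_descent(f, grad, x0, G, R, eps):
    """Subgradient method for a G-Lipschitz convex f whose minimizer lies
    within distance R of x0: O((G*R/eps)^2) queries give an eps-approximate
    minimum. Returns the best iterate seen."""
    T = int(np.ceil((G * R / eps) ** 2))   # classical query budget
    eta = R / (G * np.sqrt(T))             # standard fixed step size
    x = np.array(x0, float)
    best = x.copy()
    for _ in range(T):
        if f(x) < f(best):
            best = x.copy()
        x = x - eta * grad(x)              # one (sub)gradient query per step
    return best

# Minimize the non-smooth f(x) = |x| (G = 1) from x0 = 1.0, minimum within R = 2.
best = subgradient_descent(lambda x: float(np.abs(x).sum()), np.sign,
                           [1.0], G=1.0, R=2.0, eps=0.1)
```

The lower-bound construction in the paper shows this classical rate cannot be beaten by quantum queries in general dimension, even though the specific hard family above it admits an $O(GR/\epsilon)$ quantum algorithm.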
arXiv Detail & Related papers (2020-10-05T06:32:47Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimension $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.