How Do You Want Your Greedy: Simultaneous or Repeated?
- URL: http://arxiv.org/abs/2009.13998v2
- Date: Tue, 13 Jul 2021 18:02:50 GMT
- Title: How Do You Want Your Greedy: Simultaneous or Repeated?
- Authors: Moran Feldman, Christopher Harshaw, Amin Karbasi
- Abstract summary: We present SimultaneousGreedys, a deterministic algorithm for constrained submodular maximization.
We also present SubmodularGreedy.jl, a Julia package which implements these algorithms.
- Score: 41.440943886672855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SimultaneousGreedys, a deterministic algorithm for constrained
submodular maximization. At a high level, the algorithm maintains $\ell$
solutions and greedily updates them in a simultaneous fashion.
SimultaneousGreedys achieves the tightest known approximation guarantees for
both $k$-extendible systems and the more general $k$-systems, which are
$(k+1)^2/k = k + \mathcal{O}(1)$ and $(1 + \sqrt{k+2})^2 = k +
\mathcal{O}(\sqrt{k})$, respectively. This is in contrast to previous
algorithms, which are designed to provide tight approximation guarantees in one
setting, but not both. We also improve the analysis of RepeatedGreedy, showing
that it achieves an approximation ratio of $k + \mathcal{O}(\sqrt{k})$ for
$k$-systems when allowed to run for $\mathcal{O}(\sqrt{k})$ iterations, an
improvement in both the runtime and approximation over previous analyses. We
demonstrate that both algorithms may be modified to run in nearly linear time
with an arbitrarily small loss in the approximation.
Both SimultaneousGreedys and RepeatedGreedy are flexible enough to
incorporate the intersection of $m$ additional knapsack constraints, while
retaining similar approximation guarantees: both algorithms yield an
approximation guarantee of roughly $k + 2m + \mathcal{O}(\sqrt{k+m})$ for
$k$-systems and SimultaneousGreedys enjoys an improved approximation guarantee
of $k+2m + \mathcal{O}(\sqrt{m})$ for $k$-extendible systems. To complement our
algorithmic contributions, we provide a hardness result which states that no
algorithm making polynomially many oracle queries can achieve an approximation
better than $k + 1/2 + \varepsilon$. We also present SubmodularGreedy.jl, a
Julia package which implements these algorithms and may be downloaded at
https://github.com/crharshaw/SubmodularGreedy.jl . Finally, we test the
effectiveness of these algorithms on real datasets.
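The abstract describes the core idea of SimultaneousGreedys: maintain $\ell$ candidate solutions and grow them greedily at the same time, always making the single best feasible addition across all of them. Below is a minimal Python sketch of that simultaneous-greedy idea under a generic feasibility oracle. It is an illustrative simplification only: the function names and interface are assumptions, it is not the paper's algorithm nor the SubmodularGreedy.jl API (which is Julia), and it omits the knapsack constraints and the nearly-linear-time thresholding variants discussed in the abstract.

```python
# Illustrative sketch of the simultaneous-greedy idea: keep several candidate
# solutions and, at every step, apply the single best feasible greedy addition
# across all of them, then return the best solution found. Names and the
# feasibility-oracle interface are assumptions, not the SubmodularGreedy.jl API.

def simultaneous_greedy(ground_set, f, is_feasible, num_solutions):
    """f(S): set -> float (objective); is_feasible(S): set -> bool
    (independence oracle for the constraint, e.g. a k-system)."""
    solutions = [set() for _ in range(num_solutions)]
    remaining = set(ground_set)
    while remaining:
        best = None  # (marginal gain, element, index of the solution to extend)
        for e in remaining:
            for j, sol in enumerate(solutions):
                candidate = sol | {e}
                if is_feasible(candidate):
                    gain = f(candidate) - f(sol)
                    if best is None or gain > best[0]:
                        best = (gain, e, j)
        if best is None or best[0] <= 0:
            break  # no feasible addition improves any of the solutions
        _, e, j = best
        solutions[j].add(e)
        remaining.discard(e)
    return max(solutions, key=f)


# Toy usage: a modular (hence submodular) objective under a cardinality
# constraint, which is a 1-system; two solutions are maintained for illustration.
if __name__ == "__main__":
    weights = {"a": 3.0, "b": 2.0, "c": 2.0, "d": 1.0}
    f = lambda S: sum(weights[x] for x in S)
    best = simultaneous_greedy(weights.keys(), f, lambda S: len(S) <= 2, num_solutions=2)
    print(sorted(best))  # e.g. ['a', 'b']
```

Maintaining several solutions at once is what lets a single procedure match the best known guarantees for both $k$-extendible systems and general $k$-systems, rather than tuning the algorithm to one constraint family.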
Related papers
- Implicit High-Order Moment Tensor Estimation and Learning Latent Variable Models [39.33814194788341]
We study the task of learning latent-variable models.
Motivated by such applications, we develop a general efficient algorithm for implicit moment computation.
By leveraging our general algorithm, we obtain the first efficient learners for the following models.
arXiv Detail & Related papers (2024-11-23T23:13:24Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Private estimation algorithms for stochastic block models and mixture
models [63.07482515700984]
We provide general tools for designing efficient private estimation algorithms.
We give the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems [11.15373699918747]
We develop two new algorithms to approximate a solution of nonlinear equations in large-scale settings.
We apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization.
arXiv Detail & Related papers (2023-01-08T21:46:27Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $1+\varepsilon$ approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Practical and Parallelizable Algorithms for Non-Monotone Submodular
Maximization with Size Constraint [20.104148319012854]
We present combinatorial and parallelizable algorithms for maximizing a submodular function, not necessarily monotone, with respect to a size constraint.
We improve the best approximation factor achieved by an algorithm with optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z) - Continuous Submodular Maximization: Beyond DR-Submodularity [48.04323002262095]
We first propose a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+.
We then propose Coordinate-Ascent++, which achieves a tight $(1-1/e-\varepsilon)$-approximation guarantee while performing the same number of iterations.
The computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\varepsilon}+n\log n)$.
arXiv Detail & Related papers (2020-06-21T06:57:59Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det\left(\sum_{i \in S} v_i v_i^\top\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z) - Learning Sparse Classifiers: Continuous and Mixed Integer Optimization
Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50{,}000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z)