Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel
- URL: http://arxiv.org/abs/2111.07917v3
- Date: Mon, 19 Aug 2024 16:31:32 GMT
- Title: Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel
- Authors: Yixin Chen, Tonmoy Dey, Alan Kuhnle
- Abstract summary: The main algorithm is assembled from two components which may be of independent interest.
A variant of LINEARSEQ is shown to have adaptive complexity of $O(\log(n/k))$, which is smaller than that of any previous algorithm in the literature.
- Score: 17.462968952951883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For the problem of maximizing a monotone, submodular function with respect to a cardinality constraint $k$ on a ground set of size $n$, we provide an algorithm that achieves the state-of-the-art in both its empirical performance and its theoretical properties, in terms of adaptive complexity, query complexity, and approximation ratio; that is, it obtains, with high probability, query complexity of $O(n)$ in expectation, adaptivity of $O(\log(n))$, and approximation ratio of nearly $1-1/e$. The main algorithm is assembled from two components which may be of independent interest. The first component of our algorithm, LINEARSEQ, is useful as a preprocessing algorithm to improve the query complexity of many algorithms. Moreover, a variant of LINEARSEQ is shown to have adaptive complexity of $O( \log (n / k) )$ which is smaller than that of any previous algorithm in the literature. The second component is a parallelizable thresholding procedure THRESHOLDSEQ for adding elements with gain above a constant threshold. Finally, we demonstrate that our main algorithm empirically outperforms, in terms of runtime, adaptive rounds, total queries, and objective values, the previous state-of-the-art algorithm FAST in a comprehensive evaluation with six submodular objective functions.
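To make the role of the second component concrete, below is a minimal, purely sequential sketch of the thresholding idea that THRESHOLDSEQ is built around: add elements whose marginal gain meets a fixed threshold, up to the cardinality budget. This is not the paper's parallel procedure, which evaluates many candidates per adaptive round to reach $O(\log(n))$ adaptivity; all names and the toy objective below are hypothetical.

```python
# Minimal sketch (not the paper's implementation): a sequential stand-in for
# the thresholding idea behind THRESHOLDSEQ. The actual procedure parallelizes
# this step, evaluating many candidates per adaptive round; function and
# variable names here are hypothetical.

def threshold_add(f, ground_set, k, tau):
    """Add elements with marginal gain >= tau until k elements are selected.

    f          -- set-function oracle: f(S) -> float (assumed monotone submodular)
    ground_set -- iterable of candidate elements
    k          -- cardinality constraint
    tau        -- fixed gain threshold
    """
    S = set()
    f_S = f(S)
    for x in ground_set:
        if len(S) >= k:
            break
        gain = f(S | {x}) - f_S  # marginal gain of x with respect to current S
        if gain >= tau:
            S.add(x)
            f_S += gain
    return S


if __name__ == "__main__":
    # Toy coverage objective on hypothetical data: f(S) = size of the union.
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a"}}
    cover = lambda S: len(set().union(*[sets[i] for i in S]))
    print(threshold_add(cover, sorted(sets), k=2, tau=1))  # e.g. {1, 2}
```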
Related papers
- Improved Parallel Algorithm for Non-Monotone Submodular Maximization under Knapsack Constraint [0.0]
This work proposes an efficient parallel algorithm for non-monotone submodular maximization under a knapsack constraint.
Our algorithm improves the approximation factor of the existing parallel algorithm from $8+\epsilon$ to $7+\epsilon$ with $O(\log n)$ adaptive complexity.
arXiv Detail & Related papers (2024-09-06T17:17:52Z) - Practical Parallel Algorithms for Non-Monotone Submodular Maximization [20.13836086815112]
Submodular maximization has found extensive applications in various domains within the field of artificial intelligence.
One measure of the parallelizability of a submodular maximization algorithm is its adaptive complexity, which indicates the number of sequential rounds in which a polynomial number of queries to the objective function can be executed in parallel.
We propose the first algorithm with both a provable approximation ratio and sublinear adaptive complexity for the problem of non-monotone submodular maximization subject to a $k$-system constraint.
arXiv Detail & Related papers (2023-08-21T11:48:34Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - A Fully Single Loop Algorithm for Bilevel Optimization without Hessian
Inverse [121.54116938140754]
We propose a new Hessian-inverse-free fully single loop algorithm for bilevel optimization problems.
We show that our algorithm converges at a rate of $O(\epsilon^{-2})$.
arXiv Detail & Related papers (2021-12-09T02:27:52Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Linear Bandit Algorithms with Sublinear Time Complexity [67.21046514005029]
We propose to accelerate existing linear bandit algorithms to achieve per-step time complexity sublinear in the number of arms $K$.
We show that our proposed algorithms can achieve $O(K^{1-\alpha(T)})$ per-step complexity for some $\alpha(T) > 0$ and $\widetilde{O}(\sqrt{T})$ regret, where $T$ is the time horizon.
arXiv Detail & Related papers (2021-03-03T22:42:15Z) - Submodular Maximization subject to a Knapsack Constraint: Combinatorial
Algorithms with Near-optimal Adaptive Complexity [13.416309759182635]
We obtain the first constant-factor approximation for non-monotone submodular maximization with near-optimal $O(\log n)$ adaptive complexity.
Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead.
This is also the first approach with sublinear adaptive complexity for the problem, and it yields results comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives.
arXiv Detail & Related papers (2021-02-16T18:15:51Z) - Adaptive Sampling for Fast Constrained Maximization of Submodular
Function [8.619758302080891]
We develop an algorithm with poly-logarithmic adaptivity for non-monotone submodular maximization under general side constraints.
Our algorithm is suitable for maximizing a non-monotone submodular function under a $p$-system side constraint.
arXiv Detail & Related papers (2021-02-12T12:38:03Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Practical and Parallelizable Algorithms for Non-Monotone Submodular
Maximization with Size Constraint [20.104148319012854]
We present combinatorial and parallelizable algorithms for maximization of a submodular function, not necessarily monotone, with respect to a size constraint.
We improve the best approximation factor achieved by an algorithm with optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z) - Refined bounds for algorithm configuration: The knife-edge of dual class
approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z)