Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint
- URL: http://arxiv.org/abs/2008.05391v2
- Date: Wed, 13 Jan 2021 15:53:47 GMT
- Title: Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint
- Authors: Jing Tang, Xueyan Tang, Andrew Lim, Kai Han, Chongshou Li, Junsong
Yuan
- Abstract summary: We show that a modified greedy algorithm can achieve an approximation factor of $0.405$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
- Score: 75.85952446237599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monotone submodular maximization with a knapsack constraint is NP-hard.
Various approximation algorithms have been devised to address this optimization
problem. In this paper, we revisit the widely known modified greedy algorithm.
First, we show that this algorithm can achieve an approximation factor of
$0.405$, which significantly improves the known factors of $0.357$ given by
Wolsey and $(1-1/\mathrm{e})/2\approx 0.316$ given by Khuller et al. More
importantly, our analysis closes a gap in Khuller et al.'s proof for the
extensively mentioned approximation factor of $(1-1/\sqrt{\mathrm{e}})\approx
0.393$ in the literature to clarify a long-standing misconception on this
issue. Second, we enhance the modified greedy algorithm to derive a
data-dependent upper bound on the optimum. We empirically demonstrate the
tightness of our upper bound with a real-world application. The bound enables
us to obtain a data-dependent ratio typically much higher than $0.405$ between
the solution value of the modified greedy algorithm and the optimum. It can
also be used to significantly improve the efficiency of algorithms such as
branch and bound.
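The modified greedy algorithm itself is simple: grow a solution by cost-effectiveness (marginal gain per unit cost), skipping elements that would exceed the budget, and return the better of the greedy set and the best feasible singleton. The sketch below illustrates this standard form under assumed interfaces (`f`, `cost`, `budget`); it is not the authors' implementation and omits the paper's data-dependent upper-bound enhancement.

```python
# Minimal sketch of the modified greedy algorithm for monotone submodular
# maximization under a knapsack constraint. The interfaces f (set -> value,
# monotone submodular), cost (element -> positive cost), and budget are
# illustrative assumptions, not the paper's actual code.

def modified_greedy(elements, f, cost, budget):
    selected, spent = set(), 0.0
    remaining = set(elements)
    while True:
        best_e, best_density = None, 0.0
        for e in remaining:
            if spent + cost(e) > budget:
                continue  # element no longer fits within the budget
            gain = f(selected | {e}) - f(selected)  # marginal gain
            density = gain / cost(e)                # gain per unit cost
            if density > best_density:
                best_e, best_density = e, density
        if best_e is None:
            break  # no feasible element with positive marginal density
        selected.add(best_e)
        spent += cost(best_e)
        remaining.discard(best_e)
    # Safeguard step: compare with the best single feasible element.
    feasible_singletons = [e for e in elements if cost(e) <= budget]
    best_single = max(feasible_singletons, key=lambda e: f({e}), default=None)
    if best_single is not None and f({best_single}) > f(selected):
        return {best_single}
    return selected
```

The paper's data-dependent ratio is then obtained by comparing the value of the returned set against its upper bound on the optimum, which is what typically certifies a ratio well above $0.405$ on real instances.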
Related papers
- Discretely Beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization [13.86054078646307]
For constrained, not necessarily monotone submodular maximization, all known approximation algorithms with ratio greater than $1/e$ require continuous ideas.
For combinatorial algorithms, the best known approximation ratios for both size and matroid constraints are obtained by a simple randomized greedy algorithm.
arXiv Detail & Related papers (2024-05-08T16:39:59Z) - Robust Approximation Algorithms for Non-monotone $k$-Submodular
Maximization under a Knapsack Constraint [0.0]
Two deterministic approximation algorithms are presented for the problem of non-monotone $k$-submodular maximization under a knapsack constraint.
Our algorithms provide constant approximation ratios within only $O(nk)$ query complexity for the non-monotone objective.
arXiv Detail & Related papers (2023-09-21T12:42:52Z) - Fast algorithms for k-submodular maximization subject to a matroid
constraint [10.270420338235237]
We apply a Threshold-Decreasing Algorithm to maximize $k$-submodular functions under a matroid constraint.
We give a $(\frac{1}{2} - \epsilon)$-approximation algorithm for $k$-submodular function maximization.
arXiv Detail & Related papers (2023-07-26T07:08:03Z) - Tight Bounds for $\gamma$-Regret via the Decision-Estimation Coefficient [88.86699022151598]
We give a statistical characterization of the $\gamma$-regret for arbitrary structured bandit problems.
The $\gamma$-regret emerges in structured bandit problems over a function class $\mathcal{F}$ where finding an exact optimum of $f \in \mathcal{F}$ is intractable.
arXiv Detail & Related papers (2023-03-06T17:54:33Z) - Provably Efficient Reinforcement Learning via Surprise Bound [66.15308700413814]
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov decision processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, expressed as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Practical and Parallelizable Algorithms for Non-Monotone Submodular
Maximization with Size Constraint [20.104148319012854]
We present practical and parallelizable algorithms for maximizing a submodular function, not necessarily monotone, with respect to a size constraint.
We improve the best approximation factor achieved by an algorithm with optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we develop a linear-time algorithm for the well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n\log\frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.