Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular
Maximization in Linear Time
- URL: http://arxiv.org/abs/2008.05004v4
- Date: Tue, 15 Dec 2020 18:41:15 GMT
- Title: Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular
Maximization in Linear Time
- Authors: Shaojie Tang
- Abstract summary: We study the non-monotone adaptive submodular problem subject to a cardinality constraint.
We show that the adaptive random greedy algorithm achieves a $1/e$ approximation ratio under adaptive submodularity.
We also propose a linear-time algorithm that achieves a $1/e-\epsilon$ approximation ratio ($1-1/e-\epsilon$ in the monotone case) with only $O(n\epsilon^{-2}\log \epsilon^{-1})$ value oracle queries.
- Score: 17.19443570570189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the non-monotone adaptive submodular maximization
problem subject to a cardinality constraint. We first revisit the adaptive
random greedy algorithm proposed in \citep{gotovos2015non}, where they show
that this algorithm achieves a $1/e$ approximation ratio if the objective
function is adaptive submodular and pointwise submodular. It is not clear
whether the same guarantee holds under adaptive submodularity (without
resorting to pointwise submodularity) or not. Our first contribution is to show
that the adaptive random greedy algorithm achieves a $1/e$ approximation ratio
under adaptive submodularity. One limitation of the adaptive random greedy
algorithm is that it requires $O(n\times k)$ value oracle queries, where $n$ is
the size of the ground set and $k$ is the cardinality constraint. Our second
contribution is to develop the first linear-time algorithm for the non-monotone
adaptive submodular maximization problem. Our algorithm achieves a
$1/e-\epsilon$ approximation ratio (this bound is improved to $1-1/e-\epsilon$
in the monotone case), using only $O(n\epsilon^{-2}\log \epsilon^{-1})$ value
oracle queries. Notably, $O(n\epsilon^{-2}\log \epsilon^{-1})$ is independent
of the cardinality constraint. For the monotone case, we propose a faster
algorithm that achieves a $1-1/e-\epsilon$ approximation ratio in expectation
with $O(n \log \frac{1}{\epsilon})$ value oracle queries. We also generalize
our study by considering a partition matroid constraint, and develop a
linear-time algorithm for monotone and fully adaptive submodular functions.
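For intuition, the non-adaptive analogue of the adaptive random greedy algorithm revisited above (the random greedy of Buchbinder et al.) can be sketched as follows. This is an illustrative sketch only, not the paper's adaptive implementation: the value oracle `f`, the coverage function used below, and the dummy-element padding are assumptions for the example.

```python
import random

def random_greedy(ground_set, f, k):
    """Random greedy for non-monotone submodular maximization under a
    cardinality constraint: in each of k rounds, pick uniformly at random
    one of the (at most) k elements with the largest positive marginal
    gain, padding the candidate list with zero-gain dummy elements."""
    selected = []
    remaining = list(ground_set)
    for _ in range(k):
        # Marginal gain of each remaining element w.r.t. the current set.
        gains = {e: f(selected + [e]) - f(selected) for e in remaining}
        positive = [e for e in remaining if gains[e] > 0]
        top = sorted(positive, key=lambda e: gains[e], reverse=True)[:k]
        top += [None] * (k - len(top))  # dummy elements with zero gain
        e = random.choice(top)
        if e is not None:               # choosing a dummy adds nothing
            selected.append(e)
            remaining.remove(e)
    return selected
```

Each round evaluates the oracle on every remaining element, which is what yields the $O(n\times k)$ query count the paper improves upon.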
Related papers
- Fair Submodular Cover [18.37610521373708]
We present the study of Fair Submodular Cover (FSC): given a ground set $U$, a monotone submodular function $f:2^U\to\mathbb{R}_{\ge 0}$, and a threshold $\tau$, the goal is to find a minimum-cost fair subset $S$ with $f(S)\ge\tau$.
We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$.
We then present a continuous algorithm that achieves a matching $(\frac{1}{\epsilon}, 1-O(\epsilon))$-bicriteria approximation.
arXiv Detail & Related papers (2024-07-05T18:37:09Z) - Fast algorithms for k-submodular maximization subject to a matroid
constraint [10.270420338235237]
We apply a Threshold-Decreasing Algorithm to maximize $k$-submodular functions under a matroid constraint.
We give a $(\frac{1}{2} - \epsilon)$-approximation algorithm for $k$-submodular functions.
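For context, the threshold-decreasing idea can be illustrated by the basic single-set decreasing-threshold greedy (Badanidiyuru and Vondrák) for monotone submodular maximization under a cardinality constraint. This is a simplified sketch of the general technique, not the paper's $k$-submodular matroid variant; the oracle `f` and the parameter names are illustrative.

```python
def threshold_greedy(ground_set, f, k, eps=0.1):
    """Decreasing-threshold greedy: add every element whose marginal gain
    clears the current threshold, then lower the threshold geometrically.
    Saves oracle calls versus recomputing the exact best element each round."""
    selected = []
    d = max(f([e]) for e in ground_set)      # largest singleton value
    tau = d
    while tau > (eps / len(ground_set)) * d and len(selected) < k:
        for e in ground_set:
            if e in selected or len(selected) >= k:
                continue
            if f(selected + [e]) - f(selected) >= tau:
                selected.append(e)           # gain clears the threshold
        tau *= (1 - eps)                     # geometrically lower threshold
    return selected
```

Because the threshold drops by a $(1-\epsilon)$ factor per pass, the number of passes is logarithmic in $1/\epsilon$ rather than linear in $k$.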
arXiv Detail & Related papers (2023-07-26T07:08:03Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Partial-Monotone Adaptive Submodular Maximization [19.29174615532181]
We show that a sampling-based policy achieves an approximation ratio of $(m+1)/10$ when the utility function is $m$-adaptive monotone and adaptive submodular.
This leads to improved bounds for many machine learning applications whose utility functions are almost adaptive monotone.
arXiv Detail & Related papers (2022-07-26T12:16:12Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a convex body, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function.
This class of functions is an extension of both concave and continuous DR-submodular functions, for which no guarantee was previously known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - The Power of Subsampling in Submodular Maximization [51.629656762796564]
We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods.
We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
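A minimal sketch of the subsampling idea, assuming the approach amounts to running the standard greedy on a random subsample of the ground set; the function names, the sampling probability `p`, and the oracle `f` are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def sample_greedy(ground_set, f, k, p=0.5):
    """Subsampled greedy: keep each element independently with probability
    p, then run the vanilla greedy on the (much smaller) sample. Reduces
    oracle calls at a bounded cost in approximation quality."""
    sample = [e for e in ground_set if random.random() < p]
    selected = []
    for _ in range(k):
        candidates = [e for e in sample if e not in selected]
        if not candidates:
            break
        # Greedy step restricted to the subsample.
        best = max(candidates, key=lambda e: f(selected + [e]) - f(selected))
        if f(selected + [best]) - f(selected) <= 0:
            break                            # stop on non-positive gain
        selected.append(best)
    return selected
```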
arXiv Detail & Related papers (2021-04-06T20:25:57Z) - Practical and Parallelizable Algorithms for Non-Monotone Submodular
Maximization with Size Constraint [20.104148319012854]
We present practical and parallelizable algorithms for maximizing a submodular function, not necessarily monotone, with respect to a size constraint.
We improve the best approximation factor achieved by an algorithm that has optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we develop a linear-time algorithm for the well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n\log\frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z) - Continuous Submodular Maximization: Beyond DR-Submodularity [48.04323002262095]
We first analyze a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+.
We then propose Coordinate-Ascent++, which achieves a tight $(1-1/e-\varepsilon)$-approximation guarantee while performing the same number of iterations.
The computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\varepsilon}+n\log n)$.
arXiv Detail & Related papers (2020-06-21T06:57:59Z) - Regularized Submodular Maximization at Scale [45.914693923126826]
Submodularity is inherently related to the notions of diversity, coverage, and representativeness.
We propose methods for maximizing a regularized submodular function $f = g - \ell$, expressed as the difference between a nonnegative monotone submodular function $g$ and a modular function $\ell$.
arXiv Detail & Related papers (2020-02-10T02:37:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.