Partial-Monotone Adaptive Submodular Maximization
- URL: http://arxiv.org/abs/2207.12840v1
- Date: Tue, 26 Jul 2022 12:16:12 GMT
- Title: Partial-Monotone Adaptive Submodular Maximization
- Authors: Shaojie Tang, Jing Yuan
- Abstract summary: We show that a sampling-based policy achieves an approximation ratio of $(m+1)/10$ if the utility function is $m$-adaptive monotone and adaptive submodular.
This leads to improved bounds for many machine learning applications whose utility functions are almost adaptive monotone.
- Score: 19.29174615532181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many sequential decision making problems, including pool-based active
learning and adaptive viral marketing, can be formulated as an adaptive
submodular maximization problem. Most existing studies on adaptive
submodular optimization focus on either the monotone case or the non-monotone case.
Specifically, if the utility function is monotone and adaptive submodular,
\cite{golovin2011adaptive} developed a greedy policy that achieves a $(1-1/e)$
approximation ratio subject to a cardinality constraint. If the utility
function is non-monotone and adaptive submodular, \cite{tang2021beyond} showed
that a random greedy policy achieves a $1/e$ approximation ratio subject to a
cardinality constraint. In this work, we aim to generalize the above-mentioned
results by studying the partial-monotone adaptive submodular maximization
problem. To this end, we introduce the notion of the adaptive monotonicity ratio
$m\in[0,1]$ to measure the degree of monotonicity of a function. Our main
result is to show that a random greedy policy achieves an approximation ratio
of $m(1-1/e)+(1-m)(1/e)$ if the utility function is $m$-adaptive monotone and
adaptive submodular. Notably this result recovers the aforementioned $(1-1/e)$
and $1/e$ approximation ratios when $m = 1$ and $m = 0$, respectively. We
further extend our results to consider a knapsack constraint. We show that a
sampling-based policy achieves an approximation ratio of $(m+1)/10$ if the
utility function is $m$-adaptive monotone and adaptive submodular. One
important implication of our results is that even for a non-monotone utility
function, we still can achieve an approximation ratio close to $(1-1/e)$ if
this function is ``close'' to a monotone function. This leads to improved
performance bounds for many machine learning applications whose utility
functions are almost adaptive monotone.
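The interpolated guarantee above is easy to sanity-check numerically. The sketch below (function names are illustrative, not from the paper) evaluates the cardinality-constraint bound $m(1-1/e)+(1-m)(1/e)$ and the knapsack bound $(m+1)/10$, confirming that the endpoints $m=1$ and $m=0$ recover the classical $(1-1/e)$ and $1/e$ ratios.

```python
import math

def cardinality_bound(m: float) -> float:
    """Approximation ratio m*(1 - 1/e) + (1 - m)*(1/e) achieved by the
    random greedy policy for an m-adaptive monotone, adaptive submodular
    utility function under a cardinality constraint."""
    assert 0.0 <= m <= 1.0
    return m * (1 - 1 / math.e) + (1 - m) * (1 / math.e)

def knapsack_bound(m: float) -> float:
    """Approximation ratio (m + 1)/10 achieved by the sampling-based
    policy under a knapsack constraint."""
    assert 0.0 <= m <= 1.0
    return (m + 1) / 10

# Endpoints recover the classical ratios:
print(cardinality_bound(1.0))  # 1 - 1/e ~ 0.632 (monotone case)
print(cardinality_bound(0.0))  # 1/e ~ 0.368 (non-monotone case)
```

Note that the cardinality bound increases linearly in $m$, so a utility function that is "close" to adaptive monotone (large $m$) already enjoys a ratio close to $(1-1/e)$.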
Related papers
- Fair Submodular Cover [18.37610521373708]
We present the study of Fair Submodular Cover (FSC): given a ground set $U$, a monotone submodular function $f:2^U\to\mathbb{R}_{\ge 0}$, and a threshold $\tau$.
We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$.
We then present a continuous algorithm that achieves a $(\frac{1}{\epsilon}, 1-O(\epsilon))$-approximation.
arXiv Detail & Related papers (2024-07-05T18:37:09Z) - Randomized Greedy Learning for Non-monotone Stochastic Submodular
Maximization Under Full-bandit Feedback [98.29086113546045]
We investigate the problem of unconstrained multi-armed bandits with full-bandit feedback and stochastic submodular rewards.
We show that RGL (randomized greedy learning) empirically outperforms other full-bandit variants in submodular and non-submodular settings.
arXiv Detail & Related papers (2023-02-02T18:52:14Z) - Extra-Newton: A First Approach to Noise-Adaptive Accelerated
Second-Order Methods [57.050204432302195]
This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions.
Our algorithm achieves $O(\sigma/\sqrt{T})$ convergence when the oracle feedback has variance $\sigma^2$, and improves its convergence to $O(1/T^3)$ with deterministic oracles.
arXiv Detail & Related papers (2022-11-03T14:12:51Z) - Minimum Cost Adaptive Submodular Cover [4.680686256929023]
We consider the problem of minimum cost cover of adaptive-submodular functions.
We show that our algorithm simultaneously achieves a $(p+1)^{p+1}\cdot(\ln Q+1)^p$ approximation guarantee for all $p\ge 1$.
arXiv Detail & Related papers (2022-08-17T15:26:47Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing the sum $F(x) = G(x) + C(x)$ of a smooth DR-submodular function $G$ and a smooth concave function $C$ over a convex body.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - The Power of Subsampling in Submodular Maximization [51.629656762796564]
We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods.
We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
arXiv Detail & Related papers (2021-04-06T20:25:57Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular
Maximization in Linear Time [17.19443570570189]
We study the non-monotone adaptive submodular problem subject to a cardinality constraint.
We show that the adaptive random greedy algorithm achieves a $1/e$ approximation ratio under adaptive submodularity.
We propose a faster algorithm that achieves a $1-1/e-\epsilon$ approximation ratio in expectation with $O(n\epsilon^{-2}\log \epsilon^{-1})$ value oracle queries.
arXiv Detail & Related papers (2020-08-11T21:06:52Z) - Regularized Submodular Maximization at Scale [45.914693923126826]
Submodularity is inherently related to the notions of diversity, coverage, and representativeness.
We propose methods for maximizing a regularized submodular function $f = g - \ell$ expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$.
arXiv Detail & Related papers (2020-02-10T02:37:18Z)