Minimum Cost Adaptive Submodular Cover
- URL: http://arxiv.org/abs/2208.08351v2
- Date: Tue, 21 May 2024 20:53:18 GMT
- Title: Minimum Cost Adaptive Submodular Cover
- Authors: Hessa Al-Thani, Yubing Cui, Viswanath Nagarajan
- Abstract summary: We consider the problem of minimum cost cover of adaptive-submodular functions.
We show that our algorithm simultaneously achieves a $(p+1)^{p+1}\cdot (\ln Q+1)^p$ approximation guarantee for all $p\ge 1$.
- Score: 4.680686256929023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adaptive submodularity is a fundamental concept in stochastic optimization, with numerous applications such as sensor placement, hypothesis identification and viral marketing. We consider the problem of minimum cost cover of adaptive-submodular functions, and provide a $4(1+\ln Q)$-approximation algorithm, where $Q$ is the goal value. In fact, we consider a significantly more general objective of minimizing the $p^{th}$ moment of the coverage cost, and show that our algorithm simultaneously achieves a $(p+1)^{p+1}\cdot (\ln Q+1)^p$ approximation guarantee for all $p\ge 1$. All our approximation ratios are best possible up to constant factors (assuming $P\ne NP$). Moreover, our results also extend to the setting where one wants to cover {\em multiple} adaptive-submodular functions. Finally, we evaluate the empirical performance of our algorithm on instances of hypothesis identification.
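For orientation, the cover objective asks for a policy of minimum (expected) cost that raises the function value to the goal $Q$. Below is a minimal, non-adaptive sketch of the classical greedy rule for minimum cost submodular cover, which repeatedly adds the element with the largest marginal coverage per unit cost until $f(S)\ge Q$. This is background intuition only, not the paper's adaptive algorithm; the oracle `f`, the cost dictionary, and all names in the code are illustrative assumptions.

```python
def greedy_submodular_cover(ground_set, f, cost, Q):
    """Non-adaptive greedy sketch: pick by marginal-gain-per-cost until f(S) >= Q.

    f    : monotone submodular set function with f(ground_set) >= Q (assumed)
    cost : dict mapping each element to a positive cost
    Q    : goal (target) coverage value
    """
    selected = set()
    remaining = set(ground_set)
    while f(selected) < Q:
        # marginal gain per unit cost, truncated at the residual target Q - f(S)
        def density(e):
            gain = min(f(selected | {e}), Q) - f(selected)
            return gain / cost[e]
        best = max(remaining, key=density)
        if density(best) <= 0:
            raise ValueError("goal Q is unreachable with the remaining elements")
        selected.add(best)
        remaining.remove(best)
    return selected


# Toy usage: weighted set cover as a special case of submodular cover.
universe = {1, 2, 3, 4, 5}
sets = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}}
f = lambda S: len(set().union(*(sets[s] for s in S)) if S else set())
cost = {"a": 1.0, "b": 2.0, "c": 1.0}
print(greedy_submodular_cover(sets.keys(), f, cost, Q=len(universe)))
```

In the adaptive setting treated by the paper, each selected item reveals a random state and the policy conditions later choices on those observations; the sketch above corresponds only to the deterministic special case.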
Related papers
- Fair Submodular Cover [18.37610521373708]
We study Fair Submodular Cover (FSC), where we are given a ground set $U$, a monotone submodular function $f:2^U\to\mathbb{R}_{\ge 0}$, and a threshold $\tau$.
We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$.
We then present a continuous algorithm that achieves a $(\frac{1}{\epsilon}, 1-O(\epsilon))$ bicriteria approximation ratio.
arXiv Detail & Related papers (2024-07-05T18:37:09Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Partial-Monotone Adaptive Submodular Maximization [19.29174615532181]
We show that a sampling-based policy achieves an approximation ratio of $(m+1)/10$ when the utility function is $m$-adaptive monotone and adaptive submodular.
This leads to improved bounds for many machine learning applications whose utility functions are almost adaptive monotone.
arXiv Detail & Related papers (2022-07-26T12:16:12Z) - Submodular + Concave [53.208470310734825]
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a convex body, where $G$ is a smooth continuous DR-submodular function and $C$ is a smooth concave function.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - The Power of Subsampling in Submodular Maximization [51.629656762796564]
We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods.
We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
arXiv Detail & Related papers (2021-04-06T20:25:57Z) - Approximation Algorithms for Active Sequential Hypothesis Testing [3.840659854046464]
This paper provides the first approximation algorithms for active sequential hypothesis testing.
We numerically investigate the performance of our algorithms using both synthetic and real data, showing that our algorithms outperform a previously proposed policy.
arXiv Detail & Related papers (2021-03-07T03:49:19Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Improved Sample Complexity for Incremental Autonomous Exploration in
MDPs [132.88757893161699]
We learn the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps.
DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem.
arXiv Detail & Related papers (2020-12-29T14:06:09Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of algorithms such as branch-and-bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular
Maximization in Linear Time [17.19443570570189]
We study the non-monotone adaptive submodular maximization problem subject to a cardinality constraint.
We show that the adaptive random greedy algorithm achieves a $1/e$ approximation ratio under adaptive submodularity (a non-adaptive sketch of the underlying random greedy rule appears after this list).
We propose a faster algorithm that achieves a $1-1/e-\epsilon$ approximation ratio in expectation with $O(n\epsilon^{-2}\log \epsilon^{-1})$ value oracle queries.
arXiv Detail & Related papers (2020-08-11T21:06:52Z)
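For context on the last entry, the underlying (non-adaptive) random greedy rule for cardinality-constrained non-monotone submodular maximization works as follows: at each of $k$ steps, take the $k$ remaining elements with the largest marginal gains, pad with "do nothing" dummies when fewer than $k$ have positive gain, and add one choice uniformly at random. The sketch below shows only this non-adaptive rule under an assumed value oracle `f`; it is not the adaptive variant analyzed in that paper, and all names are illustrative.

```python
import random

def random_greedy(ground_set, f, k):
    """Non-adaptive random greedy sketch for max f(S) subject to |S| <= k."""
    selected = set()
    remaining = list(ground_set)
    for _ in range(k):
        gains = {e: f(selected | {e}) - f(selected) for e in remaining}
        # k best candidates by marginal gain; non-positive gains are dropped and
        # replaced by dummy "do nothing" options, as in the standard analysis
        ranked = sorted(remaining, key=lambda e: gains[e], reverse=True)[:k]
        candidates = [e for e in ranked if gains[e] > 0]
        candidates += [None] * (k - len(candidates))
        choice = random.choice(candidates)
        if choice is not None:
            selected.add(choice)
            remaining.remove(choice)
    return selected
```

Drawing uniformly from $k$ candidates, rather than always taking the single best one, is what underlies the $1/e$-in-expectation analysis for non-monotone objectives.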