The Power of Subsampling in Submodular Maximization
- URL: http://arxiv.org/abs/2104.02772v1
- Date: Tue, 6 Apr 2021 20:25:57 GMT
- Title: The Power of Subsampling in Submodular Maximization
- Authors: Christopher Harshaw, Ehsan Kazemi, Moran Feldman, Amin Karbasi
- Abstract summary: We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods.
We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
- Score: 51.629656762796564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose subsampling as a unified algorithmic technique for submodular
maximization in centralized and online settings. The idea is simple:
independently sample elements from the ground set, and use simple combinatorial
techniques (such as greedy or local search) on these sampled elements. We show
that this approach leads to optimal/state-of-the-art results despite being much
simpler than existing methods. In the usual offline setting, we present
SampleGreedy, which obtains a $(p + 2 + o(1))$-approximation for maximizing a
submodular function subject to a $p$-extendible system using $O(n + nk/p)$
evaluation and feasibility queries, where $k$ is the size of the largest
feasible set. The approximation ratio improves to $p+1$ and $p$ for monotone
submodular and linear objectives, respectively. In the streaming setting, we
present SampleStreaming, which obtains a $(4p +2 - o(1))$-approximation for
maximizing a submodular function subject to a $p$-matchoid using $O(k)$ memory
and $O(km/p)$ evaluation and feasibility queries per element, where $m$ is the
number of matroids defining the $p$-matchoid. The approximation ratio improves
to $4p$ for monotone submodular objectives. We empirically demonstrate the
effectiveness of our algorithms on video summarization, location summarization,
and movie recommendation tasks.
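The core idea described in the abstract (independently subsample the ground set, then run plain greedy on the sample) can be sketched in a few lines. The sketch below is illustrative only: the sampling probability `1/(p+1)`, the function names, and the `is_feasible` interface are assumptions for exposition, not the paper's exact pseudocode (the paper tunes the sampling rate to the $p$-extendible system), and the usage example uses a simple cardinality constraint rather than a general $p$-extendible system.

```python
import random

def sample_greedy(ground_set, f, is_feasible, p, seed=0):
    """Sketch of the subsampling idea: keep each element independently
    with some probability depending on p, then run ordinary greedy on
    the sampled elements. `f` is a set function and `is_feasible`
    encodes the constraint; both interfaces are assumptions."""
    rng = random.Random(seed)
    # Step 1: independently keep each element with probability 1/(p+1)
    # (an illustrative choice; the paper optimizes this rate).
    sample = [e for e in ground_set if rng.random() < 1.0 / (p + 1)]
    # Step 2: plain greedy on the sampled elements only.
    solution = set()
    while True:
        best_gain, best_e = 0.0, None
        for e in sample:
            if e in solution or not is_feasible(solution | {e}):
                continue
            gain = f(solution | {e}) - f(solution)
            if gain > best_gain:
                best_gain, best_e = gain, e
        if best_e is None:
            break
        solution.add(best_e)
    return solution
```

Because greedy only ever touches the sampled elements, the number of evaluation and feasibility queries drops by roughly the sampling rate, which is where the $O(n + nk/p)$ query bound in the abstract comes from.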
Related papers
- Fair Submodular Cover [18.37610521373708]
We present the study of Fair Submodular Cover (FSC), defined over a ground set $U$, a monotone submodular function $f:2^U \to \mathbb{R}_{\ge 0}$, and a threshold $\tau$.
We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$.
We then present a continuous algorithm that achieves a $(\frac{1}{\epsilon}, 1-O(\epsilon))$-bicriteria approximation ratio.
arXiv Detail & Related papers (2024-07-05T18:37:09Z) - Linear Submodular Maximization with Bandit Feedback [7.926559636438099]
We consider developing approximation algorithms for a submodular objective function $f:2^U \to \mathbb{R}_{\geq 0}$, where $f=\sum_{i=1}^{d} w_i F_i$.
It is assumed that we have value oracle access to the functions $F_i$, but the coefficients $w_i$ are unknown, and $f$ can only be accessed via noisy queries.
We show that our algorithms make vast improvements in terms of sample efficiency compared to algorithms that do not exploit the linear structure of $f$.
arXiv Detail & Related papers (2024-07-02T18:40:52Z) - Reward-Mixing MDPs with a Few Latent Contexts are Learnable [75.17357040707347]
We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs)
Our goal is to learn a near-optimal policy that nearly maximizes the $H$ time-step cumulative rewards in such a model.
arXiv Detail & Related papers (2022-10-05T22:52:00Z) - Minimum Cost Adaptive Submodular Cover [4.680686256929023]
We consider the problem of minimum cost cover of adaptive-submodular functions.
We show that our algorithm simultaneously achieves a $(p+1)^{p+1}\cdot(\ln Q+1)^p$ approximation guarantee for all $p\ge 1$.
arXiv Detail & Related papers (2022-08-17T15:26:47Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing smooth functions of the form $F(x) = G(x) + C(x)$ over a convex body.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - Improved Sample Complexity for Incremental Autonomous Exploration in MDPs [132.88757893161699]
We learn the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps.
DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem.
arXiv Detail & Related papers (2020-12-29T14:06:09Z) - Regularized Submodular Maximization at Scale [45.914693923126826]
Submodularity is inherently related to the notions of diversity, coverage, and representativeness.
We propose methods for maximizing a regularized submodular function $f = g - \ell$ expressed as the difference between a submodular function $g$ and a modular function $\ell$.
arXiv Detail & Related papers (2020-02-10T02:37:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.