Simultaneous Sieves: A Deterministic Streaming Algorithm for
Non-Monotone Submodular Maximization
- URL: http://arxiv.org/abs/2010.14367v3
- Date: Mon, 2 Nov 2020 15:35:57 GMT
- Title: Simultaneous Sieves: A Deterministic Streaming Algorithm for
Non-Monotone Submodular Maximization
- Authors: Alan Kuhnle
- Abstract summary: We present a deterministic, single-pass streaming algorithm for the problem of maximizing a submodular function, not necessarily monotone, with respect to a cardinality constraint.
For a deterministic, single-pass streaming algorithm, ours improves the best polynomial-time approximation factor from $1/9$ in previous literature to $\approx 0.2689$.
- Score: 16.346069386394703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present a combinatorial, deterministic single-pass streaming
algorithm for the problem of maximizing a submodular function, not necessarily
monotone, with respect to a cardinality constraint (SMCC). In the case the
function is monotone, our algorithm reduces to the optimal streaming algorithm
of Badanidiyuru et al. (2014). In general, our algorithm achieves ratio $\alpha
/ (1 + \alpha) - \varepsilon$, for any $\varepsilon > 0$, where $\alpha$ is the
ratio of an offline (deterministic) algorithm for SMCC used for
post-processing. Thus, if exponential computation time is allowed, our
algorithm deterministically achieves nearly the optimal $1/2$ ratio. These
results nearly match those of a recently proposed, randomized streaming
algorithm that achieves the same ratios in expectation. For a deterministic,
single-pass streaming algorithm, our algorithm achieves in polynomial time an
improvement of the best approximation factor from $1/9$ in previous literature
to $\approx 0.2689$.
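As a sanity check on the stated factor: plugging a polynomial-time deterministic offline ratio of $\alpha = 1/e$ into $\alpha / (1 + \alpha)$ gives $1/(e+1) \approx 0.2689$, matching the claimed improvement over $1/9$.
For the monotone special case, the abstract states that the algorithm reduces to the streaming algorithm of Badanidiyuru et al. (2014). Below is a minimal Python sketch of that threshold-based sieve procedure, which obtains a $(1/2 - \varepsilon)$ ratio for monotone $f$. The function names and the coverage example are illustrative assumptions; the paper's non-monotone simultaneous-sieve machinery is not shown here.

```python
import math

def sieve_streaming(stream, f, k, eps=0.1):
    """Single-pass sieve streaming for monotone submodular f under |S| <= k.

    Sketch after Badanidiyuru et al. (2014): keep one partial solution per
    guess v of OPT on a geometric grid, and add an element to a sieve only
    if its marginal gain clears the threshold (v/2 - f(S)) / (k - |S|).
    """
    m = 0.0        # largest singleton value seen so far
    sieves = {}    # OPT guess v -> partial solution (a frozenset)

    for e in stream:
        m = max(m, f(frozenset([e])))
        if m <= 0:
            continue  # no informative singleton yet
        # Plausible guesses for OPT lie in [m, 2*k*m]; grid them by (1+eps).
        lo = math.ceil(math.log(m, 1 + eps))
        hi = math.floor(math.log(2 * k * m, 1 + eps))
        live = {(1 + eps) ** i for i in range(lo, hi + 1)}
        # Discard sieves whose OPT guess is no longer plausible.
        sieves = {v: S for v, S in sieves.items() if v in live}
        for v in live:
            S = sieves.setdefault(v, frozenset())
            if len(S) < k and f(S | {e}) - f(S) >= (v / 2 - f(S)) / (k - len(S)):
                sieves[v] = S | {e}

    return max(sieves.values(), key=f, default=frozenset())

# Toy usage: maximum coverage, a monotone submodular objective.
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3, 4}}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
print(sieve_streaming(iter(cover), f, k=2))  # picks a set covering all 4 items
```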
Related papers
- Discretely Beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization [13.86054078646307]
For constrained maximization of a submodular function that is not necessarily monotone, all known approximation algorithms with ratio greater than $1/e$ require continuous ideas.
For combinatorial algorithms, the best known approximation ratios for both size and matroid constraints are obtained by a simple randomized greedy algorithm.
arXiv Detail & Related papers (2024-05-08T16:39:59Z) - Fast algorithms for k-submodular maximization subject to a matroid
constraint [10.270420338235237]
We apply a Threshold-Decreasing Algorithm to maximize $k$-submodular functions under a matroid constraint.
We give a $(\frac{1}{2} - \epsilon)$-approximation algorithm for $k$-submodular function maximization.
arXiv Detail & Related papers (2023-07-26T07:08:03Z) - Deterministic Nonsmooth Nonconvex Optimization [94.01526844386977]
We show that randomization is necessary to obtain a dimension-free algorithm.
Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Private estimation algorithms for stochastic block models and mixture
models [63.07482515700984]
We provide general tools for designing efficient private estimation algorithms.
We give the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean
Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - Quick Streaming Algorithms for Maximization of Monotone Submodular
Functions in Linear Time [16.346069386394703]
We consider the problem of maximizing a monotone, submodular function over a ground set of size $n$ subject to a cardinality constraint $k$.
We introduce the first deterministic algorithms with linear time complexity; these algorithms are streaming algorithms.
Our single-pass algorithm obtains a constant ratio in $\lceil n / c \rceil + c$ oracle queries, for any $c \ge 1$.
arXiv Detail & Related papers (2020-09-10T16:35:54Z) - Practical and Parallelizable Algorithms for Non-Monotone Submodular
Maximization with Size Constraint [20.104148319012854]
We present practical and parallelizable algorithms for maximizing a submodular function, not necessarily monotone, with respect to a size constraint.
We improve the best approximation factor achieved by an algorithm with optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we consider the well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1 - 1/e - \epsilon}{4 - 2/e - 2\epsilon}$ approximation ratio using only $O(n \log \frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimension $d = \frac{1}{\lambda \epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda \epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)