Fast algorithms for k-submodular maximization subject to a matroid
constraint
- URL: http://arxiv.org/abs/2307.13996v1
- Date: Wed, 26 Jul 2023 07:08:03 GMT
- Title: Fast algorithms for k-submodular maximization subject to a matroid
constraint
- Authors: Shuxian Niu and Qian Liu and Yang Zhou and Min Li
- Abstract summary: We apply a Threshold-Decreasing Algorithm to maximize $k$-submodular functions under a matroid constraint.
We give a $(\frac{1}{2} - \epsilon)$-approximation algorithm for monotone $k$-submodular function maximization.
- Score: 10.270420338235237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we apply a Threshold-Decreasing Algorithm to maximize
$k$-submodular functions under a matroid constraint, which reduces the query
complexity of the algorithm compared to the greedy algorithm with little loss
in approximation ratio. We give a $(\frac{1}{2} - \epsilon)$-approximation
algorithm for monotone $k$-submodular function maximization, and a
$(\frac{1}{3} - \epsilon)$-approximation algorithm for non-monotone case, with
complexity $O(\frac{n(k\cdot EO + IO)}{\epsilon} \log \frac{r}{\epsilon})$,
where $r$ denotes the rank of the matroid, and $IO, EO$ denote the number of
oracles to evaluate whether a subset is an independent set and to compute the
function value of $f$, respectively. Since the total size constraint can be
viewed as a special matroid, called the uniform matroid, we present fast
algorithms for maximizing $k$-submodular functions subject to a total size
constraint as corollaries.
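To make the approach concrete, below is a minimal Python sketch of a threshold-decreasing greedy for the monotone case. It is not the paper's pseudocode: the oracle names (`f`, `is_independent`), the geometric $(1-\epsilon)$ threshold schedule, and the $(\epsilon/r) d$ stopping rule are assumptions chosen to mirror the stated $O(\frac{n(k\cdot EO + IO)}{\epsilon} \log \frac{r}{\epsilon})$ query complexity.

```python
# A minimal sketch (not the paper's pseudocode) of a threshold-decreasing
# greedy for monotone k-submodular maximization under a matroid constraint.
# Oracle names and the threshold schedule are illustrative assumptions.

def threshold_greedy(V, k, f, is_independent, rank, eps):
    """V: ground set (a set); f: value oracle taking a dict {element: part}
    with parts 1..k; is_independent: matroid oracle on element sets;
    rank: matroid rank r; eps in (0, 1)."""
    x = {}                                    # current partial assignment
    base = f({})
    # largest single-assignment gain; assumed positive for nontrivial f
    d = max(f({e: i}) - base for e in V for i in range(1, k + 1))
    tau = d
    while tau >= (eps / rank) * d:            # ~ (1/eps) log(r/eps) rounds
        for e in V:
            if e in x or not is_independent(set(x) | {e}):
                continue
            # best part for e given the current solution (k EO queries)
            gain, part = max((f({**x, e: i}) - f(x), i)
                             for i in range(1, k + 1))
            if gain >= tau:                   # accept only above-threshold gains
                x[e] = part
        tau *= 1 - eps                        # geometrically lower the threshold
    return x
```

Under the uniform matroid, `is_independent = lambda S: len(S) <= B` with `rank = B`, this sketch specializes to the total size constraint handled in the corollaries.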
Related papers
- Discretely Beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization [13.86054078646307]
For constrained maximization of not necessarily monotone submodular functions, all known approximation algorithms with ratio greater than $1/e$ require continuous ideas.
For combinatorial algorithms, the best known approximation ratios for both size and matroid constraints are obtained by a simple randomized greedy algorithm (a sketch of this algorithm follows after this list).
arXiv Detail & Related papers (2024-05-08T16:39:59Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Dynamic Algorithms for Matroid Submodular Maximization [11.354502646593607]
Submodular maximization under matroid and cardinality constraints are problems with a wide range of applications in machine learning, auction theory, and optimization.
In this paper, we consider these problems in the dynamic setting, where we have access to a monotone submodular function $f: 2^V \rightarrow \mathbb{R}^+$ and we are given a sequence $\mathcal{S}$ of insertions and deletions of elements of an underlying ground set $V$.
We develop the first fully dynamic algorithm for the submodular maximization problem under a matroid constraint.
arXiv Detail & Related papers (2023-06-01T17:54:15Z) - Linear Query Approximation Algorithms for Non-monotone Submodular
Maximization under Knapsack Constraint [16.02833173359407]
This work introduces two constant-factor approximation algorithms with linear query complexity for non-monotone submodular maximization over a ground set of size $n$ subject to a knapsack constraint.
$\mathsf{DLA}$ is a deterministic algorithm that provides an approximation factor of $6+\epsilon$, while $\mathsf{RLA}$ is a randomized algorithm with an approximation factor of $4+\epsilon$.
arXiv Detail & Related papers (2023-05-17T15:27:33Z) - Deterministic Nonsmooth Nonconvex Optimization [94.01526844386977]
We show that randomization is necessary to obtain a dimension-free algorithm.
Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Practical and Parallelizable Algorithms for Non-Monotone Submodular
Maximization with Size Constraint [20.104148319012854]
We present practical and parallelizable algorithms for maximizing a submodular function, not necessarily monotone, with respect to a size constraint.
We improve the best approximation factor achieved by an algorithm that has optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we consider the well-studied problem of adaptive submodular maximization subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n \log \frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Continuous Submodular Maximization: Beyond DR-Submodularity [48.04323002262095]
We first propose a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+.
We then propose Coordinate-Ascent++, which achieves a tight $(1-1/e-\varepsilon)$-approximation guarantee while performing the same number of iterations.
The computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\varepsilon}+n\log n)$.
arXiv Detail & Related papers (2020-06-21T06:57:59Z)
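As a companion to the "Discretely Beyond $1/e$" entry above, here is a minimal sketch of the simple randomized greedy it refers to, specialized to a size constraint. The oracle name `f`, the positive-gain guard, and all identifiers are illustrative assumptions rather than that paper's pseudocode.

```python
import random

# Minimal sketch of the simple randomized greedy for (possibly non-monotone)
# submodular maximization under a size constraint; identifiers are illustrative.

def random_greedy(V, f, r):
    """V: ground set (a set); f: set-value oracle; r: size budget."""
    S = set()
    for _ in range(r):
        remaining = V - S
        if not remaining:
            break
        # the r remaining elements with the largest marginal gains f(S+e)-f(S)
        top = sorted(remaining,
                     key=lambda e: f(S | {e}) - f(S), reverse=True)[:r]
        u = random.choice(top)                # uniformly random pick among top-r
        if f(S | {u}) - f(S) > 0:             # skip harmful additions (a common
            S.add(u)                          # practical variant of the method)
    return S
```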
This list is automatically generated from the titles and abstracts of the papers on this site.