A Unified Approach to Submodular Maximization Under Noise
- URL: http://arxiv.org/abs/2510.21128v1
- Date: Fri, 24 Oct 2025 03:31:25 GMT
- Title: A Unified Approach to Submodular Maximization Under Noise
- Authors: Kshipra Bhawalkar, Yang Cai, Zhe Feng, Christopher Liaw, Tao Lin,
- Abstract summary: We consider the problem of maximizing a submodular function with access to a noisy value oracle for the function. By using the meta-algorithm with the double greedy algorithm, we obtain a $1/2$-approximation for unconstrained (non-monotone) submodular maximization under noise.
- Score: 15.762704657089637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of maximizing a submodular function with access to a noisy value oracle for the function instead of an exact value oracle. Similar to prior work, we assume that the noisy oracle is persistent in that multiple calls to the oracle for a specific set always return the same value. In this model, Hassidim and Singer (2017) design a $(1-1/e)$-approximation algorithm for monotone submodular maximization subject to a cardinality constraint, and Huang et al. (2022) design a $(1-1/e)/2$-approximation algorithm for monotone submodular maximization subject to any arbitrary matroid constraint. In this paper, we design a meta-algorithm that allows us to take any "robust" algorithm for exact submodular maximization as a black box and transform it into an algorithm for the noisy setting while retaining the approximation guarantee. By using the meta-algorithm with the measured continuous greedy algorithm, we obtain a $(1-1/e)$-approximation (resp. $1/e$-approximation) for monotone (resp. non-monotone) submodular maximization subject to a matroid constraint under noise. Furthermore, by using the meta-algorithm with the double greedy algorithm, we obtain a $1/2$-approximation for unconstrained (non-monotone) submodular maximization under noise.
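In the exact-oracle setting, the double greedy subroutine named in the abstract is the randomized double greedy of Buchbinder, Feldman, Naor, and Schwartz. A minimal noiseless sketch of that subroutine (the function names and the cut-function example are illustrative, not taken from the paper):

```python
import random

def double_greedy(f, ground, seed=0):
    """Randomized double greedy for unconstrained (possibly non-monotone)
    submodular maximization; returns a set whose expected value is at
    least half the optimum. `f` maps a frozenset to a real; a persistent
    oracle (same set -> same value) is assumed, as in the noisy model."""
    rng = random.Random(seed)
    X, Y = set(), set(ground)
    for e in ground:
        a = f(frozenset(X | {e})) - f(frozenset(X))  # gain of adding e to X
        b = f(frozenset(Y - {e})) - f(frozenset(Y))  # gain of dropping e from Y
        a, b = max(a, 0.0), max(b, 0.0)
        if a + b == 0 or rng.random() < a / (a + b):  # add with prob a/(a+b)
            X.add(e)
        else:
            Y.discard(e)
    return X  # X == Y after the last element is processed

# Graph cuts are a standard non-monotone submodular example.
edges = [(0, 1), (1, 2), (2, 3)]
def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))
```

On this 4-vertex path, `double_greedy(cut, [0, 1, 2, 3])` reaches a maximum cut of value 3: after the first (random) add-or-drop decision, every remaining step has $a = 0$ or $b = 0$ and is forced.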
Related papers
- Effective Policy Learning for Multi-Agent Online Coordination Beyond Submodular Objectives [64.16056378603875]
We present two policy learning algorithms for the multi-agent online coordination (MA-OC) problem. The first one, \texttt{MA-SPL}, can achieve the optimal $(\frac{c}{e})$-approximation for the MA-OC problem. The second online algorithm, named \texttt{MA-MPL}, can simultaneously maintain the same approximation ratio.
arXiv Detail & Related papers (2025-09-26T17:16:34Z)
- Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency [52.60557300927007]
We present an \textbf{MA-OSMA} algorithm to transfer the discrete submodular problem into a continuous optimization. We also introduce a projection-free \textbf{MA-OSEA} algorithm, which effectively utilizes the KL divergence by mixing a uniform distribution. Our algorithms significantly improve the $(\frac{1}{1+c})$-approximation provided by the state-of-the-art OSG algorithm.
arXiv Detail & Related papers (2025-02-07T15:57:56Z)
- Discretely Beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization [13.86054078646307]
For constrained maximization of not necessarily monotone submodular functions, all known approximation algorithms with ratio greater than $1/e$ require continuous ideas. For combinatorial algorithms, the best known approximation ratios for both size and matroid constraints are obtained by a simple randomized greedy algorithm.
arXiv Detail & Related papers (2024-05-08T16:39:59Z)
- Dynamic Non-monotone Submodular Maximization [11.354502646593607]
We show a reduction from maximizing a non-monotone submodular function under the cardinality constraint $k$ to maximizing a monotone submodular function under the same constraint.
Our algorithms maintain an $(\epsilon)$-approximate solution and use expected amortized $O(\epsilon^{-3}k^3\log^3(n)\log(k))$ queries per update.
arXiv Detail & Related papers (2023-11-07T03:20:02Z)
- Fast algorithms for k-submodular maximization subject to a matroid constraint [10.270420338235237]
We apply a Threshold-Decreasing Algorithm to maximize $k$-submodular functions under a matroid constraint.
We give a $(\frac{1}{2} - \epsilon)$-approximation algorithm for $k$-submodular functions.
arXiv Detail & Related papers (2023-07-26T07:08:03Z)
- Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods [57.050204432302195]
This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions.
Our algorithm achieves $O(\sigma / \sqrt{T})$ convergence when the oracle feedback is stochastic with variance $\sigma^2$, and improves its convergence to $O(1 / T^3)$ with deterministic oracles.
arXiv Detail & Related papers (2022-11-03T14:12:51Z)
- Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing smooth functions of the form $F(x) = G(x) + C(x)$ over a convex body.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z)
- The Power of Subsampling in Submodular Maximization [51.629656762796564]
We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods.
We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
arXiv Detail & Related papers (2021-04-06T20:25:57Z)
- Revisiting Modified Greedy Algorithm for Monotone Submodular Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z)
- Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular Maximization in Linear Time [17.19443570570189]
We study the non-monotone adaptive submodular maximization problem subject to a cardinality constraint.
We show that the adaptive random greedy algorithm achieves a $1/e$ approximation ratio under adaptive submodularity.
We propose a faster algorithm that achieves a $1-1/e-\epsilon$ approximation ratio in expectation with $O(n\epsilon^{-2}\log \epsilon^{-1})$ value oracle queries.
arXiv Detail & Related papers (2020-08-11T21:06:52Z)
- Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we consider the well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n\log\frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z)
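Several of the entries above measure their ratios against the classic greedy baseline of Nemhauser, Wolsey, and Fisher, which achieves $(1-1/e)$ for monotone submodular maximization under a cardinality constraint. A minimal illustrative sketch (the coverage function and element names are hypothetical, not from any of the listed papers):

```python
def greedy(f, ground, k):
    """Classic greedy: repeatedly add the element with the largest
    marginal gain until k elements are chosen. For monotone submodular f
    this gives a (1 - 1/e)-approximation under the constraint |S| <= k."""
    S = set()
    for _ in range(k):
        gains = {e: f(frozenset(S | {e})) - f(frozenset(S))
                 for e in ground if e not in S}
        if not gains:
            break
        S.add(max(gains, key=gains.get))  # element with largest marginal gain
    return S

# Coverage functions are monotone submodular: f(S) = size of the union.
cov = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}, "d": {5}}
def coverage(S):
    return len(set().union(*(cov[e] for e in S)))
```

With budget $k = 2$, `greedy(coverage, list(cov), 2)` picks `"a"` (gain 3) and then `"c"` (gain 2), covering all 5 items; this is also the optimum here.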
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.