Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint
- URL: http://arxiv.org/abs/2405.13994v1
- Date: Wed, 22 May 2024 20:56:57 GMT
- Title: Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint
- Authors: Murad Tukan, Loay Mualem, Moran Feldman
- Abstract summary: Non-monotone constrained submodular maximization plays a crucial role in various machine learning applications.
Existing algorithms often struggle with a trade-off between approximation guarantees and practical efficiency.
We present a novel algorithm that combines a guarantee of $0.385$-approximation with a low and practical query complexity of $O(n+k^2)$.
- Score: 18.290666675596984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-monotone constrained submodular maximization plays a crucial role in various machine learning applications. However, existing algorithms often struggle with a trade-off between approximation guarantees and practical efficiency. The current state-of-the-art is a recent $0.401$-approximation algorithm, but its computational complexity makes it highly impractical. The best practical algorithms for the problem only guarantee $1/e$-approximation. In this work, we present a novel algorithm for submodular maximization subject to a cardinality constraint that combines a guarantee of $0.385$-approximation with a low and practical query complexity of $O(n+k^2)$. Furthermore, we evaluate the empirical performance of our algorithm in experiments based on various machine learning applications, including Movie Recommendation, Image Summarization, and more. These experiments demonstrate the efficacy of our approach.
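To make the setting concrete: the goal is to select a set $S$ of at most $k$ elements maximizing a (possibly non-monotone) submodular function $f$ that is accessed only through a value oracle. The abstract does not describe the new $0.385$-approximation algorithm itself, so the sketch below instead illustrates the value-oracle model with the standard Random Greedy baseline, one of the practical $1/e$-approximation methods the abstract contrasts against; the oracle `f`, the `ground_set`, and the toy coverage objective at the bottom are illustrative placeholders, not part of the paper.

```python
import random

def random_greedy(f, ground_set, k, seed=None):
    """Random Greedy baseline for max f(S) s.t. |S| <= k, f submodular (possibly non-monotone).

    f          -- value oracle: takes a frozenset of elements, returns a float.
    ground_set -- iterable of hashable elements.
    k          -- cardinality budget.

    NOTE: this is the classic 1/e-approximation baseline (Buchbinder et al.)
    using O(nk) oracle queries, NOT the 0.385-approximation algorithm of the
    paper above, whose query complexity is O(n + k^2).
    """
    rng = random.Random(seed)
    V = list(ground_set)
    S = set()
    for _ in range(k):
        base = f(frozenset(S))
        # Marginal gain of every element not yet selected.
        gains = {e: f(frozenset(S) | {e}) - base for e in V if e not in S}
        if not gains:
            break
        # Candidate pool: the (at most k) unselected elements with the largest
        # positive marginal gains, padded with "dummy" slots (None) of zero gain
        # so the pool always has exactly k entries.
        best = sorted(gains, key=gains.get, reverse=True)[:k]
        pool = [e for e in best if gains[e] > 0]
        pool += [None] * (k - len(pool))
        # Add a uniformly random candidate; picking a dummy leaves S unchanged.
        pick = rng.choice(pool)
        if pick is not None:
            S.add(pick)
    return S

# Toy usage with a small coverage objective, just to exercise the oracle interface.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4}}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
print(random_greedy(f, coverage.keys(), k=2, seed=0))
```

Random Greedy spends $O(nk)$ oracle queries and guarantees only $1/e \approx 0.368$; the paper's contribution is closing this gap by achieving $0.385$ with $O(n+k^2)$ queries.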
Related papers
- Fairness in Monotone $k$-submodular Maximization: Algorithms and Applications [0.0]
We study the fair $k$-submodular maximization problem and develop a $\frac{1}{3}$-approximation with a running time of $\mathcal{O}(knB)$.
We also provide approximation guarantees when the $k$-submodular function is not accessible exactly but can only be accessed approximately.
arXiv Detail & Related papers (2024-11-08T04:20:12Z) - Practical Parallel Algorithms for Non-Monotone Submodular Maximization [20.13836086815112]
Submodular maximization has found extensive applications in various domains within the field of artificial intelligence.
One measure of the parallelizability of a submodular maximization algorithm is its adaptive complexity, which indicates the number of sequential rounds in which polynomially many queries to the objective function can be executed in parallel.
We propose the first algorithm with both a provable approximation ratio and sublinear adaptive complexity for non-monotone submodular maximization subject to a $k$-system constraint.
arXiv Detail & Related papers (2023-08-21T11:48:34Z) - Provably Efficient Reinforcement Learning via Surprise Bound [66.15308700413814]
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$-additive error for an input dataset $V$.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Improved Regret for Efficient Online Reinforcement Learning with Linear
Function Approximation [69.0695698566235]
We study reinforcement learning with linear function approximation and adversarially changing cost functions.
We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback.
arXiv Detail & Related papers (2023-01-30T17:26:39Z) - Resolving the Approximability of Offline and Online Non-monotone
DR-Submodular Maximization over General Convex Sets [13.23676270963484]
DR-submodular continuous functions have many real-world applications in the domains of machine learning, communication systems, operations research, and economics.
We present a polynomial-time online algorithm matching the $\tfrac{1}{4}(1 - m)$-approximation of the state-of-the-art offline algorithm.
We also show that our algorithm and the offline algorithm of Du (2022) are both optimal in a strong sense.
arXiv Detail & Related papers (2022-10-12T07:04:24Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles) and to find an appropriate stationary point within a polynomial number of gradient ascent iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Adaptive Sampling for Fast Constrained Maximization of Submodular
Function [8.619758302080891]
We develop an algorithm with poly-logarithmic adaptivity for non-monotone submodular maximization under general side constraints.
Our algorithm is suitable for maximizing a non-monotone submodular function under a $p$-system side constraint.
arXiv Detail & Related papers (2021-02-12T12:38:03Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
This bound can also be used to significantly improve the efficiency of algorithms such as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Reinforcement Learning with General Value Function Approximation:
Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$, where $d$ is a complexity measure of the function class, $H$ is the planning horizon, and $T$ is the number of episodes.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z)