Variational Optimization for the Submodular Maximum Coverage Problem
- URL: http://arxiv.org/abs/2006.05583v1
- Date: Wed, 10 Jun 2020 00:50:25 GMT
- Title: Variational Optimization for the Submodular Maximum Coverage Problem
- Authors: Jian Du, Zhigang Hua, Shuang Yang
- Abstract summary: We provide the first variational approximation for this problem based on the Nemhauser divergence.
We empirically evaluate it on a number of public data sets and several application tasks.
- Score: 11.734438054316147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We examine the \emph{submodular maximum coverage problem} (SMCP), which is
related to a wide range of applications. We provide the first variational
approximation for this problem based on the Nemhauser divergence, and show that
it can be solved efficiently using variational optimization. The algorithm
alternates between two steps: (1) an E step that estimates a variational
parameter to maximize a parameterized \emph{modular} lower bound; and (2) an M
step that updates the solution by solving the local approximate problem. We
provide theoretical analysis on the performance of the proposed approach and
its curvature-dependent approximate factor, and empirically evaluate it on a
number of public data sets and several application tasks.
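The E/M alternation described in the abstract can be sketched as a simple majorize-minimize loop. This is a minimal illustration on a toy weighted-coverage instance, using the classic permutation-chain modular lower bound for a submodular function as a stand-in for the paper's Nemhauser-divergence-based bound; the function names and the cardinality constraint are assumptions, not the paper's exact setup.

```python
import random

def coverage(sets, chosen):
    """Coverage objective: number of elements covered by the chosen sets."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def modular_lower_bound(sets, order):
    """Chain-rule modular bound: marginal gains along a permutation.
    For submodular f, summing these gains over any subset lower-bounds f,
    and the bound is tight on every prefix of the permutation."""
    gains, prefix, prev = {}, [], 0
    for i in order:
        prefix.append(i)
        cur = coverage(sets, prefix)
        gains[i] = cur - prev
        prev = cur
    return gains

def variational_max_coverage(sets, k, iters=10, seed=0):
    """Alternate an E step (fit a modular lower bound tight at the current
    solution) with an M step (maximize the bound, which for a modular
    function under |S| <= k is just a top-k selection)."""
    rng = random.Random(seed)
    n = len(sets)
    S = set(rng.sample(range(n), k))
    for _ in range(iters):
        # E step: place S first in the permutation so the bound is tight at S.
        order = sorted(range(n), key=lambda i: i not in S)
        gains = modular_lower_bound(sets, order)
        # M step: maximizing the modular bound under the cardinality
        # constraint reduces to picking the k largest gains.
        S_new = set(sorted(range(n), key=lambda i: -gains[i])[:k])
        if coverage(sets, S_new) <= coverage(sets, S):
            break  # monotone ascent has converged (possibly a local optimum)
        S = S_new
    return S
```

Because the bound is tight at the current solution and valid everywhere, each accepted M step can only increase the objective, mirroring the monotone-ascent structure of the paper's algorithm.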
Related papers
- Consistent Submodular Maximization [27.266085572522847]
Maximizing monotone submodular functions under cardinality constraints is a classic optimization task with several applications in data mining and machine learning.
In this paper we study this problem in a dynamic environment with consistency constraints: elements arrive in a streaming fashion, and the goal is to maintain a constant-factor approximation to the optimal solution while keeping the solution stable.
We provide algorithms in this setting with different trade-offs between consistency and approximation quality.
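For context, the static baseline behind this line of work is the classic greedy algorithm of Nemhauser, Wolsey, and Fisher for monotone submodular maximization under a cardinality constraint; a minimal sketch for set cover follows (this is the offline baseline, not the paper's consistent streaming algorithm).

```python
def greedy_max_cover(sets, k):
    """Classic greedy: repeatedly add the set with the largest marginal
    gain until k sets are chosen. For monotone submodular objectives this
    achieves a (1 - 1/e) approximation factor."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0
        for i, s in enumerate(sets):
            gain = len(s - covered)  # marginal coverage gain of set i
            if i not in chosen and gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break  # no remaining set improves coverage
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```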
arXiv Detail & Related papers (2024-05-30T11:59:58Z)
- Moreau Envelope ADMM for Decentralized Weakly Convex Optimization [55.2289666758254]
This paper proposes a proximal variant of the alternating direction method of multipliers (ADMM) for distributed optimization.
The results of our numerical experiments indicate that our method is faster and more robust than widely-used approaches.
arXiv Detail & Related papers (2023-08-31T14:16:30Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z)
- Sparse Approximate Solutions to Max-Plus Equations with Application to Multivariate Convex Regression [34.99564569478268]
We show how one can obtain such solutions efficiently and in minimum time for any $\ell_p$ approximation error.
We propose a novel method for piecewise fitting of convex functions, with optimality guarantees and approximately sparse affine regions.
arXiv Detail & Related papers (2020-11-06T15:17:00Z)
- Meta-learning based Alternating Minimization Algorithm for Non-convex Optimization [9.774392581946108]
We propose a novel solution for challenging non-convex problems of multiple variables.
Our proposed approach is able to achieve effective iterations in cases where other methods would typically fail.
arXiv Detail & Related papers (2020-09-09T10:45:00Z)
- On the implementation of a global optimization method for mixed-variable problems [0.30458514384586394]
The algorithm is based on the radial basis function method of Gutmann and the metric response surface method of Regis and Shoemaker.
We propose several modifications aimed at generalizing and improving these two algorithms.
arXiv Detail & Related papers (2020-09-04T13:36:56Z)
- Follow the bisector: a simple method for multi-objective optimization [65.83318707752385]
We consider optimization problems, where multiple differentiable losses have to be minimized.
The presented method computes descent direction in every iteration to guarantee equal relative decrease of objective functions.
arXiv Detail & Related papers (2020-07-14T09:50:33Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
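The sample-score-refit loop that GACEM modifies can be illustrated with the vanilla Cross-Entropy Method; here an independent Gaussian per coordinate stands in for the paper's masked auto-regressive model (which, unlike a single Gaussian, can represent multi-modal solution distributions). Function name and parameters are illustrative assumptions.

```python
import random
import statistics

def cross_entropy_method(score, dim, iters=30, pop=50, elite_frac=0.2, seed=0):
    """Vanilla CEM: sample candidates from the current distribution, keep
    the top-scoring elite fraction, and refit the distribution to the elite.
    A single Gaussian per coordinate is a stand-in for GACEM's masked
    auto-regressive network."""
    rng = random.Random(seed)
    mu = [0.0] * dim
    sigma = [2.0] * dim
    n_elite = max(2, int(pop * elite_frac))
    for _ in range(iters):
        samples = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)]
                   for _ in range(pop)]
        samples.sort(key=score, reverse=True)  # best first
        elite = samples[:n_elite]
        # Refit the sampling distribution to the elite set.
        for d in range(dim):
            col = [x[d] for x in elite]
            mu[d] = statistics.mean(col)
            sigma[d] = statistics.stdev(col) + 1e-6  # keep some exploration
    return mu
```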
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.