Decomposable Submodular Function Minimization via Maximum Flow
- URL: http://arxiv.org/abs/2103.03868v1
- Date: Fri, 5 Mar 2021 18:46:38 GMT
- Title: Decomposable Submodular Function Minimization via Maximum Flow
- Authors: Kyriakos Axiotis, Adam Karczmarz, Anish Mukherjee, Piotr Sankowski,
Adrian Vladu
- Abstract summary: This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization.
We provide improved running times for this problem by reducing it to a number of calls to a maximum flow oracle.
- Score: 6.993741009069205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper bridges discrete and continuous optimization approaches for
decomposable submodular function minimization, in both the standard and
parametric settings.
We provide improved running times for this problem by reducing it to a number
of calls to a maximum flow oracle. When each function in the decomposition acts
on $O(1)$ elements of the ground set $V$ and is polynomially bounded, our
running time is up to polylogarithmic factors equal to that of solving maximum
flow in a sparse graph with $O(\vert V \vert)$ vertices and polynomial integral
capacities.
We achieve this by providing a simple iterative method which can optimize to
high precision any convex function defined on the submodular base polytope,
provided we can efficiently minimize it on the base polytope corresponding to
the cut function of a certain graph that we construct. We solve this
minimization problem by lifting the solutions of a parametric cut problem,
which we obtain via a new efficient combinatorial reduction to maximum flow.
This reduction is of independent interest and implies some previously unknown
bounds for the parametric minimum $s,t$-cut problem in multiple settings.
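To make the reduction tangible, here is a minimal sketch (not the paper's algorithm) of the classical special case underlying it: when every summand of the decomposition is a unary or pairwise cut term touching at most two elements of $V$, minimizing the sum reduces to a single maximum flow / minimum $s,t$-cut computation. The function names and the use of networkx are illustrative assumptions.

    # Hedged sketch: minimize F(x) = sum_v (theta1[v]*x_v + theta0[v]*(1 - x_v))
    #                              + sum_{(u,v)} w[u,v] * [x_u != x_v]
    # over x in {0,1}^n via one s-t min cut; each summand acts on O(1) elements.
    import networkx as nx

    def minimize_by_min_cut(n, theta0, theta1, pair_w):
        """theta0[v], theta1[v] >= 0 are the costs of x_v = 0 / x_v = 1;
        pair_w maps (u, v) to the (nonnegative) cost paid when x_u != x_v."""
        G = nx.DiGraph()
        s, t = "s", "t"
        for v in range(n):
            G.add_edge(s, v, capacity=theta1[v])  # cut iff v ends on the sink side (x_v = 1)
            G.add_edge(v, t, capacity=theta0[v])  # cut iff v ends on the source side (x_v = 0)
        for (u, v), w in pair_w.items():
            G.add_edge(u, v, capacity=w)          # cut iff x_u = 0 and x_v = 1
            G.add_edge(v, u, capacity=w)          # cut iff x_u = 1 and x_v = 0
        cut_value, (source_side, sink_side) = nx.minimum_cut(G, s, t)
        x = [1 if v in sink_side else 0 for v in range(n)]
        return cut_value, x

    # Tiny usage example: two coupled elements with opposing label preferences.
    value, x = minimize_by_min_cut(2, theta0=[3.0, 0.0], theta1=[0.0, 2.0],
                                   pair_w={(0, 1): 1.0})
    print(value, x)  # -> 1.0 [1, 0]: both preferences met, only the coupling cost is paid

The paper's contribution handles the far more general setting in which the summands are arbitrary submodular functions on $O(1)$ elements, and the convex optimization over the base polytope is driven by parametric cuts rather than a single cut.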
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization [16.709026203727007]
We present a novel bilevel optimization method that locally approximates the solution set of the lower-level problem.
We measure the performance of our method in terms of suboptimality and infeasibility errors.
arXiv Detail & Related papers (2024-02-12T22:34:53Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on strong convexity.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization [7.197233473373693]
We consider the problem of minimizing the sum of two convex functions.
One has Lipschitz-continuous gradients and can be accessed via stochastic oracles, whereas the other is "simple".
We show that one can achieve an $\epsilon$ primal-dual gap (in expectation) in $\tilde{O}(1/\sqrt{\epsilon})$ iterations.
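For reference, the primal-dual gap quoted above is the standard Frank-Wolfe gap. Below is a minimal textbook Frank-Wolfe loop (not the paper's accelerated, parallel variant) on an assumed toy problem, where the gap $g_k = \langle \nabla f(x_k), x_k - s_k \rangle$ certifies suboptimality and serves as the stopping rule.

    # Hedged sketch: vanilla Frank-Wolfe for min over the simplex of 0.5*||x - b||^2,
    # illustrating the primal-dual (Frank-Wolfe) gap; the objective is an assumption.
    import numpy as np

    def frank_wolfe_simplex(b, eps=1e-6, max_iter=10_000):
        n = len(b)
        x = np.ones(n) / n                  # feasible start: simplex barycenter
        gap = np.inf
        for k in range(max_iter):
            grad = x - b                    # gradient of 0.5*||x - b||^2
            s = np.zeros(n)
            s[np.argmin(grad)] = 1.0        # linear minimization oracle on the simplex
            gap = grad @ (x - s)            # FW gap: f(x) - f_opt <= gap by convexity
            if gap <= eps:
                break
            x += 2.0 / (k + 2) * (s - x)    # classical step size 2/(k+2)
        return x, gap

    x, gap = frank_wolfe_simplex(np.array([0.5, 0.3, 0.2]))
    print(x, gap)  # b lies in the simplex, so x approaches b and the gap shrinks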
arXiv Detail & Related papers (2022-05-25T13:01:09Z) - Parallel Quasi-concave set optimization: A new frontier that scales
without needing submodularity [14.93584434176082]
A class of quasi-concave set functions is induced as a dual class to monotone linkage functions.
We show a potential for widespread applications via an example of diverse feature subset selection with exact global max-min guarantees.
arXiv Detail & Related papers (2021-08-19T15:50:41Z) - Submodular + Concave [53.208470310734825]
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a solvable convex body, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function.
This class of functions is a strict extension of both concave and continuous DR-submodular functions, for which no prior guarantee is known.
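For context, a standard definition implicit in the summary above (textbook material, not specific to this paper): a differentiable function $G$ is continuous DR-submodular when its gradient is antitone, i.e.

    $x \le y$ (coordinatewise)  $\Longrightarrow$  $\nabla G(x) \ge \nabla G(y)$ (coordinatewise),

so $G$ exhibits diminishing returns along every coordinate. Taking $G = 0$ recovers concave maximization and $C = 0$ recovers DR-submodular maximization, while a general $F = G + C$ is neither.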
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Sparse Approximate Solutions to Max-Plus Equations with Application to
Multivariate Convex Regression [34.99564569478268]
We show how one can obtain such solutions efficiently and in polynomial time for any $\ell_p$ approximation error.
We propose a novel method for piecewise fitting of convex functions, with optimality guarantees and approximately sparse affine regions.
arXiv Detail & Related papers (2020-11-06T15:17:00Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Submodular Maximization Through Barrier Functions [32.41824379833395]
We introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization.
We extensively evaluate our proposed algorithm over several real-world applications.
arXiv Detail & Related papers (2020-02-10T03:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.