Approximate Decomposable Submodular Function Minimization for
Cardinality-Based Components
- URL: http://arxiv.org/abs/2110.14859v1
- Date: Thu, 28 Oct 2021 02:36:55 GMT
- Title: Approximate Decomposable Submodular Function Minimization for
Cardinality-Based Components
- Authors: Nate Veldt, Austin R. Benson, Jon Kleinberg
- Abstract summary: Minimizing a sum of simple submodular functions of limited support has numerous applications in machine learning.
We develop fast techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set.
We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem.
- Score: 30.33731479053404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Minimizing a sum of simple submodular functions of limited support is a
special case of general submodular function minimization that has seen numerous
applications in machine learning. We develop fast techniques for instances
where components in the sum are cardinality-based, meaning they depend only on
the size of the input set. This variant is one of the most widely applied in
practice, encompassing, e.g., common energy functions arising in image
segmentation and recent generalized hypergraph cut functions. We develop the
first approximation algorithms for this problem, where the approximations can
be quickly computed via reduction to a sparse graph cut problem, with graph
sparsity controlled by the desired approximation factor. Our method relies on a
new connection between sparse graph reduction techniques and piecewise linear
approximations to concave functions. Our sparse reduction technique leads to
significant improvements in theoretical runtimes, as well as substantial
practical gains in problems ranging from benchmark image segmentation tasks to
hypergraph clustering problems.
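The core idea in the abstract can be sketched concretely: a cardinality-based component is a concave function of the set size, and covering that concave function with a small number of linear pieces controls how large the graph gadget for that component must be. Below is a minimal illustrative sketch (not the paper's implementation; the function names and the greedy piece-selection strategy are assumptions of this sketch) that counts how many linear pieces suffice for a (1 + eps) multiplicative approximation:

```python
def piecewise_linear_cover(g, n, eps):
    """Greedily cover a concave function g on {0, ..., n} with linear
    pieces so that each chord stays within a (1 + eps) multiplicative
    factor of g at every interior integer point.  The number of pieces
    is a proxy for the sparsity of the reduced graph: smaller eps means
    more pieces, i.e. a denser reduction."""
    breakpoints = [0]
    i = 0
    while i < n:
        j = n
        # Shrink j until the chord from (i, g(i)) to (j, g(j)) is a
        # good enough underestimate of g on the interior points.
        # (For concave g the chord always lies below g, so we only
        # need the one-sided check chord >= g(k) / (1 + eps).)
        while j > i + 1:
            ok = all(
                g(i) + (g(j) - g(i)) * (k - i) / (j - i) >= g(k) / (1 + eps)
                for k in range(i + 1, j)
            )
            if ok:
                break
            j -= 1
        breakpoints.append(j)
        i = j
    return breakpoints
```

For example, covering the concave function `g(k) = sqrt(k)` on `{0, ..., 100}` with `eps = 0.1` needs only a few dozen breakpoints rather than one per set size, which is the sense in which graph sparsity is "controlled by the desired approximation factor."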
Related papers
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Learning Sparse Graphs via Majorization-Minimization for Smooth Node Signals [8.140698535149042]
We propose an algorithm for learning a sparse weighted graph by estimating its adjacency matrix.
We show that the proposed algorithm converges faster, in terms of the average number of iterations, than several existing methods in the literature.
arXiv Detail & Related papers (2022-02-06T17:06:13Z)
- Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
arXiv Detail & Related papers (2021-11-01T21:06:35Z)
- Decomposable Submodular Function Minimization via Maximum Flow [6.993741009069205]
This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization.
We provide improved running times for this problem by reducing it to a number of calls to a maximum flow oracle.
arXiv Detail & Related papers (2021-03-05T18:46:38Z)
- Efficient Consensus Model based on Proximal Gradient Method applied to Convolutional Sparse Problems [2.335152769484957]
We derive and detail a theoretical analysis of an efficient consensus algorithm based on the proximal gradient (PG) approach.
The proposed algorithm is also applied to another particular convolutional problem for the anomaly detection task.
arXiv Detail & Related papers (2020-11-19T20:52:48Z)
- Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations [59.84561168501493]
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
arXiv Detail & Related papers (2020-08-12T08:52:50Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Fast Convex Relaxations using Graph Discretizations [13.977100716044102]
Matching and partitioning problems are fundamental to computer vision applications.
Applying convex relaxation techniques comes with a significant computational effort, reducing their feasibility in practical applications.
This setup allows us to faithfully work on problem graphs constructed by methods such as SLIC or Cut-Pursuit.
arXiv Detail & Related papers (2020-04-23T11:14:38Z)
- Robust Submodular Minimization with Applications to Cooperative Modeling [0.0]
This paper studies the problem of robust submodular minimization subject to constraints.
Constrained submodular minimization arises in several applications, such as cooperative cuts in image segmentation and cooperative matchings in image correspondence.
arXiv Detail & Related papers (2020-01-25T20:40:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.