Review of Serial and Parallel Min-Cut/Max-Flow Algorithms for Computer Vision
- URL: http://arxiv.org/abs/2202.00418v1
- Date: Tue, 1 Feb 2022 14:06:27 GMT
- Title: Review of Serial and Parallel Min-Cut/Max-Flow Algorithms for Computer Vision
- Authors: Patrick M. Jensen, Niels Jeppesen, Anders B. Dahl and Vedrana A. Dahl
- Abstract summary: The Hochbaum pseudoflow algorithm is the fastest serial algorithm, while the Boykov-Kolmogorov algorithm is the most memory efficient.
Existing parallel min-cut/max-flow algorithms can significantly outperform serial algorithms on large problems but suffer from added overhead on small to medium problems.
- Score: 6.574107319036238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Minimum cut / maximum flow (min-cut/max-flow) algorithms are used to solve a
variety of problems in computer vision and thus significant effort has been put
into developing fast min-cut/max-flow algorithms. This makes it difficult to
choose an optimal algorithm for a given problem - especially for parallel
algorithms, which have not been thoroughly compared. In this paper, we review
the state-of-the-art min-cut/max-flow algorithms for unstructured graphs in
computer vision. We evaluate run time performance and memory use of various
implementations of both serial and parallel algorithms on a set of graph cut
problems. Our results show that the Hochbaum pseudoflow algorithm is the
fastest serial algorithm, closely followed by the Excesses Incremental Breadth
First Search algorithm, while the Boykov-Kolmogorov algorithm is the most
memory efficient. The best parallel algorithm is the adaptive bottom-up merging
approach by Liu and Sun. Additionally, we show significant variations in
performance between different implementations of the same algorithms, highlighting
the importance of low-level implementation details. Finally, we note that
existing parallel min-cut/max-flow algorithms can significantly outperform
serial algorithms on large problems but suffer from added overhead on small to
medium problems. Implementations of all algorithms are available at
https://github.com/patmjen/maxflow_algorithms
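As a concrete illustration of the kind of problem benchmarked here (a generic sketch, not code from the linked repository), the PyMaxflow package, which wraps the Boykov-Kolmogorov implementation reviewed in the paper, can solve a tiny min-cut/max-flow instance:

```python
# Tiny min-cut/max-flow instance via PyMaxflow (Boykov-Kolmogorov backend).
import maxflow

g = maxflow.Graph[float]()
nodes = g.add_nodes(2)              # two non-terminal nodes

# n-link between the nodes (forward and reverse capacities)
g.add_edge(nodes[0], nodes[1], 1.0, 1.0)

# t-links: capacities from the source and to the sink
g.add_tedge(nodes[0], 5.0, 1.0)     # node 0 strongly tied to the source
g.add_tedge(nodes[1], 1.0, 5.0)     # node 1 strongly tied to the sink

flow = g.maxflow()                  # min-cut value == max-flow value
print("max flow:", flow)            # -> 3.0
print("labels:", [g.get_segment(n) for n in nodes])  # 0 = source side, 1 = sink side
```

In vision problems the non-terminal nodes are pixels or voxels, t-links encode unary (data) terms, and n-links encode pairwise regularization; the segment labels after `maxflow()` give the minimum cut.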
Related papers
- GreedyML: A Parallel Algorithm for Maximizing Submodular Functions [2.9998889086656586]
We describe a parallel approximation algorithm for maximizing monotone submodular functions on distributed memory multiprocessors.
Our work is motivated by the need to solve submodular optimization problems on massive data sets, for practical applications in areas such as data summarization, machine learning, and graph sparsification.
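For context, the serial baseline that GreedyML parallelizes is the classic greedy rule for monotone submodular maximization under a cardinality constraint. A minimal toy sketch (coverage objective, made-up data; not the paper's distributed algorithm):

```python
# Greedy maximization of a monotone submodular function (set coverage)
# under a cardinality constraint; gives a (1 - 1/e) approximation.

def coverage(selected, sets):
    """Submodular objective: number of elements covered by the chosen sets."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, k):
    """Pick k sets, each time adding the one with the largest marginal gain."""
    selected = []
    for _ in range(k):
        base = coverage(selected, sets)
        gains = {i: coverage(selected + [i], sets) - base
                 for i in range(len(sets)) if i not in selected}
        if not gains or max(gains.values()) == 0:
            break
        selected.append(max(gains, key=gains.get))
    return selected

toy_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
print(greedy_max(toy_sets, 2))  # -> [2, 0]
```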
arXiv Detail & Related papers (2024-03-15T14:19:09Z)
- Efficient and Accurate Optimal Transport with Mirror Descent and Conjugate Gradients [15.128885770407132]
We design a novel algorithm for optimal transport by drawing from the entropic optimal transport, mirror descent and conjugate gradients literatures.
Our scalable and GPU parallelizable algorithm is able to compute the Wasserstein distance with extreme precision, reaching relative error rates of $10^{-8}$ without numerical stability issues.
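For reference, the standard entropic-OT baseline this line of work builds on is the Sinkhorn iteration. A minimal NumPy sketch (generic textbook version, not the paper's mirror-descent/conjugate-gradient algorithm):

```python
# Classic Sinkhorn iteration for entropic optimal transport.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Approximate OT between histograms a, b with cost matrix C and
    entropic regularization eps. Returns the transport plan."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)             # alternating Bregman projections
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x, y = rng.random(5), rng.random(5)
C = (x[:, None] - y[None, :]) ** 2    # squared-distance cost
a = b = np.full(5, 1 / 5)             # uniform marginals
P = sinkhorn(a, b, C)
print("transport cost:", (P * C).sum())
```

Note that driving eps toward zero, as needed for very high precision, is exactly where plain Sinkhorn becomes numerically unstable, which motivates approaches like this paper's.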
arXiv Detail & Related papers (2023-07-17T14:09:43Z)
- A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse [121.54116938140754]
We propose a new Hessian-inverse-free Fully Single Loop Algorithm for bilevel optimization problems.
We show that our algorithm converges with the rate of $O(\epsilon^{-2})$.
arXiv Detail & Related papers (2021-12-09T02:27:52Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve the complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
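A minimal sketch of the problem setup, using plain gradient tracking on toy quadratics over a fixed 4-node ring (illustrative only; the paper's optimal algorithms are accelerated and handle time-varying networks):

```python
# Decentralized minimization of sum_i f_i(x) with gradient tracking.
import numpy as np

n, d = 4, 3
rng = np.random.default_rng(1)
targets = rng.random((n, d))           # node i holds f_i(x) = ||x - t_i||^2 / 2

# Symmetric doubly stochastic mixing matrix for a 4-node ring
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

def grad(x):                           # local gradients, one row per node
    return x - targets

x = np.zeros((n, d))
y = grad(x)                            # gradient-tracking variable
step = 0.05
for _ in range(500):
    x_new = W @ x - step * y           # mix with neighbors, step along tracker
    y = W @ y + grad(x_new) - grad(x)  # track the average gradient
    x = x_new

# every node should agree on the global minimizer, the mean of the targets
print(np.allclose(x, targets.mean(axis=0), atol=1e-3))
```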
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Fast Parallel Algorithms for Euclidean Minimum Spanning Tree and Hierarchical Spatial Clustering [6.4805900740861]
We introduce a new notion of well-separation to reduce the work and space of our algorithm for HDBSCAN$^*$.
We show that our algorithms are theoretically efficient: they have work (number of operations) matching their sequential counterparts, and polylogarithmic depth (parallel time).
Our experiments on large real-world and synthetic data sets using a 48-core machine show that our fastest algorithms outperform the best serial algorithms for the problems by 11.13--55.89x, and existing parallel algorithms by at least an order of magnitude.
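For contrast, the naive serial Euclidean MST baseline can be sketched with SciPy: build the full pairwise distance graph (quadratic work, so viable only for small inputs) and run a standard MST routine. Avoiding exactly this cost is the point of the paper's algorithms.

```python
# Naive serial Euclidean minimum spanning tree on the full distance graph.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
points = rng.random((100, 2))             # 100 random points in the plane

dists = squareform(pdist(points))         # dense n x n distance matrix
mst = minimum_spanning_tree(dists)        # sparse matrix of tree edges

print("edges:", mst.nnz)                  # n - 1 = 99
print("total length:", mst.sum())
```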
arXiv Detail & Related papers (2021-04-02T16:05:00Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
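The role of the learned component can be seen in a textbook A* sketch: HATS's DNN estimate would replace the hand-written `heuristic` below (a generic illustration, not the paper's memory-bounded variant):

```python
# Textbook A* with a pluggable heuristic.
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Returns the cost of a cheapest path from start to goal, or None."""
    frontier = [(heuristic(start), 0, start)]       # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                                # stale queue entry
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt))
    return None

# 10x10 grid with unit moves; Manhattan distance is admissible here
goal = (4, 5)
def grid_neighbors(p):
    return [((p[0] + dx, p[1] + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= p[0] + dx < 10 and 0 <= p[1] + dy < 10]
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(a_star((0, 0), goal, grid_neighbors, manhattan))  # -> 9
```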
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
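That gradient-expectation connection is concrete even in a toy case: by the Matrix-Tree theorem the partition function $Z$ of an edge-factored spanning-tree distribution is a determinant of a Laplacian minor, and the marginal probability of edge $e$ is $w_e \, \partial \log Z / \partial w_e$. A small NumPy check, with finite differences standing in for the paper's efficient gradient computations:

```python
# Edge marginals of a spanning-tree distribution via the Matrix-Tree theorem.
import numpy as np

edges = [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 1.0)]   # (u, v, weight)
n = 3

def laplacian_minor(edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L[1:, 1:]                  # delete row/col 0 (Matrix-Tree theorem)

Z = np.linalg.det(laplacian_minor(edges))
print("Z =", Z)                       # 2*3 + 2*1 + 3*1 = 11

# P(e in T) = w_e * d(log Z)/d(w_e), here via a finite difference
for i, (u, v, w) in enumerate(edges):
    bumped = [(a, b, c * (1 + 1e-6) if j == i else c)
              for j, (a, b, c) in enumerate(edges)]
    dlogZ = (np.log(np.linalg.det(laplacian_minor(bumped)))
             - np.log(Z)) / (w * 1e-6)
    print(f"P(edge {u}-{v} in tree) ~ {w * dlogZ:.4f}")   # 8/11, 9/11, 5/11
```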
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
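For background, the standard primitive in private optimization is a clipped, noise-perturbed gradient step (as in DP-SGD). A generic toy sketch, not the paper's two techniques, with arbitrary noise scale:

```python
# Clipped and noised SGD on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = X @ np.ones(5)                                 # toy linear data

def noisy_sgd(X, y, clip=1.0, sigma=0.5, steps=500, lr=0.05):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        i = rng.integers(len(X))                   # draw one example
        g = (X[i] @ w - y[i]) * X[i]               # per-example gradient
        g /= max(1.0, np.linalg.norm(g) / clip)    # clip: bounds sensitivity
        g += rng.normal(0.0, sigma * clip, size=g.shape)  # Gaussian noise
        w -= lr * g
    return w

w = noisy_sgd(X, y)
print("final mean squared error:", np.mean((X @ w - y) ** 2))
```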
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.