Extensions of Karger's Algorithm: Why They Fail in Theory and How They
Are Useful in Practice
- URL: http://arxiv.org/abs/2110.02750v1
- Date: Tue, 5 Oct 2021 07:46:28 GMT
- Title: Extensions of Karger's Algorithm: Why They Fail in Theory and How They
Are Useful in Practice
- Authors: Erik Jenner, Enrique Fita Sanmartín, Fred A. Hamprecht
- Abstract summary: We study whether Karger's algorithm can be successfully generalized to other cut problems.
We present a simple new algorithm for seeded segmentation / graph-based semi-supervised learning.
The new algorithm has linear runtime and yields a potential that can be interpreted as the posterior probability of a sample belonging to a given seed / class.
- Score: 17.801124377461953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The minimum graph cut and minimum $s$-$t$-cut problems are important
primitives in the modeling of combinatorial problems in computer science,
including in computer vision and machine learning. Some of the most efficient
algorithms for finding global minimum cuts are randomized algorithms based on
Karger's groundbreaking contraction algorithm. Here, we study whether Karger's
algorithm can be successfully generalized to other cut problems. We first prove
that a wide class of natural generalizations of Karger's algorithm cannot
efficiently solve the $s$-$t$-mincut or the normalized cut problem to
optimality. However, we then present a simple new algorithm for seeded
segmentation / graph-based semi-supervised learning that is closely based on
Karger's original algorithm, showing that for these problems, extensions of
Karger's algorithm can be useful. The new algorithm has linear asymptotic
runtime and yields a potential that can be interpreted as the posterior
probability of a sample belonging to a given seed / class. We clarify its
relation to the random walker algorithm / harmonic energy minimization in terms
of distributions over spanning forests. On classical problems from seeded image
segmentation and graph-based semi-supervised learning on image data, the method
performs at least as well as the random walker / harmonic energy minimization /
Gaussian processes.
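As background for the contraction-based approach the abstract refers to, here is a minimal Python sketch of Karger's original contraction algorithm, the primitive that the paper extends. The function names, the plain edge-list representation, and the uniform edge sampling (correct for unweighted multigraphs; weighted graphs would sample edges proportionally to their weights) are illustrative choices, not code from the paper.

```python
import random

def contract(edges, n_vertices, stop_at=2, rng=random):
    """One run of Karger's contraction on a connected multigraph.

    edges: list of (u, v) pairs over vertices 0..n_vertices-1.
    Repeatedly pick a uniformly random non-self-loop edge and merge its
    endpoints until only `stop_at` super-vertices remain.
    """
    parent = list(range(n_vertices))  # union-find over super-vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n_vertices
    while components > stop_at:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:          # skip edges that have become self-loops
            parent[ru] = rv   # contract the chosen edge
            components -= 1
    return [find(x) for x in range(n_vertices)]  # component label per vertex

def karger_mincut(edges, n_vertices, n_runs=200, seed=0):
    """Repeat the contraction and keep the smallest cut found.

    Each run returns the minimum cut with probability >= 2 / (n * (n - 1)),
    so O(n^2 log n) independent runs succeed with high probability.
    """
    rng = random.Random(seed)
    best_cut = None
    for _ in range(n_runs):
        labels = contract(edges, n_vertices, rng=rng)
        cut = [(u, v) for u, v in edges if labels[u] != labels[v]]
        if best_cut is None or len(cut) < len(best_cut):
            best_cut = cut
    return best_cut

# Example: two dense triangles joined by a single bridge edge.
if __name__ == "__main__":
    blob1 = [(0, 1), (1, 2), (0, 2)]
    blob2 = [(3, 4), (4, 5), (3, 5)]
    bridge = [(2, 3)]
    print(karger_mincut(blob1 + blob2 + bridge, 6))  # expect [(2, 3)]
```

A seeded variant in the spirit of the paper, though not necessarily its exact formulation, would additionally forbid merging super-vertices that contain different seeds and average over many runs to estimate, for each unlabeled vertex, the frequency with which it is absorbed into each seed's component; that per-seed frequency is the kind of posterior-like potential the abstract describes.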
Related papers
- More on greedy construction heuristics for the MAX-CUT problem [8.148355685823521]
We show that this picture helps to classify the main greedy heuristics for the maximum cut problem.
All versions of the Sahni-Gonzalez (SG) algorithms can be classified as belonging to the Prim class.
Various Edge-Contraction (EC) algorithms belong to the Kruskal class.
arXiv Detail & Related papers (2023-12-18T02:52:04Z)
- Learning Arithmetic Formulas in the Presence of Noise: A General Framework and Applications to Unsupervised Learning [4.10375234787249]
We present a framework for designing efficient algorithms for unsupervised learning problems.
Our framework is based on a meta algorithm that learns arithmetic circuits in the presence of noise.
arXiv Detail & Related papers (2023-11-13T12:26:25Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
arXiv Detail & Related papers (2023-06-11T07:28:35Z)
- Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
General tools for designing efficient private estimation algorithms.
First efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z)
- Efficient and Local Parallel Random Walks [21.29022677162416]
Random walks are a fundamental primitive used in many machine learning algorithms.
We present a new algorithm that overcomes this limitation by building random walks efficiently and locally.
We show that our technique is both memory and round efficient, and in particular yields an efficient parallel local clustering algorithm.
arXiv Detail & Related papers (2021-12-01T17:06:11Z)
- Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers [12.09273192079783]
We consider interactive learning in the realizable setting and develop a general framework to handle problems ranging from best arm identification to active classification.
We design novel computationally efficient algorithms for the realizable setting that match the minimax lower bound up to logarithmic factors.
arXiv Detail & Related papers (2021-11-09T02:33:36Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Community detection using fast low-cardinality semidefinite programming [94.4878715085334]
We propose a new low-cardinality algorithm that generalizes the local update to maximize a semidefinite relaxation derived from Leiden-k-cut.
The proposed algorithm is scalable and outperforms state-of-the-art algorithms in real-world running time, with little additional cost.
arXiv Detail & Related papers (2020-12-04T15:46:30Z)
- Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose an edge-time algorithm that obtains a nearly optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z)