Fast Algorithms for Directed Graph Partitioning Using Flows and
Reweighted Eigenvalues
- URL: http://arxiv.org/abs/2306.09128v1
- Date: Thu, 15 Jun 2023 13:41:17 GMT
- Title: Fast Algorithms for Directed Graph Partitioning Using Flows and
Reweighted Eigenvalues
- Authors: Lap Chi Lau, Kam Chuen Tung, Robert Wang
- Abstract summary: We derive almost linear-time algorithms to achieve $O(\sqrt{\log n})$-approximation and Cheeger-type guarantee for directed edge expansion.
This provides a primal-dual flow-based framework to obtain the best known algorithms for directed graph partitioning.
- Score: 6.094384342913063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a new semidefinite programming relaxation for directed edge
expansion, which is obtained by adding triangle inequalities to the reweighted
eigenvalue formulation. Applying the matrix multiplicative weight update method
to this relaxation, we derive almost linear-time algorithms to achieve
$O(\sqrt{\log{n}})$-approximation and Cheeger-type guarantee for directed edge
expansion, as well as an improved cut-matching game for directed graphs. This
provides a primal-dual flow-based framework to obtain the best known algorithms
for directed graph partitioning. The same approach also works for vertex
expansion and for hypergraphs, providing a simple and unified approach to
achieve the best known results for different expansion problems and different
algorithmic techniques.
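Directed edge expansion admits several closely related definitions. As a hypothetical illustration, the brute-force sketch below uses one common variant (the smaller of the edge weight leaving and entering a set, normalized by the smaller side); it is exponential-time, in contrast to the almost linear-time algorithms of the paper.

```python
from itertools import combinations

def directed_edge_expansion(n, edges):
    """Brute-force directed edge expansion of a small weighted digraph.

    Uses one common variant of the definition: over all nontrivial
    vertex sets S, minimize min(w(S -> S_bar), w(S_bar -> S)) divided
    by min(|S|, |S_bar|). Exponential in n; illustration only.
    """
    best = float("inf")
    for k in range(1, n):
        for subset in combinations(range(n), k):
            S = set(subset)
            out_w = sum(w for u, v, w in edges if u in S and v not in S)
            in_w = sum(w for u, v, w in edges if u not in S and v in S)
            best = min(best, min(out_w, in_w) / min(len(S), n - len(S)))
    return best

# Directed 4-cycle: the sparsest cut is any "half" of the cycle,
# with one edge leaving and one entering.
cycle = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(directed_edge_expansion(4, cycle))  # → 0.5
```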
Related papers
- Combinatorial Approximations for Cluster Deletion: Simpler, Faster, and Better [18.121514220195607]
Cluster deletion is an NP-hard graph clustering objective with applications in computational and social network analysis.
We first provide a tighter analysis of two previous approximation algorithms, improving their approximation guarantees from 4 to 3.
We show that both algorithms can be derandomized in a surprisingly simple way, by greedily taking a maximum-degree vertex in an auxiliary graph and forming a cluster around it.
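As a loose sketch of that greedy step, the following runs on the input graph itself rather than the auxiliary graph the paper constructs, and does not enforce that clusters are cliques as the cluster deletion objective requires; it only illustrates the maximum-degree greedy idea.

```python
def greedy_max_degree_clusters(n, edges):
    """Loose sketch: repeatedly pick a maximum-degree remaining vertex
    and form a cluster from it and its remaining neighbors.
    Simplified relative to the paper's algorithm (no auxiliary graph,
    clusters not required to be cliques); illustration only.
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    clusters = []
    while remaining:
        # Vertex of maximum degree within the remaining graph.
        v = max(remaining, key=lambda x: len(adj[x] & remaining))
        cluster = {v} | (adj[v] & remaining)
        clusters.append(cluster)
        remaining -= cluster
    return clusters

# Two triangles joined by a bridge edge (2, 3): the greedy step
# partitions the six vertices into two clusters.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
clusters = greedy_max_degree_clusters(6, edges)
print(len(clusters))  # → 2
```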
arXiv Detail & Related papers (2024-04-24T18:39:18Z)
- An Alternative Graphical Lasso Algorithm for Precision Matrices [0.0]
We present a new/improved (dual-primal) DP-GLasso algorithm for estimating sparse precision matrices.
We show that the regularized normal log-likelihood naturally decouples into a sum of two easy-to-minimize convex functions, one of which is a Lasso regression problem.
Our algorithm has the precision matrix as its optimization target right at the outset, and retains all the favorable properties of the DP-GLasso algorithm.
arXiv Detail & Related papers (2024-03-19T02:01:01Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit a connection between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Robust empirical risk minimization via Newton's method [9.797319790710711]
A new variant of Newton's method for empirical risk minimization is studied.
The gradient and Hessian of the objective function are replaced by robust estimators.
An algorithm for obtaining robust Newton directions based on the conjugate gradient method is also proposed.
arXiv Detail & Related papers (2023-01-30T18:54:54Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
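Entropy regularization is what makes such fast solvers possible. As a point of reference, the classical Sinkhorn scaling method (not the extragradient scheme of this paper) computes the regularized transport plan by alternately matching the two marginals:

```python
import numpy as np

def sinkhorn(C, r, c, reg=0.1, iters=500):
    """Entropy-regularized optimal transport via classical Sinkhorn
    scaling. C: cost matrix; r, c: source/target marginals
    (probability vectors). Returns the regularized transport plan.
    """
    K = np.exp(-C / reg)      # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(iters):
        v = c / (K.T @ u)     # rescale to match column marginals
        u = r / (K @ v)       # rescale to match row marginals
    return u[:, None] * K * v[None, :]

C = np.array([[0.0, 1.0], [1.0, 0.0]])
r = np.array([0.5, 0.5])
c = np.array([0.5, 0.5])
P = sinkhorn(C, r, c)
print(P.sum(axis=1))  # row marginals ≈ [0.5, 0.5]
```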
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- New Riemannian preconditioned algorithms for tensor completion via polyadic decomposition [10.620193291237262]
These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form.
We prove that the proposed gradient descent algorithm globally converges to a stationary point of the tensor completion problem.
Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2021-01-26T22:11:06Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Alternating minimization algorithms for graph regularized tensor completion [8.26185178671935]
We consider a Canonical Polyadic (CP) decomposition approach to low-rank tensor completion (LRTC).
The usage of graph regularization entails benefits in the learning accuracy of LRTC, but at the same time, induces coupling graph Laplacian terms.
We propose efficient alternating minimization algorithms by leveraging the block structure of the underlying CP decomposition-based model.
arXiv Detail & Related papers (2020-08-28T23:20:49Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose an edge-time algorithm that obtains a nearly-optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.