Graph Matching via convex relaxation to the simplex
- URL: http://arxiv.org/abs/2310.20609v1
- Date: Tue, 31 Oct 2023 16:44:26 GMT
- Title: Graph Matching via convex relaxation to the simplex
- Authors: Ernesto Araya Valdivia and Hemant Tyagi
- Abstract summary: This paper addresses the Graph Matching problem, which consists of finding the best alignment between two input graphs.
A common approach to tackle this problem is through convex relaxations of the NP-hard Quadratic Assignment Problem (QAP).
We introduce a new convex relaxation onto the unit simplex and develop an efficient mirror descent scheme with closed-form iterations for solving this problem.
- Score: 6.327393762036104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the Graph Matching problem, which consists of finding
the best possible alignment between two input graphs, and has many applications
in computer vision, network deanonymization and protein alignment. A common
approach to tackle this problem is through convex relaxations of the NP-hard
\emph{Quadratic Assignment Problem} (QAP).
Here, we introduce a new convex relaxation onto the unit simplex and develop
an efficient mirror descent scheme with closed-form iterations for solving this
problem. Under the correlated Gaussian Wigner model, we show that the simplex
relaxation admits a unique solution with high probability. In the noiseless
case, this is shown to imply exact recovery of the ground truth permutation.
Additionally, we establish a novel sufficiency condition for the input matrix
in standard greedy rounding methods, which is less restrictive than the
commonly used 'diagonal dominance' condition. We use this condition to show
exact one-step recovery of the ground truth (holding almost surely) via the
mirror descent scheme, in the noiseless setting. We also use this condition to
obtain significantly improved conditions for the GRAMPA algorithm [Fan et al.
2019] in the noiseless setting.
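The abstract's core ingredients (a relaxation of the QAP onto the unit simplex, a mirror descent scheme whose iterations are closed-form, and a greedy rounding step) can be sketched as follows. This is an illustrative assumption rather than the paper's exact formulation: it uses the quadratic residual ||AX - XB||_F^2 as a stand-in objective, an entropic (KL) mirror map so that each iteration is a multiplicative update plus one normalization, and a simple largest-entry greedy rounding; the step size and iteration count are arbitrary choices.

```python
import numpy as np

def qap_objective(X, A, B):
    """Quadratic residual ||AX - XB||_F^2 (assumed stand-in for the relaxed QAP)."""
    R = A @ X - X @ B
    return float(np.sum(R * R))

def mirror_descent_simplex(A, B, eta=0.02, iters=5000):
    """Entropic mirror descent over the unit simplex (all entries of X sum to 1).

    With the KL mirror map each iteration is closed-form: an exponentiated-
    gradient (multiplicative) update followed by a single normalization.
    """
    n = A.shape[0]
    X = np.full((n, n), 1.0 / n**2)        # uniform starting point on the simplex
    for _ in range(iters):
        R = A @ X - X @ B
        G = 2.0 * (A.T @ R - R @ B.T)      # gradient of ||AX - XB||_F^2
        X = X * np.exp(-eta * G)           # closed-form KL mirror step
        X /= X.sum()                       # renormalize back onto the simplex
    return X

def greedy_round(X):
    """One-step greedy rounding: repeatedly pick the largest remaining entry."""
    n = X.shape[0]
    perm = np.empty(n, dtype=int)
    M = X.copy()
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(M), M.shape)
        perm[i] = j
        M[i, :] = -np.inf
        M[:, j] = -np.inf
    return perm

# Noiseless demo: B is a relabeling of A, so the planted permutation attains
# objective value zero and should be recoverable, per the noiseless setting above.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                          # symmetric, Wigner-like input
truth = rng.permutation(n)
P = np.eye(n)[truth]                       # permutation matrix with P[i, truth[i]] = 1
B = P.T @ A @ P                            # relabeled copy of A (so AP = PB)
X = mirror_descent_simplex(A, B)
perm = greedy_round(X)
```

Since the objective is convex and, in the noiseless regime, the planted permutation (scaled onto the simplex as P/n) is a global minimizer, the rounded permutation typically coincides with the ground truth; the uniqueness and exact one-step recovery guarantees are what the paper establishes rigorously.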
Related papers
- Distributed Extra-gradient with Optimal Complexity and Communication Guarantees [60.571030754252824]
We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, which is a de facto algorithm for monotone VI problems, has not been designed to be communication-efficient.
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
arXiv Detail & Related papers (2023-08-17T21:15:04Z)
- Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
arXiv Detail & Related papers (2023-02-13T15:13:22Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC which considers continuous random Gaussian noise in the graph Fourier domain.
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- WARPd: A linearly convergent first-order method for inverse problems with approximate sharpness conditions [0.0]
Sharpness conditions directly control the recovery performance of restart schemes for first-order methods.
We provide a first-order method: Weighted, Accelerated and Restarted Primal-dual (WARPd)
Under a generic approximate sharpness condition, WARPd achieves stable linear convergence to the desired vector.
We show how WARPd compares favorably with specialized state-of-the-art methods and is ideally suited for solving large-scale problems.
arXiv Detail & Related papers (2021-10-24T13:19:41Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework for learning sparse graphs from data.
We show that a triangular convexity decomposition is guaranteed by a transform of the matrix corresponding to its upper triangular part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable Approach for Continuous Markov Random Fields [53.31927549039624]
We show that a piecewise discretization preserves contrast better than existing discretization schemes.
We apply this theory to the problem of matching two images.
arXiv Detail & Related papers (2021-07-13T12:31:06Z)
- Bayesian Inference of Random Dot Product Graphs via Conic Programming [1.7188280334580195]
We present a convex cone program to infer the latent probability matrix of a random dot product graph (RDPG)
We also demonstrate that the method recovers latent structure in the Karate Club Graph and synthetic U.S. Senate vote graphs.
arXiv Detail & Related papers (2021-01-06T18:29:37Z)
- Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks [86.88061841975482]
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle.
We use this setting to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM)
We show that the method uses fewer queries and achieves higher attack success rates than the current state of the art.
arXiv Detail & Related papers (2020-10-08T18:36:51Z)
- Online and Distribution-Free Robustness: Regression and Contextual Bandits with Huber Contamination [29.85468294601847]
We revisit two classic high-dimensional online learning problems, namely linear regression and contextual bandits.
We show that our algorithms succeed where conventional methods fail.
arXiv Detail & Related papers (2020-10-08T17:59:05Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best fitting ReLU with respect to square loss, given access to draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.