Bayesian Inference of Random Dot Product Graphs via Conic Programming
- URL: http://arxiv.org/abs/2101.02180v1
- Date: Wed, 6 Jan 2021 18:29:37 GMT
- Title: Bayesian Inference of Random Dot Product Graphs via Conic Programming
- Authors: David Wu, David R. Palmer, Daryl R. DeFord
- Abstract summary: We present a convex cone program to infer the latent probability matrix of a random dot product graph (RDPG).
We also demonstrate that the method recovers latent structure in the Karate Club Graph and synthetic U.S. Senate vote graphs.
- Score: 1.7188280334580195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a convex cone program to infer the latent probability matrix of a
random dot product graph (RDPG). The optimization problem maximizes the
Bernoulli maximum likelihood function with an added nuclear norm regularization
term. The dual problem has a particularly nice form, related to the well-known
semidefinite program relaxation of the MaxCut problem. Using the primal-dual
optimality conditions, we bound the entries and rank of the primal and dual
solutions. Furthermore, we bound the optimal objective value and prove
asymptotic consistency of the probability estimates of a slightly modified
model under mild technical assumptions. Our experiments on synthetic RDPGs not
only recover natural clusters, but also reveal the underlying low-dimensional
geometry of the original data. We also demonstrate that the method recovers
latent structure in the Karate Club Graph and synthetic U.S. Senate vote graphs
and is scalable to graphs with up to a few hundred nodes.
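To make the optimization concrete, here is a minimal cvxpy sketch of a nuclear-norm-regularized Bernoulli maximum-likelihood program in the spirit of the abstract. The synthetic data generator, the regularization weight `lam`, the log guard `eps`, and the simple box constraints are all illustrative assumptions; the paper's exact cone program and constraint set are not reproduced here.

```python
import cvxpy as cp
import numpy as np

# --- synthetic RDPG: latent positions X give P = X X^T, then A_ij ~ Bernoulli(P_ij) ---
rng = np.random.default_rng(0)
n, d = 30, 2
X = rng.uniform(0.2, 0.7, size=(n, d))
P_true = np.clip(X @ X.T, 0.0, 1.0)
A = np.triu((rng.uniform(size=(n, n)) < P_true).astype(float), 1)
A = A + A.T                      # symmetric 0/1 adjacency, no self-loops

# --- Bernoulli log-likelihood with nuclear norm regularization (illustrative weights) ---
lam, eps = 1.0, 1e-6             # assumed regularization weight and log guard
mask = 1.0 - np.eye(n)           # score off-diagonal entries only
P = cp.Variable((n, n), symmetric=True)
loglik = cp.sum(cp.multiply(mask * A, cp.log(P + eps))
                + cp.multiply(mask * (1.0 - A), cp.log(1.0 - P + eps)))
problem = cp.Problem(cp.Maximize(loglik - lam * cp.normNuc(P)),
                     [P >= 0, P <= 1])
problem.solve(solver=cp.SCS)

print("objective:", problem.value)
print("numerical rank of estimate:", np.linalg.matrix_rank(P.value, tol=1e-2))
```

Note that on the positive semidefinite cone the nuclear norm coincides with the trace, which is one way to see how the dual can land near MaxCut-style semidefinite programs; whether to additionally impose a PSD constraint on P is left open in this sketch.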
Related papers
- Maximum a Posteriori Inference for Factor Graphs via Benders' Decomposition [0.38233569758620056]
We present a method for maximum a posteriori (MAP) inference in general Bayesian factor models.
We derive MAP estimation algorithms for the Bayesian Gaussian mixture model and latent Dirichlet allocation.
arXiv Detail & Related papers (2024-10-24T19:57:56Z)
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
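The two step types can be sketched in a few lines of numpy. This is a hedged illustration of the generic trust-region pattern the summary describes (a Cauchy-style gradient step on a quadratic model, plus an eigen step along negative curvature), not the paper's actual algorithm; the names `model_step` and `radius` are invented here.

```python
import numpy as np

def model_step(g, H, radius):
    """One illustrative trust-region iteration (assumes g != 0): a Cauchy-style
    gradient step on the quadratic model m(p) = g.p + 0.5 p.H.p, plus an eigen
    step along the most negative curvature direction when H is indefinite."""
    # Gradient step: minimize the model along -g inside the trust region.
    gHg = g @ H @ g
    if gHg > 0:
        t = min(g @ g / gHg, radius / np.linalg.norm(g))
    else:
        t = radius / np.linalg.norm(g)   # model unbounded along -g: go to the boundary
    p_grad = -t * g

    # Eigen step: follow the most negative eigenvector of H to the boundary.
    w, V = np.linalg.eigh(H)
    if w[0] < 0:
        v = V[:, 0]
        v = -v if g @ v > 0 else v       # pick the descending sign
        p_eig = radius * v
        if g @ p_eig + 0.5 * p_eig @ H @ p_eig < g @ p_grad + 0.5 * p_grad @ H @ p_grad:
            return p_eig                 # negative-curvature step wins
    return p_grad

g = np.array([1.0, 0.0])
H = np.array([[1.0, 0.0], [0.0, -2.0]])
print(model_step(g, H, radius=1.0))      # the eigen step is taken here
```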
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- Graph Matching via convex relaxation to the simplex [5.355990925686151]
This paper addresses the Graph Matching problem, which consists of finding the best alignment between two input graphs.
A common approach to tackle this problem is through convex relaxations of the NP-hard Quadratic Assignment Problem (QAP).
We introduce a new convex relaxation onto the unit simplex and develop an efficient mirror descent scheme with closed-form iterations for solving this problem.
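For intuition, mirror descent on the probability simplex with the entropy mirror map has exactly this kind of closed-form multiplicative update. The sketch below shows the generic primitive on a toy quadratic; the objective, step size `eta`, and iteration count are placeholders, not the paper's graph-matching formulation.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, eta=0.1, iters=200):
    """Generic mirror descent on the probability simplex with the entropy
    mirror map: x <- x * exp(-eta * grad(x)), renormalized. Each iteration
    is closed form, which keeps simplex-constrained schemes cheap."""
    x = x0 / x0.sum()
    for _ in range(iters):
        g = grad(x)
        x = x * np.exp(-eta * (g - g.max()))  # shift by max for numerical stability
        x /= x.sum()
    return x

# toy usage: minimize <x, Qx> over the simplex (optimum at x = (0.25, 0.75))
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
print(entropic_mirror_descent(lambda x: 2 * Q @ x, np.ones(2)))
```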
arXiv Detail & Related papers (2023-10-31T16:44:26Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC, which considers continuous random Gaussian noise in the graph Fourier domain.
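The underlying primitive, the graph Fourier transform, is the eigenbasis of a graph Laplacian. Below is a minimal sketch of moving a vertex signal into and out of that domain; the toy item-item graph `W`, the example ratings, and the low-pass cutoff are invented for illustration and do not reproduce BGS-IMC.

```python
import numpy as np

# toy item-item similarity graph (symmetric, nonnegative weights; invented)
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W           # combinatorial graph Laplacian

# graph Fourier basis: Laplacian eigenvectors; eigenvalues act as frequencies
freqs, U = np.linalg.eigh(L)

signal = np.array([5.0, 4.0, 4.5, 1.0])  # e.g., one user's ratings on the items
coeffs = U.T @ signal                    # graph Fourier transform of the signal
smoothed = U @ (coeffs * (freqs < 2.0))  # crude low-pass: keep low-frequency modes
print(np.round(smoothed, 2))
```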
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
The graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper triangular part, and overall convexity of the cost is guaranteed.
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- ROOT-SGD: Sharp Nonasymptotics and Near-Optimal Asymptotics in a Single Algorithm [71.13558000599839]
We study the problem of solving strongly convex and smooth unconstrained optimization problems using first-order algorithms.
We devise a novel algorithm, referred to as Recursive One-Over-T SGD (ROOT-SGD), based on an easily implementable, recursive averaging of past stochastic gradients.
We prove that it simultaneously achieves state-of-the-art performance in both a finite-sample, nonasymptotic sense and an asymptotic sense.
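As a rough illustration of the recursive one-over-t averaging idea, here is a sketch on a toy strongly convex problem. The exact weighting and gradient indexing of ROOT-SGD may differ from this; the step size, the noise model, and the helper name `root_sgd_sketch` are assumptions.

```python
import numpy as np

def root_sgd_sketch(grad, x0, eta=0.5, iters=500, seed=0):
    """Sketch of recursive one-over-t averaging of stochastic gradients:
    v_t = (1/t) g_t(x_t) + (1 - 1/t) * (v_{t-1} + g_t(x_t) - g_t(x_{t-1})),
    where both step-t gradients are evaluated on the same random sample."""
    rng = np.random.default_rng(seed)
    x_prev = x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for t in range(1, iters + 1):
        z = rng.standard_normal(x.shape)          # shared sample for both gradients
        g_new, g_old = grad(x, z), grad(x_prev, z)
        v = g_new + (1.0 - 1.0 / t) * (v - g_old)
        x_prev, x = x, x - eta * v
    return x

# toy strongly convex objective: f(x) = 0.5 ||x - 1||^2 with noisy gradients
grad = lambda x, z: (x - 1.0) + 0.1 * z
print(root_sgd_sketch(grad, np.zeros(3)))         # approaches the optimum at 1
```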
arXiv Detail & Related papers (2020-08-28T14:46:56Z)
- Does the $\ell_1$-norm Learn a Sparse Graph under Laplacian Constrained Graphical Models? [13.572602792770288]
We consider the problem of learning a graph under Laplacian-constrained Gaussian graphical models.
We show that a large regularization parameter will surprisingly lead to a complete graph, i.e., every pair of nodes is connected by an edge.
arXiv Detail & Related papers (2020-06-26T12:06:10Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.