Learning Sparse Graph Laplacian with K Eigenvector Prior via Iterative GLASSO and Projection
- URL: http://arxiv.org/abs/2010.13179v2
- Date: Thu, 18 Feb 2021 15:53:54 GMT
- Title: Learning Sparse Graph Laplacian with K Eigenvector Prior via Iterative GLASSO and Projection
- Authors: Saghar Bagheri, Gene Cheung, Antonio Ortega, Fen Wang
- Abstract summary: We consider a structural assumption on the graph Laplacian matrix $L$.
The first $K$ eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria.
We design an efficient hybrid graphical lasso/projection algorithm to compute the most suitable graph Laplacian matrix $L^* \in H_{u}^+$ given $\bar{C}$.
- Score: 58.5350491065936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning a suitable graph is an important precursor to many graph signal
processing (GSP) pipelines, such as graph spectral signal compression and
denoising. Previous graph learning algorithms either i) make some assumptions
on connectivity (e.g., graph sparsity), or ii) make simple assumptions on
graph edges, such as positive edge weights only. In this paper, given an empirical
covariance matrix $\bar{C}$ computed from data as input, we consider a
structural assumption on the graph Laplacian matrix $L$: the first $K$
eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria,
such as computational requirements, and the remaining eigenvectors are then
learned from data. One example use case is image coding, where the first
eigenvector is pre-chosen to be constant, regardless of available observed
data. We first prove that the subspace of symmetric positive semi-definite
(PSD) matrices $H_{u}^+$ with the first $K$ eigenvectors being $\{u_k\}$ in a
defined Hilbert space is a convex cone. We then construct an operator to
project a given positive definite (PD) matrix $L$ to $H_{u}^+$, inspired by the
Gram-Schmidt procedure. Finally, we design an efficient hybrid graphical
lasso/projection algorithm to compute the most suitable graph Laplacian matrix
$L^* \in H_{u}^+$ given $\bar{C}$. Experimental results show that given the
first $K$ eigenvectors as a prior, our algorithm outperforms competing graph
learning schemes using a variety of graph comparison metrics.
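To make the pipeline concrete, below is a minimal Python sketch of the two building blocks the abstract describes: a graphical-lasso step that fits a sparse precision matrix to $\bar{C}$, followed by a Gram-Schmidt-style re-basing that forces the first $K$ eigenvectors to be the prescribed $\{u_k\}$. This is only an illustrative reading of the abstract: scikit-learn's generic `graphical_lasso` stands in for the paper's Laplacian-constrained GLASSO variant, and `project_to_prior` is a simplified stand-in for the paper's projection operator onto $H_{u}^+$.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def project_to_prior(L, U_k):
    """Simplified stand-in for the paper's Gram-Schmidt-inspired operator:
    rebuild L so that its first K eigenvectors are the prescribed columns
    of U_k, while keeping its eigenvalues. (The paper defines a proper
    projection onto the convex cone H_u^+; this is only illustrative.)"""
    lam, V = np.linalg.eigh(L)               # eigenvalues in ascending order
    K = U_k.shape[1]
    basis = U_k.copy()
    for j in range(K, L.shape[0]):           # Gram-Schmidt the remaining
        w = V[:, j] - basis @ (basis.T @ V[:, j])  # eigenvectors against U_k
        basis = np.column_stack([basis, w / np.linalg.norm(w)])
    return basis @ np.diag(lam) @ basis.T

# Toy demo: the image-coding prior from the abstract, where the first
# eigenvector is pre-chosen to be constant, u_1 = (1/sqrt(d)) * ones.
rng = np.random.default_rng(0)
d = 6
X = rng.standard_normal((500, d))
C_bar = np.cov(X, rowvar=False)              # empirical covariance
U_k = np.ones((d, 1)) / np.sqrt(d)

cov_est, L_hat = graphical_lasso(C_bar, alpha=0.1)  # GLASSO step
L_star = project_to_prior(L_hat, U_k)               # projection step
# The paper alternates these two steps until convergence; this sketch
# performs a single pass of each.
```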
Related papers
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
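As background intuition for estimating a norm from a linear sketch, here is a minimal NumPy example using a generic Gaussian map; the paper's constructions and guarantees are considerably more refined, so treat this only as the basic idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 200          # ambient dimension, sketch size (m << n)

# A generic Gaussian linear sketch: ||S @ x|| approximates ||x|| with high
# probability (Johnson-Lindenstrauss). This is NOT the paper's sketch,
# just the standard notion of norm estimation from a small linear sketch.
S = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)
est = np.linalg.norm(S @ x)          # norm estimate from the sketch
print(f"true ||x|| = {np.linalg.norm(x):.2f}, sketched estimate = {est:.2f}")
```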
arXiv Detail & Related papers (2024-08-16T02:33:07Z)
- Graph Unfolding and Sampling for Transitory Video Summarization via Gershgorin Disc Alignment [48.137527345353625]
User-generated videos (UGVs) uploaded from mobile phones to social media sites like YouTube and TikTok are short and non-repetitive.
We summarize a transitory UGV into several keyframes in linear time via fast graph sampling based on Gershgorin disc alignment (GDA).
We show that our algorithm achieves comparable or better video summarization performance compared to state-of-the-art methods, at a substantially reduced complexity.
arXiv Detail & Related papers (2024-08-03T20:08:02Z)
- Compressive Recovery of Sparse Precision Matrices [5.557600489035657]
We consider the problem of learning a graph modeling the statistical relations of the $d$ variables from a dataset with $n$ samples $X \in \mathbb{R}^{n \times d}$.
We show that it is possible to estimate it from a sketch of size $m = \Omega\left((d+2k)\log(d)\right)$, where $k$ is the maximal number of edges of the underlying graph.
We investigate the possibility of achieving practical recovery with an iterative algorithm based on the graphical lasso, viewed as a specific denoiser.
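The inner step of such an iterative scheme can be pictured with scikit-learn's generic graphical lasso acting as a denoiser that maps a noisy covariance to a sparse precision matrix. The loop around it, operating on compressive sketches rather than on a full covariance, is the paper's contribution and is not reproduced here; the dimensions and regularization weight below are arbitrary.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)

# Toy data: d variables with a sparse (tridiagonal) true precision matrix.
d, n = 5, 2000
P = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(P), size=n)
C = np.cov(X, rowvar=False)   # stand-in for a covariance decoded from a sketch

# One graphical-lasso "denoising" step: a sparse precision estimate from a
# noisy covariance. The paper plugs a step like this into an iterative
# recovery loop over compressive measurements.
cov_est, prec_est = graphical_lasso(C, alpha=0.05)
print(np.round(prec_est, 2))
```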
arXiv Detail & Related papers (2023-11-08T13:29:08Z)
- Efficient Signed Graph Sampling via Balancing & Gershgorin Disc Perfect Alignment [51.74913666829224]
We show that for datasets with strong inherent anti-correlations, a suitable graph contains both positive and negative edge weights.
We propose a linear-time signed graph sampling method centered on the concept of balanced signed graphs.
Experimental results show that our signed graph sampling method outperformed existing fast sampling schemes noticeably on various datasets.
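For context, "balanced" here is the classical Cartwright-Harary notion: a signed graph is balanced iff its nodes can be split into two camps with positive edges inside camps and negative edges across them (equivalently, every cycle has an even number of negative edges). A small BFS check, written as a hypothetical helper, makes the definition concrete:

```python
from collections import deque

def is_balanced(num_nodes, signed_edges):
    """Check signed-graph balance: assign each node a polarity s in {+1,-1}
    so that every edge (i, j, w) satisfies s[i] * s[j] * sign(w) > 0.
    By the Cartwright-Harary theorem this succeeds iff every cycle has an
    even number of negative edges."""
    adj = [[] for _ in range(num_nodes)]
    for i, j, w in signed_edges:
        sgn = 1 if w > 0 else -1
        adj[i].append((j, sgn))
        adj[j].append((i, sgn))
    polarity = [0] * num_nodes
    for start in range(num_nodes):
        if polarity[start]:
            continue
        polarity[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sgn in adj[u]:
                want = polarity[u] * sgn       # polarity v must take
                if polarity[v] == 0:
                    polarity[v] = want
                    queue.append(v)
                elif polarity[v] != want:
                    return False, None         # odd cycle of negative edges
    return True, polarity

# Balanced: the two negative edges separate camp {0, 1} from camp {2}.
print(is_balanced(3, [(0, 1, 1.0), (1, 2, -0.5), (0, 2, -0.8)]))
```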
arXiv Detail & Related papers (2022-08-18T09:19:01Z)
- Sign and Basis Invariant Networks for Spectral Graph Representation Learning [75.18802152811539]
We introduce SignNet and BasisNet -- new neural architectures that are invariant to all requisite symmetries and hence process collections of eigenspaces in a principled manner.
Our networks are theoretically strong for graph representation learning -- they can approximate any spectral graph convolution.
Experiments show the strength of our networks for learning spectral graph filters and learning graph positional encodings.
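The sign invariance SignNet targets can be illustrated in a few lines: eigenvectors are only defined up to sign, and symmetrizing any map phi as phi(v) + phi(-v) removes that ambiguity. Below, the learned networks are replaced with fixed random maps purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 4)), rng.standard_normal((4, 16))

def phi(v):
    # Any fixed nonlinear map (here a tiny random two-layer net standing
    # in for a learned network).
    return W2 @ np.tanh(W1 @ v)

def sign_invariant(v):
    # SignNet-style symmetrization: phi(v) + phi(-v) returns the same
    # output for v and -v, removing eigenvector sign ambiguity.
    return phi(v) + phi(-v)

v = rng.standard_normal(4)
assert np.allclose(sign_invariant(v), sign_invariant(-v))
```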
arXiv Detail & Related papers (2022-02-25T23:11:59Z)
- Fast Computation of Generalized Eigenvectors for Manifold Graph Embedding [38.902986549367434]
We leverage existing fast extreme eigenvector computation algorithms for speedy execution.
Our embedding is among the fastest in the literature, while producing the best clustering performance for manifold graphs.
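A standard way to get a few extreme eigenvectors of a sparse graph matrix quickly is a Lanczos-type solver such as SciPy's `eigsh`; the snippet below uses it as a generic stand-in for the fast extreme-eigenvector routines the paper leverages (it is not the authors' algorithm):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse combinatorial Laplacian of a path graph on n nodes.
n = 200
i = np.arange(n - 1)
A = sp.csr_matrix((np.ones(2 * (n - 1)), (np.r_[i, i + 1], np.r_[i + 1, i])),
                  shape=(n, n))
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

# Lanczos-type iteration for a few extreme eigenpairs only, instead of a
# full O(n^3) eigendecomposition.
vals, vecs = eigsh(L, k=3, which="SA")
print(np.round(vals, 4))   # smallest eigenvalues; the first is ~0
```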
arXiv Detail & Related papers (2021-12-15T03:45:39Z)
- Fast Graph Sampling for Short Video Summarization using Gershgorin Disc Alignment [52.577757919003844]
We study the problem of efficiently summarizing a short video into several keyframes, leveraging recent progress in fast graph sampling.
Experimental results show that our algorithm achieves comparable video summarization as state-of-the-art methods, at a substantially reduced complexity.
arXiv Detail & Related papers (2021-10-21T18:43:00Z)
- Signed Graph Metric Learning via Gershgorin Disc Perfect Alignment [46.145969174332485]
We propose a fast general metric learning framework that is entirely projection-free.
We replace the PD cone constraint in the metric learning problem with per-distance linear constraints.
Experiments show that our graph metric optimization is significantly faster than cone-projection schemes.
arXiv Detail & Related papers (2020-06-15T23:15:12Z)
- Graph Metric Learning via Gershgorin Disc Alignment [46.145969174332485]
We propose a fast general projection-free metric learning framework, where the objective, minimized over metric matrices $\mathbf{M} \in \mathcal{S}$, is a convex differentiable function of $\mathbf{M}$.
We prove that the Gershgorin discs can be aligned perfectly using the first eigenvector $\mathbf{v}$ of $\mathbf{M}$.
Experiments show that our efficiently computed graph metric matrices outperform metrics learned using competing methods in terms of classification tasks.
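The "perfect alignment" claim has a compact numerical illustration: similarity-transforming $\mathbf{M}$ by the diagonal of its first eigenvector makes every Gershgorin disc left-end land exactly on $\lambda_{\min}(\mathbf{M})$. The matrix below is a generalized graph Laplacian with positive self-loops, an assumption made here so that the first eigenvector is strictly positive; it demonstrates the underlying linear algebra, not the paper's full method.

```python
import numpy as np

rng = np.random.default_rng(1)

# A generalized graph Laplacian: positive edge weights plus positive
# self-loops (our assumption for this demo; it guarantees the first
# eigenvector of M is strictly positive by Perron-Frobenius).
n = 6
W = np.triu(rng.uniform(0.5, 1.5, (n, n)), k=1)
W = W + W.T
M = np.diag(W.sum(axis=1) + rng.uniform(0.1, 1.0, n)) - W

lam, U = np.linalg.eigh(M)
v = np.abs(U[:, 0])                  # first (positive) eigenvector

# Similarity transform B = diag(v)^{-1} M diag(v): same eigenvalues, and
# every Gershgorin disc left-end b_ii - sum_{j != i} |b_ij| lands exactly
# on lambda_min(M) -- the "perfect alignment" used by GDA.
B = np.diag(1.0 / v) @ M @ np.diag(v)
left_ends = np.diag(B) - (np.abs(B).sum(axis=1) - np.abs(np.diag(B)))
print(np.round(left_ends, 6), "vs lambda_min =", round(lam[0], 6))
```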
arXiv Detail & Related papers (2020-01-28T17:44:01Z)