FREDE: Linear-Space Anytime Graph Embeddings
- URL: http://arxiv.org/abs/2006.04746v1
- Date: Mon, 8 Jun 2020 16:51:24 GMT
- Title: FREDE: Linear-Space Anytime Graph Embeddings
- Authors: Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras,
Ivan Oseledets, Emmanuel Müller
- Abstract summary: Low-dimensional representations, or embeddings, of a graph's nodes facilitate data mining tasks.
FREDE (FREquent Directions Embedding) is a sketching-based method that iteratively improves on quality while processing rows of the similarity matrix individually.
Our evaluation on variably sized networks shows that FREDE performs as well as SVD and competitively against current state-of-the-art methods in diverse data mining tasks.
- Score: 12.53022591889574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-dimensional representations, or embeddings, of a graph's nodes facilitate
data mining tasks. Known embedding methods explicitly or implicitly rely on a
similarity measure among nodes. As the similarity matrix is quadratic, a
tradeoff between space complexity and embedding quality arises; past research
initially opted for heuristics and linear-transform factorizations, which allow
for linear space but compromise on quality; recent research has proposed a
quadratic-space solution as a viable option too.
In this paper we observe that embedding methods effectively aim to preserve
the covariance among the rows of a similarity matrix, and raise the question:
is there a method that combines (i) linear space complexity, (ii) a nonlinear
transform as its basis, and (iii) nontrivial quality guarantees? We answer this
question in the affirmative, with FREDE (FREquent Directions Embedding), a
sketching-based method that iteratively improves on quality while processing
rows of the similarity matrix individually; thereby, it provides, at any
iteration, column-covariance approximation guarantees that are, in due course,
almost indistinguishable from those of the optimal row-covariance approximation
by SVD. Our experimental evaluation on variably sized networks shows that FREDE
performs as well as SVD and competitively against current state-of-the-art
methods in diverse data mining tasks, even when it derives an embedding based
on only 10% of node similarities.
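To make the sketching primitive concrete, below is a minimal, per-row-shrink variant of Frequent Directions, the streaming algorithm FREDE builds on, applied to rows of a stand-in similarity matrix. This is an illustrative sketch, not the authors' implementation; the matrix `S` and the sketch size are hypothetical choices.

```python
import numpy as np

def frequent_directions(rows, ell):
    """Minimal Frequent Directions sketch: stream the rows of a matrix A
    and maintain an ell x d sketch B with the guarantee
    ||A^T A - B^T B||_2 <= ||A||_F^2 / ell.
    Per-row shrinking is the simplest (slowest) variant; assumes d >= ell."""
    d = len(rows[0])
    B = np.zeros((ell, d))
    for a in rows:
        B[-1] = a                               # last row is always zero before insertion
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        delta = s[-1] ** 2                      # smallest squared singular value
        s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
        B = s[:, None] * Vt                     # shrink; zeroes out the last row again
    return B

# toy usage on a hypothetical dense similarity matrix, one row (node) at a time
rng = np.random.default_rng(0)
S = rng.random((200, 64))                       # stand-in for a node-similarity matrix
B = frequent_directions(S, ell=16)
err = np.linalg.norm(S.T @ S - B.T @ B, 2)
assert err <= np.linalg.norm(S, "fro") ** 2 / 16  # FD covariance guarantee
```

Because the sketch is valid after every row, the embedding can be queried at any iteration, which is the anytime behaviour the abstract describes.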
Related papers
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with that of a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - Diff-PCR: Diffusion-Based Correspondence Searching in Doubly Stochastic
Matrix Space for Point Cloud Registration [35.82753072083472]
State-of-the-art methods have employed RAFT-like iterative updates to refine the solution.
We propose a novel approach that exploits the Denoising Diffusion Model to predict a searching gradient for the optimal matching matrix.
Our method offers flexibility by allowing the search to start from any initial matching matrix provided by the online backbone or white noise.
arXiv Detail & Related papers (2023-12-31T09:24:28Z) - Recovering Simultaneously Structured Data via Non-Convex Iteratively
Reweighted Least Squares [0.8702432681310401]
We propose a new algorithm for recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations.
We show that the IRLS method performs favorably in recovering such simultaneously structured data from few linear measurements.
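As a flavor of the IRLS idea, here is a minimal sketch of its simplest special case, sparse recovery from linear observations; the paper's non-convex variant for simultaneously structured data is more involved, and this toy problem is an assumption for illustration.

```python
import numpy as np

def irls_sparse(A, y, iters=100, eps=1e-8):
    """Classic IRLS sketch for (approximately) min ||x||_1 s.t. Ax = y:
    each iteration solves a weighted least-squares problem whose weights
    w_i = 1 / (|x_i| + eps) come from the previous iterate."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # unweighted starting point
    for _ in range(iters):
        W_inv = np.diag(np.abs(x) + eps)          # W^{-1}, inverse weights
        # weighted minimum-norm solution: x = W^{-1} A^T (A W^{-1} A^T)^{-1} y
        x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, y)
    return x

# toy usage: recover a 3-sparse vector from 30 linear observations in R^80
rng = np.random.default_rng(1)
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
A = rng.standard_normal((30, 80))
x_hat = irls_sparse(A, A @ x_true)
```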
arXiv Detail & Related papers (2023-06-08T06:35:47Z) - Inverse Kernel Decomposition [3.066967635405937]
We propose a novel nonlinear dimensionality reduction method -- Inverse Kernel Decomposition (IKD).
IKD is based on an eigen-decomposition of the sample covariance matrix of data.
We use synthetic datasets and four real-world datasets to show that IKD is a better dimensionality reduction method than other eigen-decomposition-based methods.
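For reference, the covariance eigen-decomposition IKD starts from is the same primitive as classical PCA; a minimal sketch of that primitive only (the kernel-inversion step that defines IKD is omitted):

```python
import numpy as np

def covariance_eig_reduce(X, k):
    """Dimensionality reduction via eigen-decomposition of the sample
    covariance (the classical PCA form of the primitive IKD builds on)."""
    Xc = X - X.mean(axis=0)                   # center the samples
    C = Xc.T @ Xc / (len(Xc) - 1)             # d x d sample covariance
    evals, evecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    top = evecs[:, ::-1][:, :k]               # k leading eigenvectors
    return Xc @ top                           # n x k embedding
```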
arXiv Detail & Related papers (2022-11-11T02:14:29Z) - MARS via LASSO [1.5199437137239338]
We propose and study a natural variant of the MARS method.
Our method is based on least squares estimation over a convex class of functions.
Our estimator can be computed via finite-dimensional convex optimization.
arXiv Detail & Related papers (2021-11-23T07:30:33Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Free Energy Node Embedding via Generalized Skip-gram with Negative
Sampling [34.12821919995092]
We propose improvements in two steps of the unsupervised node embedding framework.
On the one hand, we propose to encode node similarities based on the free energy distance, which interpolates between the shortest path and the commute time distances.
On the other hand, we propose a matrix factorization method based on a loss function that generalizes to arbitrary similarity matrices.
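The baseline factorization step such methods share can be sketched as a truncated SVD of the similarity matrix; the paper's generalized loss is not reproduced here, and splitting the spectrum evenly between the factors is one common convention, not necessarily the authors' choice:

```python
import numpy as np

def embed_similarity(S, dim):
    """Baseline matrix-factorization embedding: truncated SVD of a node
    similarity matrix S, with singular values split between the factors."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return U[:, :dim] * np.sqrt(s[:dim])      # n x dim node embeddings
```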
arXiv Detail & Related papers (2021-05-19T14:58:13Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomial transformations to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - Random extrapolation for primal-dual coordinate descent [61.55967255151027]
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to sparsity of the data matrix and the favorable structures of the objective function.
We show almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values, in the general convex-concave case.
arXiv Detail & Related papers (2020-07-13T17:39:35Z) - Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation in terms of square-root, low-rank, and sparsity.
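For context, the matrix square-root normalization that MOMN balances against low-rank and sparsity objectives can be sketched via eigen-decomposition of the bilinear feature; this is the plain operation, not MOMN's multi-objective solver:

```python
import numpy as np

def matrix_sqrt_normalize(B, eps=1e-12):
    """Square-root normalization of a symmetric PSD bilinear feature
    B = X^T X: returns B^{1/2} = V diag(sqrt(lambda)) V^T."""
    evals, evecs = np.linalg.eigh(B)
    evals = np.clip(evals, eps, None)         # guard tiny negative round-off
    return (evecs * np.sqrt(evals)) @ evecs.T
```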
arXiv Detail & Related papers (2020-03-30T08:40:35Z)