Learning Based Proximity Matrix Factorization for Node Embedding
- URL: http://arxiv.org/abs/2106.05476v1
- Date: Thu, 10 Jun 2021 03:29:15 GMT
- Title: Learning Based Proximity Matrix Factorization for Node Embedding
- Authors: Xingyi Zhang, Kun Xie, Sibo Wang, Zengfeng Huang
- Abstract summary: Recent progress on node embedding shows that proximity matrix factorization methods gain superb performance and scale to large graphs with millions of nodes.
We propose Lemane, a framework with trainable proximity measures, which can be learned to best suit the datasets and tasks at hand automatically.
Our proposed solution outperforms existing solutions on both link prediction and node classification tasks on almost all datasets.
- Score: 33.18041180501025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node embedding learns a low-dimensional representation for each node in the
graph. Recent progress on node embedding shows that proximity matrix
factorization methods gain superb performance and scale to large graphs with
millions of nodes. Existing approaches first define a proximity matrix and then
learn the embeddings that fit the proximity by matrix factorization. Most
existing matrix factorization methods adopt the same proximity for different
tasks, while it is observed that different tasks and datasets may require
different proximity, limiting their representation power.
Motivated by this, we propose Lemane, a framework with trainable
proximity measures, which can be learned to best suit the datasets and tasks at
hand automatically. Our method is end-to-end, which incorporates differentiable
SVD in the pipeline so that the parameters can be trained via backpropagation.
However, this learning process is still expensive on large graphs. To improve
the scalability, we train proximity measures only on carefully subsampled
graphs, and then apply standard proximity matrix factorization on the original
graph using the learned proximity. Note that, computing the learned proximities
for each pair is still expensive for large graphs, and existing techniques for
computing proximities are not applicable to the learned proximities. Thus, we
present generalized push techniques to make our solution scalable to large
graphs with millions of nodes. Extensive experiments show that our proposed
solution outperforms existing solutions on both link prediction and node
classification tasks on almost all datasets.
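As a rough illustration of the pipeline described above, the sketch below (PyTorch, not the authors' implementation) parameterizes the proximity as trainable weights over random-walk powers, factorizes it with a differentiable SVD, and backpropagates a link-prediction loss into the proximity parameters. The hop-weight parameterization, the BCE loss, and all hyperparameters are assumptions made for illustration; the subsampling step and the generalized push computation needed for million-node graphs are omitted.

```python
# Minimal illustrative sketch of a Lemane-style pipeline (assumptions noted below).
import torch
import torch.nn.functional as F

def row_normalize(adj: torch.Tensor) -> torch.Tensor:
    """Random-walk transition matrix P = D^{-1} A."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj / deg

class TrainableProximity(torch.nn.Module):
    """Proximity S = sum_k alpha_k * P^k with trainable hop weights alpha (an assumed parameterization)."""
    def __init__(self, num_hops: int = 5):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_hops))

    def forward(self, P: torch.Tensor) -> torch.Tensor:
        alpha = torch.softmax(self.logits, dim=0)      # hop weights sum to 1
        S, Pk = torch.zeros_like(P), torch.eye(P.shape[0])
        for k in range(alpha.shape[0]):
            Pk = Pk @ P                                 # P^{k+1}
            S = S + alpha[k] * Pk
        return S

def embed(S: torch.Tensor, dim: int):
    """Differentiable truncated SVD: X = U_d sqrt(Sigma_d), Y = V_d sqrt(Sigma_d).
    Gradients of SVD can be unstable near repeated singular values; fine for a sketch."""
    U, sigma, Vh = torch.linalg.svd(S)                  # autograd-enabled SVD
    root = sigma[:dim].clamp(min=1e-9).sqrt()
    return U[:, :dim] * root, Vh[:dim, :].T * root

def link_prediction_loss(X, Y, pos_edges, neg_edges):
    """BCE on observed edges vs. sampled non-edges (the loss choice is an assumption)."""
    pos = (X[pos_edges[0]] * Y[pos_edges[1]]).sum(-1)
    neg = (X[neg_edges[0]] * Y[neg_edges[1]]).sum(-1)
    return F.binary_cross_entropy_with_logits(
        torch.cat([pos, neg]),
        torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))

# Train the proximity parameters on a small (subsampled) graph; the learned
# weights would then be reused with a scalable factorization on the full graph.
adj = (torch.rand(100, 100) < 0.05).float()
adj = ((adj + adj.T) > 0).float()
P = row_normalize(adj)
prox = TrainableProximity(num_hops=5)
opt = torch.optim.Adam(prox.parameters(), lr=0.01)
pos = adj.nonzero().T
neg = torch.randint(0, 100, pos.shape)
for _ in range(50):
    X, Y = embed(prox(P), dim=16)
    loss = link_prediction_loss(X, Y, pos, neg)
    opt.zero_grad(); loss.backward(); opt.step()
```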
Related papers
- Scalable Deep Metric Learning on Attributed Graphs [10.092560681589578]
We develop a graph embedding method, which is based on extending deep metric and unbiased contrastive learning techniques.
Based on a multi-class loss function, we present two algorithms -- DMT for semi-supervised learning and DMAT-i for the unsupervised case.
arXiv Detail & Related papers (2024-11-20T03:34:31Z) - Learning to Approximate Adaptive Kernel Convolution on Graphs [4.434835769977399]
We propose a diffusion learning framework, where the range of feature aggregation is controlled by the scale of a diffusion kernel.
Our model is tested on various standard datasets for node-wise classification, achieving state-of-the-art performance.
It is also validated on real-world brain network data for graph classification to demonstrate its practicality for Alzheimer's disease classification.
arXiv Detail & Related papers (2024-01-22T10:57:11Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Efficiently Learning the Graph for Semi-supervised Learning [4.518012967046983]
We show how to learn the best graphs from the sparse families efficiently using the conjugate gradient method.
Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions.
We implement our approach and demonstrate significant (~10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
arXiv Detail & Related papers (2023-06-12T13:22:06Z) - Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z) - Learning Heuristics for the Maximum Clique Enumeration Problem Using Low
Dimensional Representations [0.0]
We use a learning framework for a pruning process that reduces the size of the input graph for the maximum clique enumeration problem.
We study the effect of using different vertex representations on the runtime performance of this method.
We observe that using local graph features in the classification process produces more accurate results when combined with a feature elimination process.
arXiv Detail & Related papers (2022-10-30T22:04:32Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep
Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a significant improvement w.r.t. state-of-the-art multilayer graph clustering algorithms for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Differentiable Hierarchical Graph Grouping for Multi-Person Pose
Estimation [95.72606536493548]
Multi-person pose estimation is challenging because it localizes body keypoints for multiple persons simultaneously.
We propose a novel differentiable Hierarchical Graph Grouping (HGG) method to learn the graph grouping in bottom-up multi-person pose estimation task.
arXiv Detail & Related papers (2020-07-23T08:46:22Z) - Factorized Graph Representations for Semi-Supervised Learning from
Sparse Data [8.875598257768846]
We show that a method called distant compatibility estimation works even on extremely sparsely labeled graphs.
Our estimator is orders of magnitude faster than an alternative approach, and the end-to-end classification accuracy is comparable to using gold standard compatibilities.
arXiv Detail & Related papers (2020-03-05T18:57:45Z)