StarGraph: A Coarse-to-Fine Representation Method for Large-Scale
Knowledge Graph
- URL: http://arxiv.org/abs/2205.14209v1
- Date: Fri, 27 May 2022 19:32:45 GMT
- Title: StarGraph: A Coarse-to-Fine Representation Method for Large-Scale
Knowledge Graph
- Authors: Hongzhu Li, Xiangrui Gao, Yafeng Deng
- Abstract summary: We propose StarGraph, a method that offers a novel way to utilize neighborhood information in large-scale knowledge graphs.
The proposed method achieves the best results on the ogbl-wikikg2 dataset, which validates its effectiveness.
- Score: 0.6445605125467573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional representation learning algorithms for knowledge graphs (KG) map
each entity to a unique embedding vector, ignoring the rich information
contained in neighbor entities. We propose StarGraph, a method that offers a
novel way to utilize neighborhood information in large-scale knowledge graphs
to obtain better entity representations. The core idea is to
divide the neighborhood information into different levels for sampling and
processing, where the generalized coarse-grained information and unique
fine-grained information are combined to generate an efficient subgraph for
each node. In addition, a self-attention network is proposed to process the
subgraphs and get the entity representations, which are used to replace the
entity embeddings in conventional methods. The proposed method achieves the
best results on the ogbl-wikikg2 dataset, which validates its effectiveness.
The code is available at https://github.com/hzli-ucas/StarGraph
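Below is a minimal, hypothetical PyTorch sketch of the coarse-to-fine idea described in the abstract: each entity is represented by a small subgraph made of shared coarse-grained anchor tokens and node-specific fine-grained neighbor tokens, which a self-attention layer encodes into a single vector that replaces the conventional per-entity embedding. The class and parameter names, sampling sizes, single-layer encoder, and mean pooling are illustrative assumptions, not the authors' implementation; see the repository linked above for the real code.

```python
import torch
import torch.nn as nn


class SubgraphEntityEncoder(nn.Module):
    """Encodes an entity from a sampled coarse+fine subgraph instead of a lookup embedding."""

    def __init__(self, num_nodes, dim=128, num_heads=4):
        super().__init__()
        # One shared embedding table serves both anchor and neighbor tokens.
        self.node_emb = nn.Embedding(num_nodes, dim)
        # Learnable token marking the center entity in the subgraph sequence.
        self.center_token = nn.Parameter(torch.zeros(1, 1, dim))
        # A single self-attention layer over the subgraph tokens
        # (depth/width here are assumptions, not the paper's configuration).
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=2 * dim, batch_first=True
        )
        self.dim = dim

    def forward(self, anchor_ids, neighbor_ids):
        # anchor_ids:   (batch, num_anchors)   coarse-grained anchors shared across the graph
        # neighbor_ids: (batch, num_neighbors) fine-grained, node-specific neighbors
        batch = anchor_ids.size(0)
        tokens = torch.cat(
            [
                self.center_token.expand(batch, 1, self.dim),
                self.node_emb(anchor_ids),
                self.node_emb(neighbor_ids),
            ],
            dim=1,
        )
        encoded = self.encoder(tokens)
        # Pool the encoded subgraph into one vector per entity
        # (mean pooling is an assumption; any token pooling would work here).
        return encoded.mean(dim=1)


if __name__ == "__main__":
    enc = SubgraphEntityEncoder(num_nodes=1000)
    anchors = torch.randint(0, 1000, (8, 4))    # 4 sampled coarse anchors per entity
    neighbors = torch.randint(0, 1000, (8, 4))  # 4 sampled fine-grained neighbors per entity
    reps = enc(anchors, neighbors)
    print(reps.shape)  # torch.Size([8, 128])
```

In the setting the abstract describes, these subgraph-derived representations would stand in for looked-up entity embeddings inside a conventional link prediction scoring function.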
Related papers
- Differential Encoding for Improved Representation Learning over Graphs [15.791455338513815]
Node embeddings are fundamentally generated by a message-passing paradigm or a global attention mechanism.
It is unknown if the dominant information is from a node itself or from the node's neighbours.
We present a differential encoding method to address the issue of information loss.
arXiv Detail & Related papers (2024-07-03T02:23:33Z) - Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Correspondence learning aims to identify correct matches within an initial correspondence set that has an uneven distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z) - Local Structure-aware Graph Contrastive Representation Learning [12.554113138406688]
We propose a Local Structure-aware Graph Contrastive representation Learning method (LS-GCL) to model the structural information of nodes from multiple views.
For the local view, the semantic subgraph of each target node is fed into a shared GNN encoder to obtain the target node embeddings at the subgraph level.
For the global view, since the original graph preserves indispensable semantic information of nodes, we leverage the shared GNN encoder to learn the target node embeddings at the global graph level.
arXiv Detail & Related papers (2023-08-07T03:23:46Z) - SHGNN: Structure-Aware Heterogeneous Graph Neural Network [77.78459918119536]
This paper proposes a novel Structure-Aware Heterogeneous Graph Neural Network (SHGNN) to address the above limitations.
We first utilize a feature propagation module to capture the local structure information of intermediate nodes in the meta-path.
Next, we use a tree-attention aggregator to incorporate the graph structure information into the aggregation module on the meta-path.
Finally, we leverage a meta-path aggregator to fuse the information aggregated from different meta-paths.
arXiv Detail & Related papers (2021-12-12T14:18:18Z) - Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure
Preservation [27.215800308343322]
We present Graph Transplant, the first Mixup-like graph augmentation method at the graph level.
Our method identifies the sub-structure as a mix unit that can preserve the local information.
We extensively validate our method with diverse GNN architectures on multiple graph classification benchmark datasets.
arXiv Detail & Related papers (2021-11-10T11:10:13Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Sub-graph Contrast for Scalable Self-Supervised Graph Representation
Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph are not scalable due to limited computation and memory resources.
Subg-Con is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by
Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state of the art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.