STaR: Knowledge Graph Embedding by Scaling, Translation and Rotation
- URL: http://arxiv.org/abs/2202.07130v1
- Date: Tue, 15 Feb 2022 02:06:22 GMT
- Title: STaR: Knowledge Graph Embedding by Scaling, Translation and Rotation
- Authors: Jiayi Li, Yujiu Yang
- Abstract summary: The bilinear method is mainstream in Knowledge Graph Embedding (KGE), aiming to learn low-dimensional representations for entities and relations.
Previous works have mainly identified six important relation patterns, such as non-commutativity.
We propose a corresponding bilinear model, Scaling Translation and Rotation (STaR), consisting of a scaling part and a combined translation-and-rotation part.
- Score: 20.297699026433065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The bilinear method is mainstream in Knowledge Graph Embedding (KGE), aiming
to learn low-dimensional representations for entities and relations in
Knowledge Graph (KG) and complete missing links. Most existing works aim to
find patterns between relations and model them effectively to accomplish this
task. Previous works have mainly identified six important patterns, such as
non-commutativity. Although some bilinear methods succeed in modeling these
patterns, they neglect to handle 1-to-N, N-to-1, and N-to-N relations (or
complex relations) concurrently, which hurts their expressiveness. To this end,
we integrate scaling with the combination of translation and rotation, which
handle complex relations and relation patterns, respectively; here, scaling is
a simplification of projection. We therefore propose a corresponding bilinear
model, Scaling Translation and Rotation (STaR), consisting of these two parts.
Besides, since translation cannot be incorporated into the bilinear model
directly, we introduce a translation matrix as its equivalent. Theoretical
analysis proves that STaR is capable of modeling all patterns and handling
complex relations simultaneously, and experiments demonstrate its effectiveness
on commonly used benchmarks for link prediction.
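As a rough illustration of how the three operations can be composed in a score function, the sketch below applies per-block scaling, 2-D rotation, and translation to the head embedding and scores the result against the tail. The function name, block layout, and dot-product scoring are assumptions made for illustration; they are not the paper's exact bilinear matrix construction.

```python
import numpy as np

def star_like_score(h, t, scale, theta, trans):
    """Illustrative score for a (head, relation, tail) triple where the
    relation is parameterised by per-block scaling, 2-D rotation and
    translation.  A sketch of the operations STaR combines, not the
    paper's exact bilinear matrix.

    h, t  : entity embeddings, shape (k, 2) -- k blocks of 2-D coordinates
    scale : per-block scaling factors, shape (k,)
    theta : per-block rotation angles, shape (k,)
    trans : per-block translations, shape (k, 2)
    """
    cos, sin = np.cos(theta), np.sin(theta)
    rotated = np.stack([cos * h[:, 0] - sin * h[:, 1],
                        sin * h[:, 0] + cos * h[:, 1]], axis=1)
    transformed = scale[:, None] * rotated + trans  # scale, rotate, translate
    return float(np.sum(transformed * t))           # higher = better fit

rng = np.random.default_rng(0)
k = 4
print(star_like_score(rng.normal(size=(k, 2)), rng.normal(size=(k, 2)),
                      rng.normal(size=k), rng.normal(size=k),
                      rng.normal(size=(k, 2))))
```

The translation term is precisely what a plain bilinear product cannot express directly, which is why the paper introduces an equivalent translation matrix.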
Related papers
- Graph-Dictionary Signal Model for Sparse Representations of Multivariate Data [49.77103348208835]
We define a novel Graph-Dictionary signal model, where a finite set of graphs characterizes relationships in the data distribution through a weighted sum of their Laplacians.
We propose a framework to infer the graph dictionary representation from observed data, along with a bilinear generalization of the primal-dual splitting algorithm to solve the learning problem.
We exploit graph-dictionary representations in a motor imagery decoding task on brain activity data, where we classify imagined motion more accurately than standard methods.
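A minimal sketch of the weighted-sum-of-Laplacians idea, using combinatorial Laplacians and fixed toy weights; the paper infers the dictionary and weights from data, so everything below is illustrative only.

```python
import numpy as np

def combined_laplacian(adjacencies, weights):
    """Combine a dictionary of graphs into one operator by taking a
    weighted sum of their combinatorial Laplacians (illustrative)."""
    laplacians = [np.diag(A.sum(axis=1)) - A for A in adjacencies]
    return sum(w * L for w, L in zip(weights, laplacians))

# two toy 3-node graphs acting as dictionary atoms
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], dtype=float)
print(combined_laplacian([A1, A2], weights=[0.7, 0.3]))
```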
arXiv Detail & Related papers (2024-11-08T17:40:43Z) - Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose the Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z) - Linearity of Relation Decoding in Transformer Language Models [82.47019600662874]
Much of the knowledge encoded in transformer language models (LMs) may be expressed in terms of relations.
We show that, for a subset of relations, this computation is well-approximated by a single linear transformation on the subject representation.
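As a toy illustration of that claim, the snippet below fits a single affine map from "subject" representations to "object" representations by ordinary least squares; the synthetic data merely stands in for transformer hidden states and is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 200
S = rng.normal(size=(n, d))                    # stand-in "subject" representations
W_true, b_true = rng.normal(size=(d, d)), rng.normal(size=d)
O = S @ W_true.T + b_true                      # stand-in "object" representations

# fit o ~= W s + b with least squares on bias-augmented inputs
S_aug = np.hstack([S, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(S_aug, O, rcond=None)
W_hat, b_hat = coef[:d].T, coef[d]
print(np.allclose(W_hat, W_true), np.allclose(b_hat, b_true))  # True True
```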
arXiv Detail & Related papers (2023-08-17T17:59:19Z) - ProjB: An Improved Bilinear Biased ProjE model for Knowledge Graph
Completion [1.5576879053213302]
This work improves on the ProjE KGE model, chosen for its low computational complexity and high potential for improvement.
Experimental results on benchmark Knowledge Graphs (KGs) such as FB15K and WN18 show that the proposed approach outperforms state-of-the-art models on the entity prediction task.
arXiv Detail & Related papers (2022-08-15T18:18:05Z) - TranS: Transition-based Knowledge Graph Embedding with Synthetic
Relation Representation [14.759663752868487]
We propose a novel transition-based method, TranS, for knowledge graph embedding.
The single relation vector used in traditional scoring patterns is replaced with a synthetic relation representation, which resolves the limitations of a single relation vector effectively and efficiently.
Experiments on a large knowledge graph dataset, ogbl-wikikg2, show that our model achieves state-of-the-art results.
arXiv Detail & Related papers (2022-04-18T16:55:25Z) - Learning Representations of Entities and Relations [0.0]
This thesis focuses on improving knowledge graph representation with the aim of tackling the link prediction task.
The first contribution is HypER, a convolutional model which simplifies prior convolutional approaches and improves link prediction performance.
The second contribution is TuckER, a relatively straightforward linear model, which, at the time of its introduction, obtained state-of-the-art link prediction performance; a minimal sketch of its scoring function appears below.
The third contribution is MuRP, the first multi-relational graph representation model embedded in hyperbolic space.
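TuckER's linear scoring can be read as contracting a shared core tensor with the subject, relation, and object embeddings; the minimal sketch below uses toy dimensions and illustrative variable names.

```python
import numpy as np

def tucker_score(W, e_s, w_r, e_o):
    """Score a triple as the core tensor W contracted with the subject,
    relation and object embeddings (TuckER-style linear scoring)."""
    return np.einsum('ijk,i,j,k->', W, e_s, w_r, e_o)

rng = np.random.default_rng(0)
de, dr = 5, 3                              # entity / relation dimensions
W = rng.normal(size=(de, dr, de))          # core tensor shared across triples
print(tucker_score(W, rng.normal(size=de), rng.normal(size=dr),
                   rng.normal(size=de)))
```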
arXiv Detail & Related papers (2022-01-31T09:24:43Z) - RatE: Relation-Adaptive Translating Embedding for Knowledge Graph
Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
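One plausible reading of a "weighted product in complex space" is ordinary complex multiplication whose four real cross terms receive relation-specific weights; the sketch below follows that reading and is an assumption, not the paper's exact operator.

```python
import numpy as np

def weighted_complex_product(h_re, h_im, r_re, r_im, w):
    """Relation-adaptive variant of complex multiplication: the four cross
    terms get relation-specific weights w = (w1, w2, w3, w4).  With
    w = (1, -1, 1, 1) this reduces to the standard complex product."""
    w1, w2, w3, w4 = w
    real = w1 * h_re * r_re + w2 * h_im * r_im
    imag = w3 * h_re * r_im + w4 * h_im * r_re
    return real, imag

def rate_like_score(h, r, t, w):
    """Translate the head by the weighted product with r, then score by
    negative L1 distance to the tail (illustrative only)."""
    real, imag = weighted_complex_product(h.real, h.imag, r.real, r.imag, w)
    return -float(np.sum(np.abs(real - t.real) + np.abs(imag - t.imag)))

rng = np.random.default_rng(0)
d = 4
h, r, t = (rng.normal(size=d) + 1j * rng.normal(size=d) for _ in range(3))
print(rate_like_score(h, r, t, w=(1.0, -1.0, 1.0, 1.0)))
```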
arXiv Detail & Related papers (2020-10-10T01:30:30Z) - DensE: An Enhanced Non-commutative Representation for Knowledge Graph
Embedding with Adaptive Semantic Hierarchy [4.607120217372668]
We develop a novel knowledge graph embedding method, named DensE, to provide an improved modeling scheme for the complex composition patterns of relations.
Our method decomposes each relation into an SO(3) group-based rotation operator and a scaling operator in three-dimensional (3-D) Euclidean space.
Experimental results on multiple benchmark knowledge graphs show that DensE outperforms the current state-of-the-art models for missing link prediction.
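A minimal sketch of the rotate-then-scale idea in 3-D, using the Rodrigues formula for the SO(3) rotation and a negative-distance score; the concrete parameterisation and scoring are assumptions for illustration rather than the paper's exact functions.

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix in SO(3) for a given rotation axis and angle."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def dense_like_score(h, t, axis, angle, scale):
    """Rotate then scale the 3-D head embedding and score by negative
    distance to the tail embedding (illustrative)."""
    transformed = scale * (rodrigues(axis, angle) @ h)
    return -float(np.linalg.norm(transformed - t))

rng = np.random.default_rng(0)
print(dense_like_score(rng.normal(size=3), rng.normal(size=3),
                       axis=np.array([0.0, 0.0, 1.0]), angle=0.5, scale=1.2))
```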
arXiv Detail & Related papers (2020-08-11T06:45:50Z) - LineaRE: Simple but Powerful Knowledge Graph Embedding for Link
Prediction [7.0294164380111015]
We propose a novel embedding model, namely LineaRE, which is capable of modeling four connectivity patterns and four mapping properties.
Experimental results on multiple widely used real-world datasets show that the proposed LineaRE model significantly outperforms existing state-of-the-art models for link prediction tasks.
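LineaRE can be read as a linear-regression-style score in which the head and tail embeddings are weighted element-wise and offset by a bias vector; the sketch below uses that reading with placeholder parameter names and an L1 distance, which are assumptions rather than the paper's notation.

```python
import numpy as np

def lineare_like_score(h, t, w1, w2, b):
    """Linear-regression-style score: element-wise weights on head and tail
    plus a bias, scored by negative L1 distance (illustrative sketch)."""
    return -float(np.sum(np.abs(w1 * h + b - w2 * t)))

rng = np.random.default_rng(0)
d = 6
h, t, w1, w2, b = (rng.normal(size=d) for _ in range(5))
print(lineare_like_score(h, t, w1, w2, b))
```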
arXiv Detail & Related papers (2020-04-21T14:19:43Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, which are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z) - Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z)