SpaceE: Knowledge Graph Embedding by Relational Linear Transformation in
the Entity Space
- URL: http://arxiv.org/abs/2204.10245v1
- Date: Thu, 21 Apr 2022 16:26:20 GMT
- Title: SpaceE: Knowledge Graph Embedding by Relational Linear Transformation in
the Entity Space
- Authors: Jinxing Yu, Yunfeng Cai, Mingming Sun, Ping Li
- Abstract summary: In knowledge graphs, different entities may have a relation with the same entity.
Such a non-injective relation pattern cannot be well modeled by existing translation distance based KGE methods.
We propose a translation distance-based KGE method called SpaceE to model relations as linear transformations.
- Score: 29.298981273389217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Translation distance based knowledge graph embedding (KGE) methods, such as
TransE and RotatE, model the relation in knowledge graphs as translation or
rotation in the vector space. Both translation and rotation are injective; that
is, they map different vectors to different vectors. In knowledge
graphs, different entities may have a relation with the
same entity; for example, many actors starred in one movie. Such a
non-injective relation pattern cannot be well modeled by the translation or
rotation operations in existing translation distance based KGE methods. To
tackle the challenge, we propose a translation distance-based KGE method called
SpaceE to model relations as linear transformations. SpaceE embeds
both entities and relations in knowledge graphs as matrices and
naturally models non-injective relations with singular linear transformations.
We theoretically demonstrate that SpaceE is a fully expressive model with the
ability to infer multiple desired relation patterns, including symmetry,
skew-symmetry, inversion, Abelian composition, and non-Abelian composition.
Experimental results on link prediction datasets illustrate that SpaceE
substantially outperforms many previous translation distance based knowledge
graph embedding methods, especially on datasets with many non-injective
relations. The code is available on the PaddlePaddle deep learning
platform (https://www.paddlepaddle.org.cn).
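A minimal NumPy sketch (our illustration, not the paper's code; SpaceE embeds entities as matrices, while toy 2-D vectors are used here) of why translation and rotation are injective but a singular linear map is not:

```python
import numpy as np

h1 = np.array([1.0, 0.0])
h2 = np.array([0.0, 1.0])

# Translation (TransE-style) is injective: distinct inputs stay distinct.
r = np.array([0.5, -0.5])
print(np.allclose(h1 + r, h2 + r))       # False

# Rotation (RotatE-style; a 2-D rotation matrix here) is also injective.
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R @ h1, R @ h2))       # False

# A singular (rank-1) linear map can send different entities to the same
# image, matching a non-injective relation such as "starred_in".
M = np.array([[1.0, 1.0],
              [0.0, 0.0]])
print(M @ h1, M @ h2)                    # both map to [1. 0.]
```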
Related papers
- Linearity of Relation Decoding in Transformer Language Models [82.47019600662874]
Much of the knowledge encoded in transformer language models (LMs) may be expressed in terms of relations.
We show that, for a subset of relations, this computation is well-approximated by a single linear transformation on the subject representation.
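A hedged sketch of the linear-relation claim: fit a single affine map W s + b from subject to object representations by least squares. The random representations below are stand-ins, not real LM hidden states:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 200
S = rng.normal(size=(n, d))                        # subject representations
W_true = rng.normal(size=(d, d)) / np.sqrt(d)      # hidden "relation" map
O = S @ W_true.T + 0.01 * rng.normal(size=(n, d))  # object representations

# Fit O ~= S W^T + b with one augmented least-squares solve.
S_aug = np.hstack([S, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(S_aug, O, rcond=None)
W_hat, b_hat = coef[:-1].T, coef[-1]
resid = np.linalg.norm(S @ W_hat.T + b_hat - O) / np.linalg.norm(O)
print("relative fit error:", resid)                # small => linear fit works
```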
arXiv Detail & Related papers (2023-08-17T17:59:19Z)
- PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object Tracking? [62.997667081978825]
We encode 3D detections as nodes in a graph, where spatial and temporal pairwise relations among objects are encoded via localized polar coordinates on graph edges.
This allows our graph neural network to learn to effectively encode temporal and spatial interactions.
We establish a new state-of-the-art on nuScenes dataset and, more importantly, show that our method, PolarMOT, generalizes remarkably well across different locations.
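An illustrative sketch (assumed form, not the authors' code) of encoding a pairwise spatial relation as localized polar coordinates on a graph edge:

```python
import numpy as np

def polar_edge(src_xy, src_yaw, dst_xy):
    """Relative position of dst in src's local polar frame (range, bearing)."""
    delta = np.asarray(dst_xy) - np.asarray(src_xy)
    rng = np.linalg.norm(delta)
    bearing = np.arctan2(delta[1], delta[0]) - src_yaw       # heading-relative
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))   # wrap to [-pi, pi]
    return rng, bearing

# A detection at (1, 1) seen from a detection at the origin facing +y.
print(polar_edge((0.0, 0.0), np.pi / 2, (1.0, 1.0)))
```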
arXiv Detail & Related papers (2022-08-03T10:06:56Z)
- HDGT: Heterogeneous Driving Graph Transformer for Multi-Agent Trajectory Prediction via Scene Encoding [76.9165845362574]
We propose a backbone modelling the driving scene as a heterogeneous graph with different types of nodes and edges.
For spatial relation encoding, the coordinates of each node and of its in-edges are expressed in a local node-centric coordinate system.
Experimental results show that HDGT achieves state-of-the-art performance for the task of trajectory prediction.
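A minimal sketch of the node-centric encoding idea, assuming a 2-D pose with heading; the paper's exact feature layout may differ:

```python
import numpy as np

def to_local_frame(center_xy, center_yaw, points_xy):
    """Map world-frame points into the centre node's local frame."""
    c, s = np.cos(-center_yaw), np.sin(-center_yaw)
    R = np.array([[c, -s], [s, c]])      # rotation by -yaw
    return (np.asarray(points_xy) - np.asarray(center_xy)) @ R.T

neighbours = [(2.0, 0.0), (0.0, 2.0)]
print(to_local_frame((1.0, 1.0), np.pi / 2, neighbours))
```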
arXiv Detail & Related papers (2022-04-30T07:08:30Z)
- TransHER: Translating Knowledge Graph Embedding with Hyper-Ellipsoidal Restriction [14.636054717485207]
We propose a novel score function TransHER for knowledge graph embedding.
Our model first maps entities onto two separate hyper-ellipsoids and then conducts a relation-specific translation on one of them.
Experimental results show that TransHER can achieve state-of-the-art performance and generalize to datasets in different domains and scales.
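A loose sketch under stated assumptions (the paper's exact score function may differ): project head and tail onto hyper-ellipsoids via axis-scaled normalisation, translate the head by a relation vector, and score with a negative L1 distance:

```python
import numpy as np

def onto_ellipsoid(x, axes):
    """Scale x so that it lies on the ellipsoid with the given semi-axes."""
    return x / np.linalg.norm(x / axes)   # sum((x_i/a_i)^2) becomes 1

rng = np.random.default_rng(0)
d = 8
h, t, r = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
axes_h = np.abs(rng.normal(size=d)) + 0.5
axes_t = np.abs(rng.normal(size=d)) + 0.5

score = -np.abs(onto_ellipsoid(h, axes_h) + r - onto_ellipsoid(t, axes_t)).sum()
print(score)
```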
arXiv Detail & Related papers (2022-04-27T22:49:27Z)
- STaR: Knowledge Graph Embedding by Scaling, Translation and Rotation [20.297699026433065]
Bilinear methods are mainstream in Knowledge Graph Embedding (KGE), aiming to learn low-dimensional representations for entities and relations.
Previous works have identified six important relation patterns, such as non-commutativity.
We propose a corresponding bilinear model, Scaling Translation and Rotation (STaR), that combines these transformations.
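An assumed-form sketch combining the three named operations on a complex-valued embedding; STaR's actual bilinear formulation may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
h = rng.normal(size=d) + 1j * rng.normal(size=d)   # head entity
t = rng.normal(size=d) + 1j * rng.normal(size=d)   # tail entity

scale = np.abs(rng.normal(size=d))                 # per-dimension scaling
rot = np.exp(1j * rng.uniform(-np.pi, np.pi, d))   # unit-modulus rotation
trans = rng.normal(size=d) + 1j * rng.normal(size=d)

# Apply scaling, rotation, and translation, then score by distance to tail.
score = -np.linalg.norm(scale * rot * h + trans - t)
print(score)
```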
arXiv Detail & Related papers (2022-02-15T02:06:22Z)
- BiQUE: Biquaternionic Embeddings of Knowledge Graphs [9.107095800991333]
Existing knowledge graph embeddings (KGEs) compactly encode multi-relational knowledge graphs (KGs).
It is crucial for KGE models to unify multiple geometric transformations so as to fully cover the multifarious relations in KGs.
We propose BiQUE, a novel model that employs biquaternions to integrate multiple geometric transformations.
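An illustrative sketch of the algebraic ingredient: a biquaternion is a quaternion with complex coefficients, and the Hamilton product formula carries over unchanged. How BiQUE turns this product into a triple score is not shown here:

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two (bi)quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

rng = np.random.default_rng(0)
p = rng.normal(size=4) + 1j * rng.normal(size=4)   # complex coefficients
q = rng.normal(size=4) + 1j * rng.normal(size=4)
print(hamilton(p, q))
```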
arXiv Detail & Related papers (2021-09-29T13:05:32Z)
- A Differential Geometry Perspective on Orthogonal Recurrent Models [56.09491978954866]
We employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.
We show that orthogonal RNNs may be viewed as optimizing in the space of divergence-free vector fields.
Motivated by this observation, we study a new recurrent model, which spans the entire space of vector fields.
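A minimal sketch (not the paper's construction) of one standard way to obtain an orthogonal recurrent matrix: exponentiate a skew-symmetric matrix, which keeps hidden-state norms exactly stable:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 6
A = rng.normal(size=(d, d))
S = A - A.T                              # skew-symmetric: S^T = -S
W = expm(S)                              # exp of skew-symmetric is orthogonal
print(np.allclose(W.T @ W, np.eye(d)))   # True

h = rng.normal(size=d)
for _ in range(100):
    h = W @ h                            # norm is preserved at every step
print(np.linalg.norm(h))
```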
arXiv Detail & Related papers (2021-02-18T19:39:22Z)
- Motif Learning in Knowledge Graphs Using Trajectories Of Differential Equations [14.279419014064047]
Knowledge Graph Embeddings (KGEs) have shown promising performance on link prediction tasks.
Many KGEs use flat geometry, which renders them incapable of preserving complex structures.
We propose a neuro-differential KGE that embeds the nodes of a KG on the trajectories of Ordinary Differential Equations (ODEs).
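A toy sketch of the trajectory idea with a stand-in linear vector field and forward-Euler integration; the paper's learned dynamics and scoring are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = -0.1 * np.eye(d) + 0.05 * rng.normal(size=(d, d))  # stand-in vector field

def embed_on_trajectory(x0, t, dt=0.01):
    """Follow dx/dt = A x from x0 for time t; the end point is the embedding."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t / dt)):
        x = x + dt * (A @ x)
    return x

x0 = rng.normal(size=d)
print(embed_on_trajectory(x0, t=1.0))
```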
arXiv Detail & Related papers (2020-10-13T20:53:17Z)
- RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
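An assumed-form sketch of a weighted complex product: ordinary complex multiplication (a+bi)(c+di) = (ac-bd) + (ad+bc)i with learnable weights on its four bilinear terms; RatE's exact weighting may differ:

```python
import numpy as np

def weighted_product(h, r, w):
    """Element-wise weighted product of complex vectors h and r."""
    a, b, c, d = h.real, h.imag, r.real, r.imag
    w1, w2, w3, w4 = w
    return (w1 * a * c + w2 * b * d) + 1j * (w3 * a * d + w4 * b * c)

rng = np.random.default_rng(0)
h = rng.normal(size=4) + 1j * rng.normal(size=4)
r = rng.normal(size=4) + 1j * rng.normal(size=4)
# Weights (1, -1, 1, 1) recover the standard complex product h * r.
print(np.allclose(weighted_product(h, r, (1.0, -1.0, 1.0, 1.0)), h * r))
```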
arXiv Detail & Related papers (2020-10-10T01:30:30Z)
- 5* Knowledge Graph Embeddings with Projective Transformations [13.723120574076127]
We present a novel knowledge graph embedding model (5*E) in projective geometry.
It supports multiple simultaneous transformations - specifically inversion, reflection, translation, rotation, and homothety.
It outperforms existing approaches on the most widely used link prediction benchmarks.
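A sketch of a per-dimension projective (Möbius) transformation f(z) = (az + b)/(cz + d), a single family that subsumes translation, rotation, homothety, and inversion:

```python
import numpy as np

def mobius(z, a, b, c, d):
    """Projective transformation of the complex line, applied element-wise."""
    return (a * z + b) / (c * z + d)

z = np.array([1 + 1j, 2 - 1j])
print(mobius(z, a=1, b=2 + 0j, c=0, d=1))                  # translation by 2
print(mobius(z, a=np.exp(1j * np.pi / 4), b=0, c=0, d=1))  # rotation by pi/4
print(mobius(z, a=0, b=1, c=1, d=0))                       # inversion 1/z
```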
arXiv Detail & Related papers (2020-06-08T23:28:07Z)
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
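A toy sketch of the bijective-feature-map ingredient: an additive coupling layer that is exactly invertible; the linear ICA unmixing that would follow in feature space is omitted:

```python
import numpy as np

def coupling_forward(x):
    """Additive coupling: shift one half of x by a function of the other."""
    x1, x2 = np.split(x, 2, axis=-1)
    return np.concatenate([x1, x2 + np.tanh(x1)], axis=-1)

def coupling_inverse(y):
    """Exact inverse: subtract the same shift back out."""
    y1, y2 = np.split(y, 2, axis=-1)
    return np.concatenate([y1, y2 - np.tanh(y1)], axis=-1)

x = np.random.default_rng(0).normal(size=(3, 4))
y = coupling_forward(x)
print(np.allclose(coupling_inverse(y), x))   # True: the map is bijective
```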
arXiv Detail & Related papers (2020-02-18T17:58:07Z)