RatE: Relation-Adaptive Translating Embedding for Knowledge Graph
Completion
- URL: http://arxiv.org/abs/2010.04863v1
- Date: Sat, 10 Oct 2020 01:30:30 GMT
- Title: RatE: Relation-Adaptive Translating Embedding for Knowledge Graph
Completion
- Authors: Hao Huang, Guodong Long, Tao Shen, Jing Jiang, Chengqi Zhang
- Abstract summary: We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
- Score: 51.64061146389754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many graph embedding approaches have been proposed for knowledge graph
completion via link prediction. Among those, translating embedding approaches
enjoy the advantages of light-weight structure, high efficiency and great
interpretability. Especially when extended to complex vector space, they show
the capability in handling various relation patterns including symmetry,
antisymmetry, inversion and composition. However, previous translating
embedding approaches defined in complex vector space suffer from two main
issues: 1) the representation and modeling capacities of the model are limited
by a translation function restricted to the strict multiplication of two
complex numbers; and 2) the embedding ambiguity caused by one-to-many relations
is not explicitly alleviated. In this paper, we propose a relation-adaptive
translation function built upon a novel weighted product in complex space,
where the weights are learnable, relation-specific and independent of the
embedding size. The translation function requires only eight extra scalar
parameters per relation, yet it improves expressive power and alleviates the
embedding ambiguity problem. Based on this
function, we then present our Relation-adaptive translating Embedding (RatE)
approach to score each graph triple. Moreover, a novel negative sampling method
is proposed to utilize both prior knowledge and self-adversarial learning for
effective optimization. Experiments verify that RatE achieves state-of-the-art
performance on four link prediction benchmarks.
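The relation-adaptive weighted product can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the layout of the eight relation-specific weights (one per real/imaginary cross term of each output component) is an assumption, and the L1 distance in the score is borrowed from the common RotatE-style presentation.

```python
import numpy as np

def weighted_complex_product(h, r, w):
    """Relation-adaptive product of complex vectors h and r.

    h, r : complex arrays of shape (d,) -- entity and relation embeddings.
    w    : 8 relation-specific scalars weighting the four real/imaginary
           cross terms of each output component (assumed layout).
    The standard complex product is recovered with w = [1, 0, 0, -1, 0, 1, 1, 0].
    """
    a, b, c, d = h.real, h.imag, r.real, r.imag
    real = w[0]*a*c + w[1]*a*d + w[2]*b*c + w[3]*b*d
    imag = w[4]*a*c + w[5]*a*d + w[6]*b*c + w[7]*b*d
    return real + 1j*imag

def rate_score(h, r, t, w):
    """Translating score: negative distance between h (x) r and t."""
    return -np.linalg.norm(weighted_complex_product(h, r, w) - t, ord=1)
```

Because the eight weights are scalars shared across all embedding dimensions, the parameter overhead per relation stays constant regardless of the embedding size.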
Related papers
- Knowledge Composition using Task Vectors with Learned Anisotropic Scaling [51.4661186662329]
We introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level.
We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters.
We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives.
arXiv Detail & Related papers (2024-07-03T07:54:08Z)
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
arXiv Detail & Related papers (2023-12-18T09:58:19Z)
- Disentangled Representation Learning with Transmitted Information Bottleneck [57.22757813140418]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
arXiv Detail & Related papers (2023-11-03T03:18:40Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- ProjB: An Improved Bilinear Biased ProjE model for Knowledge Graph Completion [1.5576879053213302]
This work improves on the ProjE KGE model, which offers low computational complexity and high potential for improvement.
Experimental results on benchmark Knowledge Graphs (KGs) such as FB15K and WN18 show that the proposed approach outperforms the state-of-the-art models in entity prediction task.
arXiv Detail & Related papers (2022-08-15T18:18:05Z)
- TransHER: Translating Knowledge Graph Embedding with Hyper-Ellipsoidal Restriction [14.636054717485207]
We propose a novel score function TransHER for knowledge graph embedding.
Our model first maps entities onto two separate hyper-ellipsoids and then conducts a relation-specific translation on one of them.
Experimental results show that TransHER can achieve state-of-the-art performance and generalize to datasets in different domains and scales.
arXiv Detail & Related papers (2022-04-27T22:49:27Z)
- STaR: Knowledge Graph Embedding by Scaling, Translation and Rotation [20.297699026433065]
Bilinear methods are mainstream in Knowledge Graph Embedding (KGE), aiming to learn low-dimensional representations for entities and relations.
Previous works have mainly identified six important relation patterns, such as non-commutativity.
We propose a corresponding bilinear model Scaling Translation and Rotation (STaR) consisting of the above two parts.
arXiv Detail & Related papers (2022-02-15T02:06:22Z)
- PairRE: Knowledge Graph Embeddings via Paired Relation Vectors [24.311361524872257]
We propose PairRE, a model with paired vectors for each relation representation.
It is capable of encoding three important relation patterns, symmetry/antisymmetry, inverse and composition.
We set a new state-of-the-art on two knowledge graph datasets of the challenging Open Graph Benchmark.
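For context, PairRE's paired-vector scoring can be sketched as follows; this is a minimal version following the common presentation (L2-normalized entities, L1 distance), and details such as the norm choice may differ from the paper.

```python
import numpy as np

def pairre_score(h, r_head, r_tail, t):
    """PairRE-style score: each relation holds a pair of vectors that
    rescale the head and tail entities before a translation-style distance.
    Entities are L2-normalized onto the unit sphere."""
    h = h / np.linalg.norm(h)
    t = t / np.linalg.norm(t)
    return -np.linalg.norm(h * r_head - t * r_tail, ord=1)
```

Note that choosing r_head == r_tail makes the score symmetric in h and t, which is how the paired vectors encode symmetric relations.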
arXiv Detail & Related papers (2020-11-07T16:09:03Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.