Location Sensitive Embedding for Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2401.10893v3
- Date: Tue, 30 Jan 2024 03:14:11 GMT
- Title: Location Sensitive Embedding for Knowledge Graph Reasoning
- Authors: Deepak Banerjee, Anjali Ishaan
- Abstract summary: A key challenge in translational distance models is their inability to effectively differentiate between 'head' and 'tail' entities in graphs.
To address this problem, a novel location-sensitive embedding (LSE) method has been developed.
LSE innovatively modifies the head entity using relation-specific mappings, conceptualizing relations as linear transformations rather than mere translations.
Experiments conducted on four large-scale KG datasets for link prediction show that LSE-d either outperforms or is competitive with state-of-the-art related works.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Embedding methods transform the knowledge graph into a continuous,
low-dimensional space, facilitating inference and completion tasks. Existing
methods are mainly divided into two types: translational distance models and
semantic matching models. A key challenge in translational distance models is
their inability to effectively differentiate between 'head' and 'tail' entities
in graphs. To address this problem, a novel location-sensitive embedding (LSE)
method has been developed. LSE innovatively modifies the head entity using
relation-specific mappings, conceptualizing relations as linear transformations
rather than mere translations. The theoretical foundations of LSE, including
its representational capabilities and its connections to existing models, have
been thoroughly examined. A more streamlined variant, LSE-d, which employs a
diagonal matrix for transformations to enhance practical efficiency, is also
proposed. Experiments conducted on four large-scale KG datasets for link
prediction show that LSE-d either outperforms or is competitive with
state-of-the-art related works.
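The abstract does not give the scoring function explicitly. As a minimal sketch, assuming LSE-d scores a triple by applying a relation-specific diagonal linear map to the head entity followed by a translation (the names `d_r`, `b_r` and both functions are illustrative, not taken from the paper):

```python
import numpy as np

# Classic TransE-style scoring: a relation is a pure translation of the head.
def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)

# Hypothetical LSE-d form (assumed from the abstract, not the paper's exact
# equations): the relation acts on the head as a diagonal linear map d_r
# (elementwise scaling), followed by a translation b_r.
def lse_d_score(h, d_r, b_r, t):
    return -np.linalg.norm(d_r * h + b_r - t)

rng = np.random.default_rng(0)
h, t = rng.normal(size=4), rng.normal(size=4)
d_r, b_r = rng.normal(size=4), rng.normal(size=4)

# With d_r fixed to all-ones, LSE-d degenerates to a plain translation,
# illustrating how the linear map generalizes the translational model.
print(lse_d_score(h, np.ones(4), b_r, t) == transe_score(h, b_r, t))  # True
```

Because the diagonal map scales each head dimension independently, head and tail entities are no longer interchangeable under the score, which matches the abstract's motivation of differentiating 'head' from 'tail' roles.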
Related papers
- Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs [57.052160123387104]
We present the Disentangled Graph-Text Learner (DGTL) model, which enhances the reasoning and predictive capabilities of LLMs for text-attributed graphs (TAGs).
Our proposed DGTL model incorporates graph structure information through tailored disentangled graph neural network (GNN) layers.
Experimental evaluations demonstrate the effectiveness of the proposed DGTL model on achieving superior or comparable performance over state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-27T14:00:04Z)
- Improve Transformer Pre-Training with Decoupled Directional Relative Position Encoding and Representation Differentiations [23.2969212998404]
We revisit the Transformer-based pre-trained language models and identify two problems that may limit the expressiveness of the model.
Existing relative position encoding models conflate two heterogeneous kinds of information: relative distance and direction.
We propose two novel techniques to improve pre-trained language models.
arXiv Detail & Related papers (2022-10-09T12:35:04Z)
- ProjB: An Improved Bilinear Biased ProjE model for Knowledge Graph Completion [1.5576879053213302]
This work builds on the ProjE KGE model, chosen for its low computational complexity and high potential for improvement.
Experimental results on benchmark Knowledge Graphs (KGs) such as FB15K and WN18 show that the proposed approach outperforms state-of-the-art models in the entity prediction task.
arXiv Detail & Related papers (2022-08-15T18:18:05Z)
- TranS: Transition-based Knowledge Graph Embedding with Synthetic Relation Representation [14.759663752868487]
We propose a novel transition-based method, TranS, for knowledge graph embedding.
The single relation vector of traditional scoring patterns is replaced with a synthetic relation representation, which addresses these limitations effectively and efficiently.
Experiments on a large knowledge graph dataset, ogbl-wikikg2, show that our model achieves state-of-the-art results.
arXiv Detail & Related papers (2022-04-18T16:55:25Z)
- STaR: Knowledge Graph Embedding by Scaling, Translation and Rotation [20.297699026433065]
The bilinear method is mainstream in Knowledge Graph Embedding (KGE), aiming to learn low-dimensional representations for entities and relations.
Previous works have identified six important relational patterns, such as non-commutativity.
We propose a corresponding bilinear model Scaling Translation and Rotation (STaR) consisting of the above two parts.
arXiv Detail & Related papers (2022-02-15T02:06:22Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- Semantic Correspondence with Transformers [68.37049687360705]
We propose Cost Aggregation with Transformers (CATs) to find dense correspondences between semantically similar images.
We include appearance affinity modelling to disambiguate the initial correlation maps and multi-level aggregation.
We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies.
arXiv Detail & Related papers (2021-06-04T14:39:03Z)
- Weakly supervised segmentation with cross-modality equivariant constraints [7.757293476741071]
Weakly supervised learning has emerged as an appealing alternative to alleviate the need for large labeled datasets in semantic segmentation.
We present a novel learning strategy that leverages self-supervision in a multi-modal image scenario to significantly enhance original CAMs.
Our approach outperforms relevant recent literature under the same learning conditions.
arXiv Detail & Related papers (2021-04-06T13:14:20Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
arXiv Detail & Related papers (2020-10-10T01:30:30Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for image synthesis.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.