ProjB: An Improved Bilinear Biased ProjE model for Knowledge Graph
Completion
- URL: http://arxiv.org/abs/2209.02390v1
- Date: Mon, 15 Aug 2022 18:18:05 GMT
- Title: ProjB: An Improved Bilinear Biased ProjE model for Knowledge Graph
Completion
- Authors: Mojtaba Moattari, Sahar Vahdati, Farhana Zulkernine
- Abstract summary: This work improves on ProjE KGE, chosen for its low computational complexity and high potential for model improvement.
Experimental results on benchmark Knowledge Graphs (KGs) such as FB15K and WN18 show that the proposed approach outperforms state-of-the-art models on the entity prediction task.
- Score: 1.5576879053213302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph Embedding (KGE) methods have gained enormous attention from a
wide range of AI communities including Natural Language Processing (NLP) for
text generation, classification and context induction. Embedding a huge number
of inter-relationships in terms of a small number of dimensions requires proper
modeling in both cognitive and computational aspects. Numerous objective
functions addressing the cognitive and computational aspects of natural
languages have recently been developed, among them the state-of-the-art methods of
linearity, bilinearity, manifold-preserving kernels, projection-subspace, and
analogical inference. However, the major challenge of such models lies in their
loss functions, which tie the dimension of relation embeddings to the
corresponding entity dimension. This leads to inaccurate prediction of
relations among entities when their counterparts are estimated incorrectly.
ProjE KGE, introduced by Shi and Weninger, is chosen for improvement in this
work owing to its low computational complexity and high potential for
refinement; the model is extended across all translative and bilinear
interactions while capturing entity nonlinearity.
Experimental results on benchmark Knowledge Graphs (KGs) such as FB15K and WN18
show that the proposed approach outperforms state-of-the-art models, including
linear, bilinear, and other recent powerful methods, on the entity prediction
task. In addition, a parallel processing structure is proposed for the
model in order to improve the scalability on large KGs. The effects of
different adaptive clustering and newly proposed sampling approaches are also
analyzed, and prove effective in improving the accuracy of knowledge
graph completion.
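The translative-plus-bilinear scoring the abstract describes can be sketched as follows. This is a minimal NumPy illustration under my own naming and a simplified combination operator, an assumption for clarity rather than the paper's released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_entities = 8, 5  # toy sizes; all names here are illustrative

E = rng.normal(size=(n_entities, d))         # entity embeddings
r = rng.normal(size=d)                       # one relation embedding
De = np.diag(rng.normal(size=d))             # ProjE-style diagonal for entities
Dr = np.diag(rng.normal(size=d))             # ProjE-style diagonal for relations
W = rng.normal(size=(d, d))                  # added bilinear interaction matrix
bc = rng.normal(size=d)                      # combination bias

def score_tails(h):
    """Score every candidate tail for (h, r, ?): a ProjE-like translative
    combination (De h + Dr r + bc) through a tanh nonlinearity, plus a
    bilinear term h^T W t, squashed by a sigmoid."""
    combined = np.tanh(De @ h + Dr @ r + bc)   # translative part (nonlinear)
    translative = E @ combined                 # dot with each candidate tail
    bilinear = E @ (W.T @ h)                   # h^T W t for each candidate t
    return 1.0 / (1.0 + np.exp(-(translative + bilinear)))

scores = score_tails(E[0])
print(scores.shape)  # (5,) -- one probability-like score per candidate tail
```

In a real model the candidate scores would feed a ranking loss over negative samples; here they simply illustrate how the two interaction types combine.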
Related papers
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z) - ConvD: Attention Enhanced Dynamic Convolutional Embeddings for Knowledge
Graph Completion [11.223893397502431]
We propose a novel dynamic convolutional embedding model ConvD for knowledge graph completion.
Our proposed model consistently outperforms the state-of-the-art baseline methods.
arXiv Detail & Related papers (2023-12-11T07:37:58Z) - Location Sensitive Embedding for Knowledge Graph Reasoning [0.0]
A key challenge in translational distance models is their inability to effectively differentiate between 'head' and 'tail' entities in graphs.
To address this problem, a novel location-sensitive embedding (LSE) method has been developed.
LSE innovatively modifies the head entity using relation-specific mappings, conceptualizing relations as linear transformations rather than mere translations.
Experiments conducted on four large-scale KG datasets for link prediction show that LSEd either outperforms or is competitive with state-of-the-art related works.
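The LSE idea of treating a relation as a linear transformation of the head plus a translation can be sketched in a few lines. The names and the exact distance form below are my own illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # toy embedding dimension

M_r = rng.normal(size=(d, d))          # relation-specific linear map for heads
r = rng.normal(size=d)                 # relation translation vector
h, t = rng.normal(size=d), rng.normal(size=d)

def lse_score(h, r, M_r, t):
    """Distance-style score: the relation acts as a linear transformation
    of the head entity followed by a translation, rather than a translation
    alone (setting M_r = I recovers a TransE-like score)."""
    return -np.linalg.norm(M_r @ h + r - t)

print(lse_score(h, r, M_r, t))  # higher (less negative) = more plausible
```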
arXiv Detail & Related papers (2023-12-01T22:35:19Z) - Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for distributed training of large models.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z) - Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks [2.1797442801107056]
Undirected Weighted Network (UWN) is commonly found in big data-related applications.
Existing models fail to capture either its intrinsic symmetry or its low data density.
A Proximal Symmetric Non-negative Latent-factor-analysis model is proposed to address both.
arXiv Detail & Related papers (2023-06-06T13:03:24Z) - Understanding Augmentation-based Self-Supervised Representation Learning
via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - STaR: Knowledge Graph Embedding by Scaling, Translation and Rotation [20.297699026433065]
The bilinear method is mainstream in Knowledge Graph Embedding (KGE), aiming to learn low-dimensional representations for entities and relations.
Previous works have mainly discovered six important patterns, such as non-commutativity.
We propose a corresponding bilinear model, Scaling, Translation and Rotation (STaR), combining a bilinear (scaling and rotation) part with a translational part.
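A scaling-rotation-translation score of this flavor can be sketched as below. This is a simplified toy parameterization of the idea (per-dimension scaling, 2-D rotation on coordinate pairs, then translation), not STaR's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4  # embedding dimension, kept even so entries pair up for 2-D rotations

h, t = rng.normal(size=d), rng.normal(size=d)
s = rng.normal(size=d)             # per-dimension scaling
theta = rng.normal(size=d // 2)    # rotation angle per coordinate pair
r = rng.normal(size=d)             # translation vector

def star_like_score(h, t):
    """Illustrative score: scale h, rotate each consecutive coordinate
    pair by theta, translate by r, then measure distance to t."""
    x = (s * h).reshape(-1, 2)                   # scaling, paired coords
    c, sn = np.cos(theta), np.sin(theta)
    x = np.stack([c * x[:, 0] - sn * x[:, 1],    # 2-D rotation per pair
                  sn * x[:, 0] + c * x[:, 1]], axis=1).reshape(-1)
    return -np.linalg.norm(x + r - t)            # translation + distance

print(star_like_score(h, t))
```

The scaling and rotation together play the role of the bilinear part, while `r` carries the translational part.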
arXiv Detail & Related papers (2022-02-15T02:06:22Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - RatE: Relation-Adaptive Translating Embedding for Knowledge Graph
Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
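The relation-adaptive weighted product in complex space can be sketched as a generalized complex multiplication whose four part-wise weights are relation-specific. The weight layout below is my own simplified assumption, not RatE's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4  # toy embedding dimension

h = rng.normal(size=d) + 1j * rng.normal(size=d)  # complex head embedding
r = rng.normal(size=d) + 1j * rng.normal(size=d)  # complex relation embedding
t = rng.normal(size=d) + 1j * rng.normal(size=d)  # complex tail embedding
w = rng.normal(size=4)  # relation-specific weights on the product terms

def weighted_product(a, b, w):
    """Generalized complex product: ordinary complex multiplication is the
    special case w = (1, -1, 1, 1). Relation-specific weights make the
    translation 'relation-adaptive'."""
    re = w[0] * a.real * b.real + w[1] * a.imag * b.imag
    im = w[2] * a.real * b.imag + w[3] * a.imag * b.real
    return re + 1j * im

def rate_like_score(h, r, t):
    # Distance between the adapted head and the tail in complex space.
    return -np.linalg.norm(weighted_product(h, r, w) - t)

print(rate_like_score(h, r, t))
```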
arXiv Detail & Related papers (2020-10-10T01:30:30Z) - LowFER: Low-rank Bilinear Pooling for Link Prediction [4.110108749051657]
We propose a factorized bilinear pooling model, commonly used in multi-modal learning, for better fusion of entities and relations.
Our model naturally generalizes the Tucker-decomposition-based TuckER model, which has been shown to generalize other models.
We evaluate on real-world datasets, reaching on par or state-of-the-art performance.
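Factorized bilinear pooling of this kind can be sketched as two low-rank projections, a Hadamard product, and sum pooling, approximating a full bilinear map at far lower parameter cost. Sizes and names below are toy assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 4, 2  # embedding dimension and factorization rank (toy sizes)

E = rng.normal(size=(5, d))        # candidate tail entity embeddings
h, r = rng.normal(size=d), rng.normal(size=d)
U = rng.normal(size=(d, k * d))    # low-rank factor applied to entities
V = rng.normal(size=(d, k * d))    # low-rank factor applied to relations

def lowfer_like_scores(h, r):
    """Factorized bilinear pooling: elementwise product of the two
    projections, then sum-pooling over each length-k chunk, approximates
    a full bilinear map h^T W_r t with far fewer parameters."""
    z = (U.T @ h) * (V.T @ r)             # (k*d,) Hadamard interaction
    pooled = z.reshape(d, k).sum(axis=1)  # k-size sum pooling -> (d,)
    return E @ pooled                     # one score per candidate tail

print(lowfer_like_scores(h, r).shape)  # (5,)
```

Growing `k` trades parameters for expressiveness; `k = 1` collapses to a purely diagonal (DistMult-like) interaction.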
arXiv Detail & Related papers (2020-08-25T07:33:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.