Highly Efficient Knowledge Graph Embedding Learning with Orthogonal
Procrustes Analysis
- URL: http://arxiv.org/abs/2104.04676v1
- Date: Sat, 10 Apr 2021 03:55:45 GMT
- Title: Highly Efficient Knowledge Graph Embedding Learning with Orthogonal
Procrustes Analysis
- Authors: Xutan Peng, Guanyi Chen, Chenghua Lin, Mark Stevenson
- Abstract summary: Knowledge Graph Embeddings (KGEs) have been intensively explored in recent years due to their promise for a wide range of applications.
This paper proposes a simple yet effective KGE framework which can reduce the training time and carbon footprint by orders of magnitude.
- Score: 10.154836127889487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph Embeddings (KGEs) have been intensively explored in recent
years due to their promise for a wide range of applications. However, existing
studies focus on improving the final model performance without acknowledging
the computational cost of the proposed approaches, in terms of execution time
and environmental impact. This paper proposes a simple yet effective KGE
framework which can reduce the training time and carbon footprint by orders of
magnitude compared with state-of-the-art approaches, while producing
competitive performance. We highlight three technical innovations: full batch
learning via relational matrices, closed-form Orthogonal Procrustes Analysis
for KGEs, and non-negative-sampling training. In addition, as the first KGE
method whose entity embeddings also store full relation information, our
trained models encode rich semantics and are highly interpretable.
Comprehensive experiments and ablation studies involving 13 strong baselines
and two standard datasets verify the effectiveness and efficiency of our
algorithm.
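For readers unfamiliar with Orthogonal Procrustes Analysis, the sketch below illustrates the closed-form step the abstract refers to: given the head and tail embeddings of all triples sharing one relation (a full relational batch), the optimal orthogonal map between them is obtained directly from an SVD. The numpy code is a minimal illustration with made-up data, not the authors' implementation.

```python
import numpy as np

def procrustes_rotation(H, T):
    """Closed-form Orthogonal Procrustes: find the orthogonal matrix R that
    minimises ||H @ R - T||_F, given head embeddings H and tail embeddings T
    (both shaped [num_triples_for_this_relation, dim])."""
    # SVD of the cross-covariance matrix yields the optimal rotation.
    U, _, Vt = np.linalg.svd(H.T @ T)
    return U @ Vt

# Toy full-batch example: one relation, 5 triples, 4-dimensional embeddings.
rng = np.random.default_rng(0)
heads = rng.normal(size=(5, 4))
tails = rng.normal(size=(5, 4))

R = procrustes_rotation(heads, tails)
assert np.allclose(R @ R.T, np.eye(4), atol=1e-6)   # R is orthogonal
print("alignment error:", np.linalg.norm(heads @ R - tails))
```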
Related papers
- CLCE: An Approach to Refining Cross-Entropy and Contrastive Learning for
Optimized Learning Fusion [16.00706418526691]
Cross-Entropy loss (CE) can compromise model generalization and stability.
We introduce a novel approach named CLCE, which integrates Contrastive Learning with CE.
We show that CLCE significantly outperforms CE in Top-1 accuracy across twelve benchmarks.
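As an illustration of the kind of fused objective the CLCE summary describes, a minimal numpy sketch combining softmax cross-entropy with a supervised contrastive term is given below; the specific contrastive formulation and the weighting coefficient are assumptions, not the paper's exact loss.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Standard softmax cross-entropy over a batch.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def supervised_contrastive(features, labels, temperature=0.1):
    # Pull together L2-normalised features that share a label, push apart the rest.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    # Average log-probability over each anchor's positives.
    return -(np.where(same, log_prob, 0).sum(axis=1) / np.maximum(same.sum(axis=1), 1)).mean()

def clce_style_loss(logits, features, labels, alpha=0.5):
    # Hypothetical fusion: weighted sum of the two objectives.
    return alpha * cross_entropy(logits, labels) + (1 - alpha) * supervised_contrastive(features, labels)
```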
arXiv Detail & Related papers (2024-02-22T13:45:01Z) - Contextualization Distillation from Large Language Model for Knowledge
Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
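A rough sketch of the triplet-to-context instruction the summary describes might look as follows; the prompt wording and the function name are illustrative assumptions rather than the paper's actual prompt.

```python
# Illustrative only: one way to phrase the triplet-to-context instruction the
# summary describes. The exact prompt and the LLM interface are assumptions.
def build_contextualization_prompt(head, relation, tail):
    return (
        "Rewrite the following knowledge-graph triplet as a short, fluent "
        "paragraph that makes the relationship explicit and adds relevant "
        f"background context.\nTriplet: ({head}, {relation}, {tail})\nParagraph:"
    )

print(build_contextualization_prompt("Marie Curie", "award_received", "Nobel Prize in Physics"))
```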
arXiv Detail & Related papers (2024-01-28T08:56:49Z) - A Comprehensive Study on Knowledge Graph Embedding over Relational
Patterns Based on Rule Learning [49.09125100268454]
Knowledge Graph Embedding (KGE) has proven to be an effective approach to solving the Knowledge Graph Completion (KGC) task.
Relational patterns are an important factor in the performance of KGE models.
We introduce a training-free method to enhance KGE models' performance over various relational patterns.
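For context, a "relational pattern" refers to properties such as symmetry, antisymmetry, inversion, or composition. The toy sketch below illustrates one such pattern (symmetry) with a generic translation-style scorer; the two-direction averaging rule is only a guess at what a training-free adjustment could look like, not the paper's method.

```python
import numpy as np

# Toy illustration of a relational pattern (symmetry) with a generic
# translation-style scorer; the averaging rule is a guess, not the paper's
# actual training-free enhancement.
def score(h, r, t):
    return -np.linalg.norm(h + r - t)          # TransE-style plausibility

def symmetric_score(h, r, t):
    # For a relation known to be symmetric, score both directions and average.
    return 0.5 * (score(h, r, t) + score(t, r, h))
```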
arXiv Detail & Related papers (2023-08-15T17:30:57Z) - Deep Active Ensemble Sampling For Image Classification [8.31483061185317]
Active learning frameworks aim to reduce the cost of data annotation by actively requesting the labeling for the most informative data points.
Existing approaches include uncertainty-based techniques, geometric methods, and implicit combinations of uncertainty-based and geometric approaches.
We present an innovative integration of recent progress in both uncertainty-based and geometric frameworks to enable an efficient exploration/exploitation trade-off in the sample selection strategy.
Our framework provides two advantages: (1) accurate posterior estimation, and (2) a tunable trade-off between computational overhead and accuracy.
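A generic sketch of combining uncertainty-based and geometric criteria is shown below: pick a pool of high-entropy points, then spread the selected batch out in feature space. This is an illustrative baseline, not the paper's ensemble-based posterior estimator.

```python
import numpy as np

def entropy_scores(probs):
    # Predictive-entropy uncertainty for each unlabelled point.
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_batch(probs, features, k):
    """Pick from the most uncertain points, then greedily spread the batch out
    in feature space (a simple exploration/exploitation trade-off)."""
    uncertain = np.argsort(-entropy_scores(probs))[: 5 * k]   # exploitation pool
    chosen = [uncertain[0]]
    for _ in range(k - 1):                                     # exploration: farthest-first
        dists = np.min(
            np.linalg.norm(features[uncertain][:, None] - features[chosen][None], axis=-1),
            axis=1,
        )
        chosen.append(uncertain[int(np.argmax(dists))])
    return chosen
```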
arXiv Detail & Related papers (2022-10-11T20:20:20Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - ProjB: An Improved Bilinear Biased ProjE model for Knowledge Graph
Completion [1.5576879053213302]
This work builds on the ProjE KGE model, chosen for its low computational complexity and high potential for improvement.
Experimental results on benchmark Knowledge Graphs (KGs) such as FB15K and WN18 show that the proposed approach outperforms state-of-the-art models on the entity prediction task.
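As a reminder of the ProjE family this work extends, a rough scorer in that style is sketched below; ProjB's bilinear bias refinements are not reproduced, and all parameter names are illustrative.

```python
import numpy as np

def proje_style_scores(e, r, candidates, d_e, d_r, b_c, b_p):
    """Rough sketch of a ProjE-style scorer: combine a head entity e with a
    relation r, then rank every candidate tail entity. ProjB's bilinear and
    bias refinements are not reproduced here."""
    combined = np.tanh(d_e * e + d_r * r + b_c)     # element-wise combination layer
    return candidates @ combined + b_p               # one score per candidate entity
```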
arXiv Detail & Related papers (2022-08-15T18:18:05Z) - Start Small, Think Big: On Hyperparameter Optimization for Large-Scale
Knowledge Graph Embeddings [4.3400407844815]
We introduce GraSH, an efficient multi-fidelity HPO algorithm for large-scale knowledge graphs.
GraSH obtains state-of-the-art results on large graphs at low cost.
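A generic multi-fidelity successive-halving loop in the spirit of this approach is sketched below, where low fidelity could mean a small subgraph or few training epochs; the evaluate function and the halving schedule are assumptions, not GraSH's exact procedure.

```python
def successive_halving(configs, evaluate, rounds=3, keep=0.5):
    """Generic successive-halving loop for multi-fidelity HPO: evaluate many
    configurations cheaply, keep the best fraction, and re-evaluate survivors
    at a higher fidelity. `evaluate(config, fidelity)` is an assumed
    user-supplied function returning a validation score."""
    fidelity = 1.0 / (2 ** (rounds - 1))        # start at the cheapest budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: evaluate(c, fidelity), reverse=True)
        survivors = scored[: max(1, int(len(scored) * keep))]
        fidelity = min(1.0, fidelity * 2)        # double the budget each round
    return survivors[0]
```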
arXiv Detail & Related papers (2022-07-11T16:07:16Z) - Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding [20.49583906923656]
Confidence-aware Self-Knowledge Distillation learns from the model itself to enhance KGE in a low-dimensional space.
A specific semantic module is developed to filter reliable knowledge by estimating the confidence of previously learned embeddings.
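A hypothetical sketch of a confidence-weighted self-distillation term is given below: the model distils from its own earlier predictions, down-weighting triples the earlier snapshot was unsure about. The sigmoid confidence estimate is an assumption, not the paper's semantic module.

```python
import numpy as np

def confidence_weighted_distillation(student_scores, teacher_scores, margin=0.0):
    """Hypothetical sketch: distil from the model's own earlier predictions,
    down-weighting triples the earlier model was unsure about."""
    confidence = 1.0 / (1.0 + np.exp(-(teacher_scores - margin)))   # sigmoid confidence
    return np.mean(confidence * (student_scores - teacher_scores) ** 2)
```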
arXiv Detail & Related papers (2022-06-07T01:49:22Z) - Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline that adds no extra computational cost.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
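The length re-scaling idea can be sketched as follows: scale each predicted novel-class weight vector so its norm matches the average norm of the pretrained base-class weights. The exact ALR formulation may differ.

```python
import numpy as np

def adaptive_length_rescale(novel_weights, base_weights):
    """Sketch of the length re-scaling idea: match each novel-class weight
    vector's norm to the mean norm of the pretrained base-class weights."""
    target_norm = np.linalg.norm(base_weights, axis=1).mean()
    norms = np.linalg.norm(novel_weights, axis=1, keepdims=True)
    return novel_weights * (target_norm / np.maximum(norms, 1e-12))
```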
arXiv Detail & Related papers (2022-03-23T06:24:31Z) - DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
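The sketch below illustrates one way to parameterise a weighted adjacency matrix that is acyclic by construction, using node potentials so that edges only run from lower to higher potential; it captures the spirit of learning in DAG space directly but is not the paper's exact formulation.

```python
import numpy as np

def potential_parameterised_dag(edge_weights, potentials):
    """Illustrative acyclic parameterisation: keep an edge i -> j only when the
    node potential increases (p[j] > p[i]). Because potentials cannot strictly
    increase around a loop, the resulting weighted adjacency is a DAG by
    construction. Not the paper's exact formulation."""
    p = np.asarray(potentials)
    mask = np.maximum(p[None, :] - p[:, None], 0.0)   # ReLU(p_j - p_i)
    np.fill_diagonal(mask, 0.0)
    return edge_weights * mask
```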
arXiv Detail & Related papers (2021-06-14T07:11:36Z) - RelWalk A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
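A generic relation-specific scoring function of the kind described might be sketched as below; the matrices R_head and R_tail and the inner-product form are illustrative assumptions, not the functional form derived in the paper.

```python
import numpy as np

def relation_score(h, t, R_head, R_tail):
    """Generic sketch of a relation-specific scoring function: transform head
    and tail with relation matrices and compare them. The functional form
    derived in RelWalk may differ."""
    return float(np.dot(R_head @ h, R_tail @ t))
```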
arXiv Detail & Related papers (2021-01-25T13:31:29Z)