To Copy Rather Than Memorize: A Vertical Learning Paradigm for Knowledge
Graph Completion
- URL: http://arxiv.org/abs/2305.14126v1
- Date: Tue, 23 May 2023 14:53:20 GMT
- Title: To Copy Rather Than Memorize: A Vertical Learning Paradigm for Knowledge
Graph Completion
- Authors: Rui Li, Xu Chen, Chaozhuo Li, Yanming Shen, Jianan Zhao, Yujing Wang,
Weihao Han, Hao Sun, Weiwei Deng, Qi Zhang, Xing Xie
- Abstract summary: We extend embedding models by allowing them to explicitly copy target information from related factual triples for more accurate prediction.
We also propose a novel relative-distance-based negative sampling technique (ReD) for more effective optimization.
- Score: 35.05965140700747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Embedding models have shown great power in the knowledge graph
completion (KGC) task. By learning structural constraints for each training
triple, these methods implicitly memorize intrinsic relation rules to infer
missing links. However, this paper points out that multi-hop relation rules
are hard to memorize reliably due to the inherent deficiencies of such an
implicit memorization strategy, causing embedding models to underperform when
predicting links between distant entity pairs. To alleviate this problem, we
present the Vertical Learning Paradigm (VLP), which extends embedding models
by allowing them to explicitly copy target information from related factual
triples for more accurate prediction. Rather than relying solely on implicit
memory, VLP directly provides additional cues that improve the generalization
ability of embedding models, in particular making distant link prediction
significantly easier. Moreover, we propose a novel relative-distance-based
negative sampling technique (ReD) for more effective optimization.
Experiments demonstrate the validity and generality of our proposals on two
standard benchmarks. Our code is available at https://github.com/rui9812/VLP.
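What follows is a minimal, hypothetical PyTorch sketch of the two ideas named in the abstract, not the released VLP implementation: the DistMult-style scorer, the notion of "related" triples (sharing the head entity), the combination weight alpha, and the reading of ReD as hard-negative mining by embedding distance are all illustrative assumptions.

```python
# Hypothetical sketch of (1) combining an implicit embedding score with an
# explicit "copy" cue taken from related factual triples, and (2) a relative-
# distance negative sampler. The scorer, the notion of "related" facts, and
# alpha are illustrative assumptions, not the authors' implementation.
import torch

def implicit_score(h_emb, r_emb, all_ent_emb):
    # DistMult-style stand-in for the "implicit memory" of an embedding model:
    # score(h, r, t) = <h * r, t> for every candidate tail t.
    return (h_emb * r_emb) @ all_ent_emb.T                  # [num_entities]

def copy_cue(head_id, known_triples, num_entities):
    # Explicit cue: boost candidate tails that already appear as targets of
    # facts related to the query (here, crudely, facts sharing the head entity).
    boost = torch.zeros(num_entities)
    for h, _, t in known_triples:
        if h == head_id:
            boost[t] += 1.0
    return boost

def copy_augmented_score(h_emb, r_emb, all_ent_emb, head_id,
                         known_triples, alpha=0.5):
    # Combine implicit memory with the explicit copy cue (alpha is a guess).
    implicit = implicit_score(h_emb, r_emb, all_ent_emb)
    explicit = copy_cue(head_id, known_triples, all_ent_emb.size(0))
    return implicit + alpha * explicit

def relative_distance_negatives(pos_tail, all_ent_emb, k=32):
    # One plausible reading of relative-distance (ReD) sampling: rank all
    # entities by embedding distance to the true tail and keep the closest
    # ones as hard negatives, skipping index 0 (the true tail itself).
    dist = torch.cdist(all_ent_emb[pos_tail:pos_tail + 1], all_ent_emb).squeeze(0)
    return dist.argsort()[1:k + 1]
```

In the actual method, how related factual triples are selected and how the copied evidence is fused with the embedding score are the central design choices; the sketch only fixes one arbitrary instantiation of each so the overall data flow is visible. See https://github.com/rui9812/VLP for the real implementation.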
Related papers
- Improve Vision Language Model Chain-of-thought Reasoning [86.83335752119741]
Chain-of-thought (CoT) reasoning in vision language models (VLMs) is crucial for improving interpretability and trustworthiness.
We show that VLMs trained on short answers do not generalize well to reasoning tasks that require more detailed responses.
arXiv Detail & Related papers (2024-10-21T17:00:06Z)
- Zero-Shot Class Unlearning in CLIP with Synthetic Samples [0.0]
We focus on unlearning within CLIP, a dual vision-language model trained on a massive dataset of image-text pairs.
We apply Lipschitz regularization to the multimodal context of CLIP.
Our forgetting procedure is iterative, where we track accuracy on a synthetic forget set and stop when accuracy falls below a chosen threshold.
arXiv Detail & Related papers (2024-07-10T09:16:14Z)
- Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs [61.04246774006429]
We introduce a black-box prompt optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent.
We observe that our instruction-based prompts generate outputs with 23.7% higher overlap with training data compared to the baseline prefix-suffix measurements.
Our findings show that instruction-tuned models can expose pre-training data as much as their base models, if not more so, and that using instructions proposed by other LLMs opens a new avenue for automated attacks.
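As a purely illustrative aid (not the metric used in the paper above), overlap with training data is often quantified with a simple n-gram overlap between a generated continuation and a suspected training passage:

```python
# Illustrative n-gram overlap between a generated continuation and a reference
# training passage; a generic measurement, not the paper's exact metric.
def ngram_overlap(generated: str, reference: str, n: int = 5) -> float:
    """Fraction of the generated text's n-grams that also occur in the reference."""
    gen = generated.split()
    ref = reference.split()
    if len(gen) < n or len(ref) < n:
        return 0.0
    gen_ngrams = {tuple(gen[i:i + n]) for i in range(len(gen) - n + 1)}
    ref_ngrams = {tuple(ref[i:i + n]) for i in range(len(ref) - n + 1)}
    return len(gen_ngrams & ref_ngrams) / len(gen_ngrams)

# Example: a continuation that reproduces most of a suspected training snippet.
print(ngram_overlap("the quick brown fox jumps over the lazy dog today",
                    "the quick brown fox jumps over the lazy dog"))  # ~0.83
```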
arXiv Detail & Related papers (2024-03-05T19:32:01Z)
- A Condensed Transition Graph Framework for Zero-shot Link Prediction with Large Language Models [22.089751438495956]
We introduce a Condensed Transition Graph Framework for Zero-Shot Link Prediction (CTLP).
CTLP encodes all the paths' information in linear time complexity to predict unseen relations between entities.
Our proposed CTLP method achieves state-of-the-art performance on three standard ZSLP datasets.
arXiv Detail & Related papers (2024-02-16T16:02:33Z)
- iMatching: Imperative Correspondence Learning [5.568520539073218]
We introduce a new self-supervised scheme, imperative learning (IL), for training feature correspondence.
It enables correspondence learning on arbitrary uninterrupted videos without any camera pose or depth labels.
We demonstrate superior performance on tasks including feature matching and pose estimation.
arXiv Detail & Related papers (2023-12-04T18:58:20Z)
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator reliably enriches paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- Phantom Embeddings: Using Embedding Space for Model Regularization in Deep Neural Networks [12.293294756969477]
The strength of machine learning models stems from their ability to learn complex function approximations from data.
Such complex models tend to memorize the training data, which results in poor generalization to test data.
We present a novel approach to regularize the models by leveraging the information-rich latent embeddings and their high intra-class correlation.
arXiv Detail & Related papers (2023-04-14T17:15:54Z)
- Transductive Data Augmentation with Relational Path Rule Mining for Knowledge Graph Embedding [5.603379389073144]
We propose transductive data augmentation by relation path rules and confidence-based weighting of augmented data.
The results and analysis show that our proposed method effectively improves the performance of the embedding model by augmenting data that include true answers or entities similar to them.
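As a rough illustration of relation-path-rule augmentation in general (the rule format, the confidence weighting, and the toy facts below are assumptions, not details from the paper above), a mined two-hop rule can be applied to the training graph and the inferred triples added as weighted extra training data:

```python
# Hypothetical sketch: apply a mined two-hop path rule
# r1(x, y) and r2(y, z) => r_head(x, z) to a training graph and emit the
# inferred triples weighted by the rule's confidence.
from collections import defaultdict

def augment_with_path_rule(triples, rule, confidence):
    r1, r2, r_head = rule
    outgoing = defaultdict(list)                     # (entity, relation) -> tails
    for h, r, t in triples:
        outgoing[(h, r)].append(t)

    existing = set(triples)
    augmented = []
    for (x, r), ys in outgoing.items():
        if r != r1:
            continue
        for y in ys:
            for z in outgoing.get((y, r2), []):
                new_triple = (x, r_head, z)
                if new_triple not in existing:       # only add genuinely new facts
                    existing.add(new_triple)
                    augmented.append((new_triple, confidence))
    return augmented

# Toy example with a soft (confidence 0.8) rule: born_in + located_in => citizen_of.
facts = [("alice", "born_in", "lyon"), ("lyon", "located_in", "france")]
print(augment_with_path_rule(facts, ("born_in", "located_in", "citizen_of"), 0.8))
# -> [(('alice', 'citizen_of', 'france'), 0.8)]
```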
arXiv Detail & Related papers (2021-11-01T14:35:14Z)
- Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting [100.75479161884935]
We propose a novel training paradigm called Remembering for the Right Reasons (RRR).
RRR stores visual model explanations for each example in the buffer and ensures the model has "the right reasons" for its predictions.
We demonstrate how RRR can be easily added to any memory or regularization-based approach and results in reduced forgetting.
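A minimal, hypothetical PyTorch sketch of an explanation-consistency penalty in this spirit (the input-gradient saliency, the L1 penalty, and the weight lam are illustrative choices, not details from the RRR paper):

```python
# Hypothetical sketch of an explanation-consistency penalty in the spirit of RRR.
# Saliency = gradient of the predicted-class logit w.r.t. the input; the saliency
# method, the L1 penalty, and the weight lam are illustrative assumptions.
import torch
import torch.nn.functional as F

def input_saliency(model, x, target_class):
    # Gradient of the target-class logit with respect to the input, kept
    # differentiable (create_graph=True) so the penalty itself can be trained.
    x = x.clone().requires_grad_(True)
    logit = model(x)[0, target_class]
    grad, = torch.autograd.grad(logit, x, create_graph=True)
    return grad

def rrr_style_loss(model, x, target, stored_saliency, lam=1.0):
    # Standard task loss on the replayed buffer example (x has batch size 1) ...
    task_loss = F.cross_entropy(model(x), target)
    # ... plus a penalty if the model's current "reasons" (its saliency map)
    # drift away from the explanation stored in the replay buffer.
    current = input_saliency(model, x, target[0].item())
    return task_loss + lam * F.l1_loss(current, stored_saliency)
```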
arXiv Detail & Related papers (2020-10-04T10:05:27Z)
- Learning Reasoning Strategies in End-to-End Differentiable Proving [50.9791149533921]
Conditional Theorem Provers learn an optimal rule selection strategy via gradient-based optimisation.
We show that Conditional Theorem Provers are scalable and yield state-of-the-art results on the CLUTRR dataset.
arXiv Detail & Related papers (2020-07-13T16:22:14Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
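A minimal, hypothetical sketch of a prototype-level contrastive term in this spirit (the k-means clustering, prototype count, and temperature are illustrative choices, not the PCL implementation):

```python
# Hypothetical sketch of a prototype-level contrastive term in the spirit of PCL:
# cluster the current embeddings, then treat each sample's cluster centroid as
# its positive "prototype". Cluster count and temperature are assumptions.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def prototype_contrastive_loss(embeddings, num_prototypes=10, temperature=0.1):
    emb = F.normalize(embeddings, dim=1)
    # Cluster the (detached) embeddings to obtain prototype vectors.
    km = KMeans(n_clusters=num_prototypes, n_init=10).fit(emb.detach().cpu().numpy())
    prototypes = F.normalize(
        torch.as_tensor(km.cluster_centers_, dtype=emb.dtype, device=emb.device), dim=1)
    assignments = torch.as_tensor(km.labels_, dtype=torch.long, device=emb.device)
    # InfoNCE-style objective: each embedding should be closest to its own prototype.
    logits = emb @ prototypes.T / temperature
    return F.cross_entropy(logits, assignments)
```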
arXiv Detail & Related papers (2020-05-11T09:53:36Z)