Inductive Entity Representations from Text via Link Prediction
- URL: http://arxiv.org/abs/2010.03496v3
- Date: Wed, 14 Apr 2021 09:38:45 GMT
- Title: Inductive Entity Representations from Text via Link Prediction
- Authors: Daniel Daza, Michael Cochez, Paul Groth
- Abstract summary: We propose a holistic evaluation protocol for entity representations learned via a link prediction objective.
We consider the inductive link prediction and entity classification tasks.
We also consider an information retrieval task for entity-oriented search.
- Score: 4.980304226944612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs (KG) are of vital importance for multiple applications on
the web, including information retrieval, recommender systems, and metadata
annotation. Regardless of whether they are built manually by domain experts or
with automatic pipelines, KGs are often incomplete. Recent work has begun to
explore the use of textual descriptions available in knowledge graphs to learn
vector representations of entities in order to perform link prediction.
However, the extent to which these representations learned for link prediction
generalize to other tasks is unclear. This is important given the cost of
learning such representations. Ideally, we would prefer representations that do
not need to be trained again when transferring to a different task, while
retaining reasonable performance.
In this work, we propose a holistic evaluation protocol for entity
representations learned via a link prediction objective. We consider the
inductive link prediction and entity classification tasks, which involve
entities not seen during training. We also consider an information retrieval
task for entity-oriented search. We evaluate an architecture based on a
pretrained language model, that exhibits strong generalization to entities not
observed during training, and outperforms related state-of-the-art methods (22%
MRR improvement in link prediction on average). We further provide evidence
that the learned representations transfer well to other tasks without
fine-tuning. In the entity classification task we obtain an average improvement
of 16% in accuracy compared with baselines that also employ pre-trained models.
In the information retrieval task, we obtain significant improvements of up to
8.8% in NDCG@10 for natural language queries. We thus show that the learned
representations are not limited to KG-specific tasks, and have greater
generalization properties than evaluated in previous work.
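The architecture evaluated here pairs a pretrained language model with a link-prediction objective, so unseen entities can be embedded directly from their textual descriptions. A minimal sketch of that idea follows; the choice of BERT, the embedding dimension, the TransE-style score, and the negative-sampling scheme are illustrative assumptions, not necessarily the paper's exact configuration.

```python
# Sketch of a description-based entity encoder trained with a link-prediction
# objective. Assumptions: BERT as the text encoder, a TransE-style score, and
# a margin ranking loss; the paper's exact setup may differ.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EntityEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", dim=128):
        super().__init__()
        self.lm = AutoModel.from_pretrained(model_name)
        self.proj = nn.Linear(self.lm.config.hidden_size, dim)

    def forward(self, input_ids, attention_mask):
        out = self.lm(input_ids=input_ids, attention_mask=attention_mask)
        # Use the [CLS] state as the description embedding, so entities not
        # seen during training still get embeddings from text alone (inductive).
        return self.proj(out.last_hidden_state[:, 0])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = EntityEncoder()
relations = nn.Embedding(237, 128)  # one vector per relation (e.g. FB15k-237)
loss_fn = nn.MarginRankingLoss(margin=1.0)

def encode(descriptions):
    batch = tokenizer(descriptions, return_tensors="pt",
                      padding=True, truncation=True)
    return encoder(batch["input_ids"], batch["attention_mask"])

def training_step(head_txt, rel_ids, tail_txt, neg_tail_txt):
    h, t, t_neg = encode(head_txt), encode(tail_txt), encode(neg_tail_txt)
    r = relations(rel_ids)
    pos = torch.norm(h + r - t, p=1, dim=-1)      # TransE: lower is better
    neg = torch.norm(h + r - t_neg, p=1, dim=-1)  # corrupted tail as negative
    # Target -1 asks the loss to push the positive score below the negative.
    return loss_fn(pos, neg, torch.full_like(pos, -1.0))

# Because the encoder is text-based, its frozen outputs can also be reused as
# features for entity classification or retrieval without fine-tuning, which
# is the transfer setting the abstract describes.
```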
Related papers
- Exploiting Contextual Uncertainty of Visual Data for Efficient Training of Deep Models [0.65268245109828]
We introduce the notion of contextual diversity for active learning (CDAL).
We propose a data repair algorithm to curate contextually fair data to reduce model bias.
We are working on developing an image retrieval system for wildlife camera trap images and a reliable warning system for poor-quality rural roads.
arXiv Detail & Related papers (2024-11-04T09:43:33Z) - IntCoOp: Interpretability-Aware Vision-Language Prompt Tuning [94.52149969720712]
IntCoOp learns to jointly align attribute-level inductive biases and class embeddings during prompt-tuning.
IntCoOp improves CoOp by 7.35% in average performance across 10 diverse datasets.
arXiv Detail & Related papers (2024-06-19T16:37:31Z) - What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z) - The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning [32.15608637930748]
We show that there exists a trade-off between the two desiderata so that one may not be able to achieve both simultaneously.
We provide analysis using a theoretical data model and show that, while more diverse pre-training data results in more diverse features for different tasks, it places less emphasis on task-specific features.
arXiv Detail & Related papers (2023-02-28T22:14:33Z) - Joint Representations of Text and Knowledge Graphs for Retrieval and Evaluation [15.55971302563369]
A key feature of neural models is that they can produce semantic vector representations of objects (texts, images, speech, etc.) ensuring that similar objects are close to each other in the vector space.
While much work has focused on learning representations for other modalities, there are no aligned cross-modal representations for text and knowledge base elements.
arXiv Detail & Related papers (2023-02-28T17:39:43Z) - Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking [56.80065604034095]
We introduce a kNN approach that re-ranks documents based on their similarity with the query and the documents the user considers relevant.
To evaluate our different integration strategies, we transform four existing information retrieval datasets into the relevance feedback scenario.
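A rough sketch of that kNN re-ranking idea follows; cosine similarity over precomputed embeddings and the interpolation weight alpha are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of kNN-style re-ranking with relevance feedback: score each
# candidate by its similarity to the query plus its mean similarity to the
# documents the user marked relevant.
import numpy as np

def cosine(vec, mat):
    # Cosine similarity between one vector and each row of a matrix.
    return (mat @ vec) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(vec) + 1e-9)

def rerank(query_vec, doc_vecs, relevant_vecs, alpha=0.5):
    q_sim = cosine(query_vec, doc_vecs)              # query-document similarity
    fb_sim = np.mean([cosine(r, doc_vecs) for r in relevant_vecs], axis=0)
    scores = alpha * q_sim + (1 - alpha) * fb_sim    # interpolate both signals
    return np.argsort(-scores)                       # candidate indices, best first
```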
arXiv Detail & Related papers (2022-10-19T16:19:37Z) - Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z) - Improving Knowledge Graph Representation Learning by Structure Contextual Pre-training [9.70121995251553]
We propose a novel pre-training-then-fine-tuning framework for knowledge graph representation learning.
A KG model is pre-trained with a triple classification task, followed by discriminative fine-tuning on specific downstream tasks.
Experimental results demonstrate that the fine-tuned SCoP not only outperforms baselines on a portfolio of downstream tasks but also avoids tedious task-specific model design and parameter training.
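As a hedged illustration of triple-classification pre-training, the minimal stand-in below uses a binary classifier over embeddings; the embedding-plus-MLP model and the dataset sizes are assumptions for illustration, not SCoP's actual architecture.

```python
# Sketch of triple-classification pre-training: a binary classifier judges
# whether (head, relation, tail) is a plausible triple.
import torch
import torch.nn as nn

class TripleClassifier(nn.Module):
    def __init__(self, n_entities, n_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, h, r, t):
        x = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        return self.mlp(x).squeeze(-1)  # plausibility logit

model = TripleClassifier(n_entities=14541, n_relations=237)  # e.g. FB15k-237 sizes
loss_fn = nn.BCEWithLogitsLoss()  # labels: 1 for observed triples, 0 for corrupted
# After pre-training, the learned model is fine-tuned discriminatively
# on each downstream task.
```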
arXiv Detail & Related papers (2021-12-08T02:50:54Z) - Adaptive Attentional Network for Few-Shot Knowledge Graph Completion [16.722373937828117]
Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs.
Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties.
This work proposes an adaptive attentional network for few-shot KG completion by learning adaptive entity and reference representations.
arXiv Detail & Related papers (2020-10-19T16:27:48Z) - Predicting What You Already Know Helps: Provable Self-Supervised Learning [60.27658820909876]
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data.
We show a mechanism that exploits the statistical connections between certain reconstruction-based pretext tasks to guarantee learning a good representation.
We prove that the linear layer yields a small approximation error even for complex ground-truth function classes.
arXiv Detail & Related papers (2020-08-03T17:56:13Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.