A RelEntLess Benchmark for Modelling Graded Relations between Named Entities
- URL: http://arxiv.org/abs/2305.15002v2
- Date: Wed, 31 Jan 2024 03:56:22 GMT
- Title: A RelEntLess Benchmark for Modelling Graded Relations between Named Entities
- Authors: Asahi Ushio, Jose Camacho-Collados and Steven Schockaert
- Abstract summary: We introduce a new benchmark, in which entity pairs have to be ranked according to how much they satisfy a given graded relation.
We find a strong correlation between model size and performance, with smaller language models struggling to outperform a naive baseline.
The results of the largest Flan-T5 and OPT models are remarkably strong, although a clear gap with human performance remains.
- Score: 29.528217625083546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relations such as "is influenced by", "is known for" or "is a competitor of"
are inherently graded: we can rank entity pairs based on how well they satisfy
these relations, but it is hard to draw a line between those pairs that satisfy
them and those that do not. Such graded relations play a central role in many
applications, yet they are typically not covered by existing Knowledge Graphs.
In this paper, we consider the possibility of using Large Language Models
(LLMs) to fill this gap. To this end, we introduce a new benchmark, in which
entity pairs have to be ranked according to how much they satisfy a given
graded relation. The task is formulated as a few-shot ranking problem, where
models only have access to a description of the relation and five prototypical
instances. We use the proposed benchmark to evaluate state-of-the-art relation
embedding strategies as well as several recent LLMs, covering both publicly
available LLMs and closed models such as GPT-4. Overall, we find a strong
correlation between model size and performance, with smaller language models
struggling to outperform a naive baseline. The results of the largest Flan-T5
and OPT models are remarkably strong, although a clear gap with human
performance remains.
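To make the few-shot ranking formulation concrete, the sketch below scores each candidate pair by the probability that an instruction-tuned model answers "yes" when shown the relation description and the five prototypical instances, then ranks pairs by that score. The prompt wording, the google/flan-t5-small checkpoint and the yes-probability scoring rule are illustrative assumptions, not the paper's exact protocol.

```python
# Illustrative sketch only: prompt format and scoring rule are assumptions,
# not the benchmark's official protocol.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

def build_prompt(description, prototypes, pair):
    examples = "\n".join(f"- {h} / {t}" for h, t in prototypes)
    head, tail = pair
    return (f"Relation: {description}\n"
            f"Pairs that satisfy it:\n{examples}\n"
            f"Does the pair '{head} / {tail}' satisfy the relation? yes or no:")

@torch.no_grad()
def yes_logprob(prompt):
    # Log-probability of "yes" as the first decoded token.
    enc = tokenizer(prompt, return_tensors="pt")
    yes_id = tokenizer("yes", add_special_tokens=False).input_ids[0]
    start = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**enc, decoder_input_ids=start).logits[0, -1]
    return torch.log_softmax(logits, dim=-1)[yes_id].item()

description = "is a competitor of"
prototypes = [("Coca-Cola", "Pepsi"), ("Nike", "Adidas"), ("Apple", "Samsung"),
              ("Boeing", "Airbus"), ("Visa", "Mastercard")]
candidates = [("Microsoft", "Google"), ("Microsoft", "LinkedIn")]
ranking = sorted(candidates, reverse=True,
                 key=lambda p: yes_logprob(build_prompt(description, prototypes, p)))
print(ranking)
```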
Related papers
- Exploring Model Kinship for Merging Large Language Models [52.01652098827454]
We introduce model kinship, the degree of similarity or relatedness between Large Language Models.
We find a relationship between model kinship and the performance gains obtained after model merging.
We propose a new model merging strategy: Top-k Greedy Merging with Model Kinship, which can yield better performance on benchmark datasets.
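As a rough illustration of what kinship-guided greedy merging could look like, the sketch below approximates kinship as cosine similarity between flattened weight vectors and merging as plain parameter averaging; both choices are assumptions made here for illustration, not the paper's definitions.

```python
# Hedged sketch: "kinship" is approximated as cosine similarity of
# flattened weights, and "merging" as parameter averaging.
import numpy as np

def flatten(state):
    # state: dict mapping parameter names to numpy arrays
    return np.concatenate([w.ravel() for w in state.values()])

def kinship(a, b):
    va, vb = flatten(a), flatten(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def average(a, b):
    return {k: (a[k] + b[k]) / 2.0 for k in a}

def greedy_merge(models, scores, k=3):
    """Start from the best-scoring model, then greedily fold in the most
    'kin' candidate among the top-k remaining models (ranked by score)."""
    order = sorted(range(len(models)), key=lambda i: scores[i], reverse=True)
    merged, pool = models[order[0]], list(order[1:])
    while pool:
        best = max(pool[:k], key=lambda i: kinship(merged, models[i]))
        merged = average(merged, models[best])
        pool.remove(best)
    return merged
```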
arXiv Detail & Related papers (2024-10-16T14:29:29Z)
- Efficient Document Ranking with Learnable Late Interactions [73.41976017860006]
Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for query-document relevance in information retrieval.
To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings.
Recently, late-interaction models have been proposed to realize more favorable latency-quality tradeoffs, by using a DE structure followed by a lightweight scorer.
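The following toy numpy sketch contrasts the three scoring regimes; the "encoders" are random stand-ins, so only the structure of each computation is meaningful.

```python
# Toy contrast of CE, DE and late-interaction scoring; encoders are random
# stand-ins, so only the shape of the computation is meaningful.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))       # shared toy projection
HEAD = rng.normal(size=64)          # toy CE scoring head

def encode_tokens(tokens):
    # Stand-in encoder: one 64-d vector per token.
    return rng.normal(size=(len(tokens), 64)) @ W

def cross_encoder_score(query, doc):
    # CE: one joint encoding of the concatenated pair -> scalar relevance.
    joint = encode_tokens(query + ["[SEP]"] + doc).mean(axis=0)
    return float(joint @ HEAD)

def dual_encoder_score(query, doc):
    # DE: factorized single-vector embeddings; the document side can be
    # precomputed offline, and scoring reduces to a dot product.
    q = encode_tokens(query).mean(axis=0)
    d = encode_tokens(doc).mean(axis=0)
    return float(q @ d)

def late_interaction_score(query, doc):
    # Late interaction (ColBERT-style MaxSim): keep token-level embeddings
    # and let a lightweight scorer sum each query token's best match.
    Q, D = encode_tokens(query), encode_tokens(doc)
    return float((Q @ D.T).max(axis=1).sum())
```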
arXiv Detail & Related papers (2024-06-25T22:50:48Z)
- RelBERT: Embedding Relations with Language Models [29.528217625083546]
We propose to extract relation embeddings from relatively small language models.
RelBERT captures relational similarity in a surprisingly fine-grained way.
It is capable of modelling relations that go well beyond what the model has seen during training.
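A minimal sketch of prompt-based relation embeddings in this spirit is shown below; the released RelBERT model is a fine-tuned language model, whereas this sketch substitutes an off-the-shelf roberta-base with mean pooling, so it only illustrates the interface.

```python
# Sketch only: RelBERT itself is a fine-tuned model; an off-the-shelf
# roberta-base with mean pooling stands in here to show the interface.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("roberta-base")
lm = AutoModel.from_pretrained("roberta-base")

@torch.no_grad()
def relation_embedding(head, tail):
    prompt = f"Today, I finally discovered the relation between {head} and {tail}."
    out = lm(**tok(prompt, return_tensors="pt"))
    return out.last_hidden_state[0].mean(dim=0)   # mean-pooled vector

def relational_similarity(pair_a, pair_b):
    a, b = relation_embedding(*pair_a), relation_embedding(*pair_b)
    return torch.cosine_similarity(a, b, dim=0).item()

# Analogy-style check: (Paris, France) should be relationally closer to
# (Tokyo, Japan) than to an unrelated pairing.
print(relational_similarity(("Paris", "France"), ("Tokyo", "Japan")))
print(relational_similarity(("Paris", "France"), ("Einstein", "physics")))
```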
arXiv Detail & Related papers (2023-09-30T08:15:36Z)
- A Comprehensive Study on Knowledge Graph Embedding over Relational Patterns Based on Rule Learning [49.09125100268454]
Knowledge Graph Embedding (KGE) has proven to be an effective approach to the Knowledge Graph Completion (KGC) task.
Relational patterns are an important factor in the performance of KGE models.
We introduce a training-free method to enhance KGE models' performance over various relational patterns.
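To ground what "KGE over relational patterns" means, the snippet below uses the classic TransE scoring function as an example (it is not this paper's method): a pattern such as symmetry, r(x, y) -> r(y, x), is only representable by TransE when the relation vector is (near) zero.

```python
# TransE scores a triple (h, r, t) as -||h + r - t||; symmetry holds only
# when swapping head and tail leaves the score unchanged, i.e. r ~ 0.
import numpy as np

def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
h, t = rng.normal(size=50), rng.normal(size=50)

r_sym = np.zeros(50)              # the only way TransE can encode symmetry
r_generic = rng.normal(size=50)   # a generic (asymmetric) relation vector

print(transe_score(h, r_sym, t) == transe_score(t, r_sym, h))         # True
print(transe_score(h, r_generic, t), transe_score(t, r_generic, h))   # differ
```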
arXiv Detail & Related papers (2023-08-15T17:30:57Z)
- RAILD: Towards Leveraging Relation Features for Inductive Link Prediction In Knowledge Graphs [1.5469452301122175]
Relation Aware Inductive Link preDiction (RAILD) is proposed for Knowledge Graph completion.
RAILD learns representations for both unseen entities and unseen relations.
arXiv Detail & Related papers (2022-11-21T12:35:30Z)
- Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation [45.87125587600661]
Continual relation extraction (CRE) aims to continually learn new relations from a class-incremental data stream.
A CRE model usually suffers from the catastrophic forgetting problem, i.e., performance on old relations degrades severely when the model learns new relations.
To address this issue, we encourage the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism.
arXiv Detail & Related papers (2022-10-10T08:50:48Z)
- Relational Learning with Gated and Attentive Neighbor Aggregator for Few-Shot Knowledge Graph Completion [33.59045268013895]
We propose a global-local framework for few-shot relational learning to address these issues.
For the local stage, a meta-learning-based TransH method is designed to model complex relations and train the model in a few-shot fashion; a sketch of the TransH scoring function follows below.
Our model achieves 5-shot FKGC performance improvements of 8.0% on NELL-One and 2.8% on Wiki-One in terms of Hits@10.
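For reference, TransH projects entities onto a relation-specific hyperplane before the usual translation check; the sketch below shows that score (the meta-learning wrapper is omitted, and the variable names are ours).

```python
# TransH score: project h and t onto the hyperplane with unit normal w_r,
# then apply a TransE-style translation d_r on the hyperplane.
import numpy as np

def transh_score(h, t, w_r, d_r):
    w = w_r / np.linalg.norm(w_r)      # unit normal of the relation hyperplane
    h_perp = h - (w @ h) * w           # projection of the head entity
    t_perp = t - (w @ t) * w           # projection of the tail entity
    return -np.linalg.norm(h_perp + d_r - t_perp)

rng = np.random.default_rng(0)
h, t, w_r, d_r = (rng.normal(size=50) for _ in range(4))
print(transh_score(h, t, w_r, d_r))
```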
arXiv Detail & Related papers (2021-04-27T10:38:44Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information that causes confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Type-augmented Relation Prediction in Knowledge Graphs [65.88395564516115]
We propose a type-augmented relation prediction (TaRP) method, where we apply both the type information and instance-level information for relation prediction.
Our proposed TaRP method achieves significantly better performance than state-of-the-art methods on four benchmark datasets.
arXiv Detail & Related papers (2020-09-16T21:14:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.