How Good are LLMs at Relation Extraction under Low-Resource Scenario? Comprehensive Evaluation
- URL: http://arxiv.org/abs/2406.11162v2
- Date: Wed, 26 Jun 2024 01:43:15 GMT
- Title: How Good are LLMs at Relation Extraction under Low-Resource Scenario? Comprehensive Evaluation
- Authors: Dawulie Jinensibieke, Mieradilijiang Maimaiti, Wentao Xiao, Yuanhang Zheng, Xiaobo Wang,
- Abstract summary: This paper constructs low-resource relation extraction datasets in 10 low-resource languages (LRLs) across three regions (Central Asia, Southeast Asia and the Middle East).
The corpora are constructed by translating the original publicly available English RE datasets (NYT10, FewRel and CrossRE) using effective multilingual machine translation.
Then, we use the language perplexity (PPL) to filter out the low-quality data from the translated datasets.
- Score: 7.151108031568037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relation Extraction (RE) serves as a crucial technology for transforming unstructured text into structured information, especially within the framework of Knowledge Graph development. Its importance is emphasized by its essential role in various downstream tasks. Besides conventional RE methods based on neural networks and pre-trained language models, large language models (LLMs) are also utilized in RE research. However, on low-resource languages (LRLs), both conventional RE methods and LLM-based methods perform poorly due to data scarcity. To this end, this paper constructs low-resource relation extraction datasets in 10 LRLs across three regions (Central Asia, Southeast Asia and the Middle East). The corpora are constructed by translating the original publicly available English RE datasets (NYT10, FewRel and CrossRE) using effective multilingual machine translation. Then, we use language perplexity (PPL) to filter out low-quality data from the translated datasets. Finally, we conduct an empirical study and validate the performance of several open-source LLMs on these generated LRL RE datasets.
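The pipeline above (machine translation followed by perplexity-based filtering) is only described at a high level; below is a minimal sketch of how such PPL filtering could be implemented with an off-the-shelf causal language model. The model name, threshold, and helper functions are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of perplexity (PPL) filtering for machine-translated sentences.
# The model name, threshold, and function names are illustrative assumptions,
# not the configuration used in the paper.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"        # assumption: in practice, a multilingual causal LM covering the target LRL
PPL_THRESHOLD = 200.0      # assumption: threshold would be tuned per language/dataset

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_ppl(text: str) -> float:
    """Compute the perplexity of a single sentence under the causal LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels == input_ids makes the model return the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def filter_translations(sentences: list[str]) -> list[str]:
    """Keep only translated sentences whose perplexity falls below the threshold."""
    return [s for s in sentences if sentence_ppl(s) < PPL_THRESHOLD]
```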
Related papers
- Think Carefully and Check Again! Meta-Generation Unlocking LLMs for Low-Resource Cross-Lingual Summarization [108.6908427615402]
Cross-lingual summarization (CLS) aims to generate a summary for the source text in a different target language.
Currently, instruction-tuned large language models (LLMs) excel at various English tasks.
Recent studies have shown that LLMs' performance on CLS tasks remains unsatisfactory even with few-shot settings.
arXiv Detail & Related papers (2024-10-26T00:39:44Z) - Quality or Quantity? On Data Scale and Diversity in Adapting Large Language Models for Low-Resource Translation [62.202893186343935]
We explore what it would take to adapt Large Language Models for low-resource languages.
We show that parallel data is critical during both pre-training and Supervised Fine-Tuning (SFT).
Our experiments with three LLMs across two low-resourced language groups reveal consistent trends, underscoring the generalizability of our findings.
arXiv Detail & Related papers (2024-08-23T00:59:38Z) - Meta In-Context Learning Makes Large Language Models Better Zero and Few-Shot Relation Extractors [9.881102419679673]
Micre (Meta In-Context learning of LLMs for Relation Extraction) is a new meta-training framework for zero- and few-shot relation extraction.
We show that Micre can transfer relation semantic knowledge via relation label names during inference on target RE datasets (a generic sketch of this prompting style appears after this list).
arXiv Detail & Related papers (2024-04-27T07:06:39Z) - Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation [128.01050030936028]
We propose an information refinement training method named InFO-RAG.
InFO-RAG is low-cost and general across various tasks.
It improves the performance of LLaMA2 by an average of 9.39% relative points.
arXiv Detail & Related papers (2024-02-28T08:24:38Z) - Small Language Model Is a Good Guide for Large Language Model in Chinese Entity Relation Extraction [13.344709924683471]
In this paper, we propose SLCoLM, a model collaboration framework, to mitigate the data long-tail problem.
We use the "Training-Guide-Predict" strategy to combine the strengths of pre-trained language models (PLMs) and large language models (LLMs).
Our experiments on a RE dataset rich in relation types show that the approach in this paper facilitates RE of long-tail relation types.
arXiv Detail & Related papers (2024-02-22T08:26:56Z) - ExaRanker-Open: Synthetic Explanation for IR using Open-Source LLMs [60.81649785463651]
We introduce ExaRanker-Open, where we adapt and explore the use of open-source language models to generate explanations.
Our findings reveal that incorporating explanations consistently enhances neural rankers, with benefits escalating as the LLM size increases.
arXiv Detail & Related papers (2024-02-09T11:23:14Z) - Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows retrieval models (RMs) to expand knowledge in queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-12T11:58:15Z) - In-Context Retrieval-Augmented Language Models [28.23702459322163]
We show that In-Context RALM builds on off-the-shelf general purpose retrievers to provide surprisingly large LM gains across model sizes and diverse corpora.
We conclude that In-Context RALM has considerable potential to increase the prevalence of LM grounding.
arXiv Detail & Related papers (2023-01-31T20:26:16Z) - Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study [51.33182775762785]
This paper presents an empirical study to build relation extraction systems in low-resource settings.
We investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; and (iii) data augmentation technologies and self-training to generate more labeled in-domain data.
arXiv Detail & Related papers (2022-10-19T15:46:37Z) - Cross-Lingual Relation Extraction with Transformers [10.03287972980716]
We propose a cross-lingual relation extraction (RE) approach that does not require any human annotation in a target language or any cross-lingual resources.
We develop deep Transformer-based RE models with a novel encoding scheme that can effectively encode both entity location and entity type information.
Our models, when trained with English data, outperform several deep neural network-based English RE models.
arXiv Detail & Related papers (2020-10-16T22:23:37Z)
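Several entries above (e.g., Micre and the low-resource RE benchmark) evaluate LLMs by prompting them with relation label names and a handful of labeled demonstrations. Below is a generic, illustrative sketch of such a zero/few-shot RE prompt; the label set, demonstrations, and template are assumptions rather than any listed paper's actual format.

```python
# Generic sketch of a few-shot in-context relation-extraction prompt.
# The relation labels, demonstrations, and template are illustrative assumptions,
# not the prompt used by any of the papers listed above.
RELATION_LABELS = ["place_of_birth", "founded_by", "capital_of", "no_relation"]

DEMONSTRATIONS = [
    ("Marie Curie was born in Warsaw.", "Marie Curie", "Warsaw", "place_of_birth"),
    ("Honda was founded by Soichiro Honda.", "Honda", "Soichiro Honda", "founded_by"),
]

def build_re_prompt(sentence: str, head: str, tail: str) -> str:
    """Assemble a few-shot prompt asking an LLM to pick a relation label."""
    lines = [
        "Classify the relation between the two entities in each sentence.",
        f"Possible relations: {', '.join(RELATION_LABELS)}.",
        "",
    ]
    # Few-shot demonstrations: each one shows a sentence, its entity pair, and the gold label.
    for demo_sent, demo_head, demo_tail, demo_label in DEMONSTRATIONS:
        lines += [
            f"Sentence: {demo_sent}",
            f"Entities: {demo_head}; {demo_tail}",
            f"Relation: {demo_label}",
            "",
        ]
    # Query instance: the LLM is expected to complete the final "Relation:" field.
    lines += [
        f"Sentence: {sentence}",
        f"Entities: {head}; {tail}",
        "Relation:",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_re_prompt("Astana is the capital of Kazakhstan.", "Astana", "Kazakhstan"))
```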