Grasping the Essentials: Tailoring Large Language Models for Zero-Shot Relation Extraction
- URL: http://arxiv.org/abs/2402.11142v1
- Date: Sat, 17 Feb 2024 00:20:06 GMT
- Title: Grasping the Essentials: Tailoring Large Language Models for Zero-Shot Relation Extraction
- Authors: Sizhe Zhou, Yu Meng, Bowen Jin, Jiawei Han
- Abstract summary: Relation extraction (RE) aims to identify semantic relationships between entities mentioned in texts.
Few-shot learning settings may offer incomplete and biased supervision for understanding target relation semantics.
We study the definition-only zero-shot RE setting, where only relation definitions expressed in natural language are used to train an RE model.
- Score: 36.627683488532234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relation extraction (RE), a crucial task in NLP, aims to identify semantic
relationships between entities mentioned in texts. Despite significant
advancements in this field, existing models typically rely on extensive
annotated data for training, which can be both costly and time-consuming to
acquire. Moreover, these models often struggle to adapt to new or unseen
relationships. In contrast, few-shot learning settings, which aim to reduce
annotation requirements, may offer incomplete and biased supervision for
understanding target relation semantics, leading to degraded and unstable
performance. To provide the model with accurate and explicit descriptions of
the relation types while minimizing annotation requirements, we study the
definition-only zero-shot RE setting, where only relation definitions
expressed in natural language are used to train an RE model. Motivated by the
strong synthetic data generation power of LLMs, we propose a framework, REPaL,
which consists of three stages: (1) We utilize LLMs to generate initial seed
instances based on relation definitions and an unlabeled corpus. (2) We
fine-tune a bidirectional Small Language Model (SLM) using these initial seeds
to learn the relations for the target domain. (3) We enhance pattern coverage
and mitigate bias resulting from the limited number of initial seeds by
incorporating feedback acquired from the SLM's predictions on the unlabeled corpus. To
accomplish this, we leverage the multi-turn conversation ability of LLMs to
generate new instances in follow-up dialogues. Experiments on two datasets show
that REPaL achieves substantially better zero-shot performance than baseline
methods.
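The three-stage loop above lends itself to a compact sketch. The following minimal Python illustration follows the abstract's description; the prompts, helper functions, and model behaviors are stand-in assumptions for exposition, not the authors' released implementation.

```python
# Sketch of the three-stage REPaL loop described in the abstract. The
# prompts, helpers, and model choices are illustrative assumptions.
import random

def llm_generate(dialogue: list) -> list:
    """Stand-in for a multi-turn chat-LLM call; a real implementation
    would send `dialogue` to an LLM API and split the reply into lines."""
    return [f"Org{i} was founded by Person{i}." for i in range(10)]

class DummySLM:
    """Stand-in for a fine-tuned bidirectional SLM (e.g. a RoBERTa
    relation classifier); predict_proba would return P(relation holds)."""
    def predict_proba(self, sentence: str) -> float:
        return random.random()

def finetune_slm(instances: list) -> DummySLM:
    """Stand-in for a standard fine-tuning loop over (text, label) pairs."""
    return DummySLM()

definition = "founded_by(ORG, PER): organization ORG was founded by person PER."
unlabeled_corpus = ["Apple was founded by Steve Jobs in 1976.",
                    "Paris is the capital of France."]

# Stage 1: generate initial seed instances from the relation definition.
dialogue = [{"role": "user", "content":
             f"Relation definition: {definition}\n"
             "Write 10 sentences expressing this relation, one per line."}]
seeds = [(s, 1) for s in llm_generate(dialogue)]

# Stage 2: fine-tune the SLM on the initial seeds.
slm = finetune_slm(seeds)

# Stage 3: feed the SLM's predictions on the unlabeled corpus back into
# the *same* dialogue, so the LLM can target patterns the seeds missed.
for _ in range(3):
    hard = [s for s in unlabeled_corpus
            if abs(slm.predict_proba(s) - 0.5) < 0.2]  # uncertain cases
    dialogue.append({"role": "user", "content":
                     "The current model is unsure about these sentences:\n"
                     + "\n".join(hard)
                     + "\nGenerate 10 new labeled instances with similar patterns."})
    seeds += [(s, 1) for s in llm_generate(dialogue)]
    slm = finetune_slm(seeds)
```

The key design point the sketch preserves is stage (3): feedback is appended to the same conversation, so the LLM can condition on what it has already generated and diversify the next batch of instances.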
Related papers
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Unlocking the Potential of Model Merging for Low-Resource Languages [66.7716891808697]
Adapting large language models to new languages typically involves continual pre-training (CT) followed by supervised fine-tuning (SFT).
We propose model merging as an alternative for low-resource languages, combining models with distinct capabilities into a single model without additional training.
Experiments based on Llama-2-7B demonstrate that model merging effectively endows LLMs for low-resource languages with task-solving abilities, outperforming CT-then-SFT in scenarios with extremely scarce data.
arXiv Detail & Related papers (2024-07-04T15:14:17Z)
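Model merging of this kind can be prototyped in a few lines. Below is a minimal sketch of uniform parameter averaging between two homologous checkpoints; the paths are placeholders, and the paper's actual merging scheme may be more elaborate than a plain 50/50 average.

```python
# Merge two checkpoints of the same architecture by averaging weights;
# paths are placeholders, and a 50/50 average is the simplest scheme.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/language-CT-checkpoint")
donor = AutoModelForCausalLM.from_pretrained("path/to/task-SFT-checkpoint")
donor_state = donor.state_dict()

merged = {}
with torch.no_grad():
    for name, param in base.state_dict().items():
        # Both models must share the same architecture (e.g. Llama-2-7B)
        # for parameter-wise averaging to be well defined.
        merged[name] = 0.5 * param + 0.5 * donor_state[name]

base.load_state_dict(merged)
base.save_pretrained("merged-model")
```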
- Relation Extraction with Fine-Tuned Large Language Models in Retrieval Augmented Generation Frameworks [0.0]
Relation Extraction (RE) is crucial for converting unstructured data into structured formats like Knowledge Graphs (KGs).
Recent studies leveraging pre-trained language models (PLMs) have shown significant success in this area.
This work explores the performance of fine-tuned LLMs and their integration into a Retrieval-Augmented Generation (RAG)-based RE approach.
arXiv Detail & Related papers (2024-06-20T21:27:57Z)
- Factual Dialogue Summarization via Learning from Large Language Models [35.63037083806503]
Large language model (LLM)-based automatic text summarization models generate more factually consistent summaries.
We employ zero-shot learning to extract symbolic knowledge from LLMs, generating factually consistent (positive) and inconsistent (negative) summaries.
Our approach achieves better factual consistency while maintaining coherence, fluency, and relevance, as confirmed by various automatic evaluation metrics.
arXiv Detail & Related papers (2024-06-20T20:03:37Z)
- TriSum: Learning Summarization Ability from Large Language Models with Structured Rationale [66.01943465390548]
We introduce TriSum, a framework for distilling large language models' text summarization abilities into a compact, local model.
Our method enhances local model performance on various benchmarks.
It also improves interpretability by providing insights into the summarization rationale.
arXiv Detail & Related papers (2024-03-15T14:36:38Z)
- Exploiting Contextual Target Attributes for Target Sentiment Classification [53.30511968323911]
Existing models for target sentiment classification (TSC) based on pre-trained language models (PTLMs) can be categorized into two groups: 1) fine-tuning-based models that adopt the PTLM as the context encoder; 2) prompting-based models that transfer the classification task to a text/word generation task.
We present a new perspective on leveraging PTLMs for TSC: simultaneously exploiting the merits of both language modeling and explicit target-context interactions via contextual target attributes.
arXiv Detail & Related papers (2023-12-21T11:45:28Z)
- Revisiting Large Language Models as Zero-shot Relation Extractors [8.953462875381888]
Relation extraction (RE) consistently involves a certain degree of labeled or unlabeled data, even under the zero-shot setting.
Recent studies have shown that large language models (LLMs) transfer well to new tasks out-of-the-box simply given a natural language prompt.
This work explores LLMs as zero-shot relation extractors.
arXiv Detail & Related papers (2023-10-08T06:17:39Z)
- Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks.
arXiv Detail & Related papers (2023-09-07T01:35:24Z)
- Continual Contrastive Finetuning Improves Low-Resource Relation Extraction [34.76128090845668]
Relation extraction has been particularly challenging in low-resource scenarios and domains.
Recent literature has tackled low-resource RE by self-supervised learning.
We propose to pretrain and finetune the RE model using consistent contrastive learning objectives.
arXiv Detail & Related papers (2022-12-21T07:30:22Z)
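A contrastive finetuning objective like the one in the entry above can be stated concretely. Here is a generic InfoNCE-style loss over paired sentence embeddings, with in-batch negatives and a standard temperature; this is a common formulation, not necessarily the paper's exact objective.

```python
# Generic InfoNCE-style contrastive loss; temperature and in-batch
# negatives are standard defaults, not the paper's exact setup.
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Row i of `anchors` and `positives` form a positive pair;
    every other row in the batch acts as an in-batch negative."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature                     # [batch, batch] cosine similarities
    labels = torch.arange(a.size(0), device=a.device)  # diagonal entries are positives
    return F.cross_entropy(logits, labels)

# Usage: embed two mentions of the same relation with the RE encoder,
# then loss = info_nce(emb_a, emb_b); loss.backward()
```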
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- Improving Distantly Supervised Relation Extraction by Natural Language Inference [9.181270251524866]
We propose a novel DSRE-NLI framework, which considers both distant supervision from existing knowledge bases and indirect supervision from pretrained language models for other tasks.
DSRE-NLI energizes an off-the-shelf natural language inference (NLI) engine with a semi-automatic relation verbalization (SARV) mechanism to provide indirect supervision.
With two simple and effective data consolidation strategies, the quality of training data is substantially improved.
arXiv Detail & Related papers (2022-07-31T02:48:34Z)
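The indirect-supervision idea in the last entry, which also echoes this page's zero-shot RE theme, can be tried with an off-the-shelf checkpoint: verbalize the candidate relation as a hypothesis and let an NLI model judge entailment against the sentence. The checkpoint and verbalization below are illustrative choices, not DSRE-NLI's exact pipeline.

```python
# Score a verbalized relation hypothesis against a sentence with an
# off-the-shelf NLI model; checkpoint and template are illustrative.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

sentence = "Apple was founded by Steve Jobs in 1976."  # premise
hypothesis = "Apple was founded by Steve Jobs."        # verbalized founded_by(Apple, Steve Jobs)

# The text-classification pipeline accepts premise/hypothesis pairs
# as {"text": ..., "text_pair": ...}.
result = nli({"text": sentence, "text_pair": hypothesis})
print(result)  # e.g. {'label': 'ENTAILMENT', 'score': ...} -> relation holds
```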